Lessons for security from “Social Networks”

There are a couple of blog posts that I’ve read lately that link together for me, and I’m still working through the reasons why. I’d love your feedback or thoughts.

A blogger by the name of Lhooqtius ov Borg has a long screed on why he doesn’t like the “Social Futilities.” Tyler Cowen has a short post on “fake following.”

I think the futility of these systems stems from a poor understanding of how people interact. The systems I like and use (LinkedIn, Dopplr) are very purpose-specific. I really like how Dopplr doesn’t even bother with a friend concept: feel free to tell me where you’re going; I don’t have to reciprocate. It’s useful because it doesn’t try to replace a real, complex relationship (“friendship”) with a narrowly defined shadow of it. (In this vein, Austin Hill links to a great video in his Facebook in Reality post.)

In information technology, we often replace these rich, nuanced concepts with much more narrow, focused replacements which serve some business purpose. Credit granting has gone from an assessment of the person to an assessment of data about the person to an assessment of the person’s data shadow. There are some benefits to this: race is less of a factor than it was. There are also downsides, as data shadows, blurry things, get confused after fraud. (Speaking of credit scoring, BusinessWeek’s “Your lifestyle may hurt credit score” is not to be missed.)

We’ve replaced the idea of ‘identity’ with ‘account.’ (I’ll once again plug Goffman’s The Presentation of Self in Everyday Life for one understanding of how people fluidly and easily manage their personas, and why federated identity will never take off.) Cryptographers model people as Alice and Bob, universal Turing machines. But as Adi Shamir says, “If there’s one thing Alice and Bob are not, it’s universal Turing machines.” Many people have stopped Understanding Privacy and talk only about identity theft, or, if we’re lucky, about fair information practices.

So the key lesson is that the world is a complex, confusing, emergent and chaotic system. Simplifications all come at a cost. Without an understanding of those costs, we risk creating more security systems as frustrating as those “social networks.”

[Update: It turns out Bruce Schneier has a closely related essay in today's LA Times, "The TSA's useless photo ID rules" in which he talks about the dangers of simplifying identity into intent. Had I seen it earlier, I'd have integrated it in.]

Silver Bullet podcast transcript

silver-bullet-podcast.jpg
I know there are a lot of people who prefer text to audio. You can skim text much faster. But there are also places where paper or screens are a pain (like on a bus, or while driving). So I’m excited that the Silver Bullet Podcast does both. It’s a huge investment in addressing a variety of use cases.

All that is to say: you can now read the text of Gary McGraw’s interview of me in PDF form: Adam Shostack on Gary McGraw’s Silver Bullet podcast.

If you missed it, the audio is available at the Silver Bullet site. (Fixed link to point to Silver Bullet.)

Writing a book: The Proposal

To start from the obvious, book publishers are companies, hoping to make money from the books they publish. If you’d like your book to be on this illustrious list, you need an idea for a book that will sell. This post isn’t about how to come up with the idea, it’s about how to sell it.

In a mature market, like the book market, you need some way to convince the publisher that thousands of people will buy your book. Some common ways to do this are to be the first or most comprehensive book on some new technology. You can be the easiest to understand. You can try to become the standard textbook. The big problem with our first proposal was that we wanted to write a book on how managers should make security decisions.

That book didn’t get sold. We might rail against the injustice, or we might accept that publishers know their business better than we do.
Problems with the idea include that there aren’t a whole lot of people who manage security, and managers don’t read a lot of books. (Or so we were told by several publishers.) We didn’t identify a large enough market.

So a proposal for a new book has to do two main things: first, identify a market niche in which your idea will sell; second, convince the publisher that you can write. You do that with an outline and a sample chapter. Those are the core bits of a proposal. There are other parts, and most publishers have web sites describing them, like Addison-Wesley’s “Write for Us” or “Writing for O’Reilly.” Think of each of these as a reason for some mean editor who doesn’t understand you to disqualify your book, and make sure you don’t give them that reason.

With our first proposal, we gave them that reason. Fortunately, both Jessica Goldstein (Addison Wesley) and Carol Long (Wiley) gave us really clear reasons for not wanting our book. We listened, and put some lipstick on our pig of a proposal.

Funny thing is, that lipstick changed our thinking about the book and how we wrote it. For the better.

Writing a book: technical tools & collaboration

When Andrew and I started writing The New School, we both lived in Atlanta, only a few miles apart. We regularly met for beer or coffee to review drafts. After I moved to Seattle, our working process changed a lot. I wanted to talk both about the tools we used, and our writing process.

We started with text editors and a Subversion repository. Andrew, I think, used TextEdit, and I used emacs. This didn’t work very well, and we regularly lost check-in discipline. We also realized that we both wanted to be able to use headings, italics, and other formatting that isn’t easy in plain text.

So we moved to LaTeX. LaTeX is a very powerful, slightly twitchy typesetting system that scientists use. We wrote the draft chapters we used to sell the book in LaTeX, along with the proposal. We really like those drafts; there’s a good deal which survived, and even more that’s gone. We marked up those chapters in person, which became a lot harder when I took a job in Seattle.

As we tried to work in LaTeX, we ran into the same collaboration troubles that Baron Schwartz talked about in “What is it like to write a technical book?”* Lists of comments just didn’t cut it. We needed something more powerful.

Now, there are a few publishers left who take three formats: LaTeX, Word, and camera-ready copy. (As I understand it, most only take Word.) So our choice of formats constrained our choice of software. My experience with OpenOffice was that it didn’t produce perfect Office docs, and we didn’t want to risk getting stuck in a format war with AW. So we moved to Office 2004 for the Mac, and it worked pretty well for writing and revising. Ironically, I was the one who resisted Word most strongly; I’m a real fan of simple file formats that you can read with a variety of tools. We used iChat’s voice chat feature to talk through things, and Andrew flew up to Seattle once for a gruelingly long weekend of editing.

That worked pretty well until we hit technical reviews and production. Technical reviews involved sending the draft out to a bunch of people, who then commented on it, usually using Word’s comment feature. I aggregated all those into one file and started editing it. That’s when we ran into performance problems: a 20-page doc with 300-400 comments and edits was slow.

Fortunately, assimilation has its privileges. I was able to get us into the Office 2008 beta program, which ran almost flawlessly for us. We did the final production edits with Office 2008, iChat, and one other key tool: my Brother HL-5140 printer. It was a workhorse, and the huge stacks of paper that I worked with all came out of a single cartridge.

*I think that’s the right URL. He has some silly anti-spam software that can’t tell the difference between GET and POST and complains about not having a referer: header on GET.

In the land of the blind…

land-of-the-blind.jpg
“PCI DSS Position on Patching May Be Unjustified”:

Verizon Business recently posted an excellent article on their blog about security patching. As someone who just read The New School of Information Security (an important book that all information security professionals should read), I thought it was refreshing to see someone take an evidence-based approach to information security controls.

First, thanks Jeff! Second, I was excited by the Verizon report precisely because of what’s now starting to happen. I wrote, “Verizon has just catapulted themselves into position as a player who can shape security. That’s because of their willingness to provide data.” Jeff is now using that data to test the PCI standard, and finds that some of its best practices don’t make as much sense as the authors of PCI-DSS might have thought.

That’s the good. Verizon gets credibility because Jeff relies on their numbers to make a point. And in this case, I think that Jeff is spot on.

I did want to address something else relating to patching in the Verizon report. Russ Cooper wrote in “Patching Conundrum” on the Verizon Security Blog:

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates.

The trouble with this is that the assessment of patching is done by

…[interviewing] the key person responsible for internal security (CSO) in just over 300 companies for which we had already established a multi-year data breach and malcode history. We asked the CSO to rate how well each of dozens of countermeasures were actually deployed in his or her enterprise on a 0 to 5 scale. A score of “zero” meant that the countermeasure was not in use. A score of “5” meant that the countermeasure was deployed and managed “the best that the CSO could imagine it being deployed in any similar company in the world.” A score of “3” represented what the CSO considered an average deployment of that particular countermeasure.

So let’s take two CSOs, analytical Alice and boastful Bob. Analytical Alice thinks that her patching program is pretty good. Her organization has strong inventory management, good change control, and rolls out patches well. She listens carefully, and most of her counterparts say similar things. So she gives herself a “3.” Boastful Bob, meanwhile, has exactly the same program in place, but thinks a lot about how hard he’s worked to get those things in place. He can’t imagine anyone having a better process ‘in the real world,’ and so gives himself a 5.
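The Alice-and-Bob problem isn’t just rhetorical; rating noise can wash out a real statistical effect. Here’s a toy simulation (my own sketch, with made-up numbers; nothing here comes from Verizon’s study) in which better patching genuinely reduces breaches, but self-ratings drift the way Alice’s and Bob’s do:

```python
import random

# Toy model (not Verizon's data or method): every company has a true
# patching quality from 1 to 5, and better patching genuinely lowers
# breach probability. "Biased" CSOs report that true quality shifted by
# up to two points either way, Alice-and-Bob style, clamped to 1-5.
def breach_gap(n=300, biased=False):
    reported, breached = [], []
    for _ in range(n):
        quality = random.randint(1, 5)
        p_breach = 0.5 - 0.08 * quality  # true effect: 0.42 down to 0.10
        rating = quality
        if biased:
            rating = min(5, max(1, quality + random.choice([-2, -1, 0, 1, 2])))
        reported.append(rating)
        breached.append(random.random() < p_breach)

    def rate(group):
        hits = [b for r, b in zip(reported, breached) if r in group]
        return sum(hits) / max(1, len(hits))

    # Gap in breach rate between low-rated (1-2) and high-rated (4-5) firms.
    return rate({1, 2}) - rate({4, 5})

random.seed(42)
honest = sum(breach_gap() for _ in range(200)) / 200
noisy = sum(breach_gap(biased=True) for _ in range(200)) / 200
```

With accurate ratings, the breach-rate gap between low- and high-rated companies is plainly visible; with noisy ratings, the measured gap shrinks even though every company’s true risk is unchanged. That’s one way “no statistically significant difference” can coexist with a real effect.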

[Update 2: I want to clarify that I didn't mean that Alice and Bob were unaware of their own state, but that they lack data about the state of many other organizations. Without that data, it's hard for them to place themselves comparatively.]

This phenomenon doesn’t just affect CSOs. There’s some fairly famous research entitled “Unskilled and Unaware of It,” or “Why the Unskilled Are Unaware”:

Five studies demonstrated that poor performers lack insight into their shortcomings even in real world settings and when given incentives to be accurate. An additional meta-analysis showed that it was lack of insight into their errors (and not mistaken assessments of their peers) that led to overly optimistic social comparison estimates among poor performers.

Now, the Verizon study could have overcome this by carefully defining what a 1-5 meant for patching. Did it? We don’t actually know. To be perfectly fair, there’s not enough information in the report to make a call on that. I hope that they’ll make that more clear in the future.

Candidly, though, I don’t want to get wrapped around the axle on this question. The Verizon study (as Jeff Lowder points out) gives us enough data to take on questions which have been opaque. That’s a huge step forward, and in the land of the blind, it’s impressive what a one-eyed man can accomplish. I’m hopeful that as they’ve opened up, we’ll have more and more data, and more critiques of that data. It’s how science advances, and despite some misgivings about the report, I’m really excited by what it allows us to see.

Photo: “In the land of the blind, the one eyed are king” by nandOOnline, and thanks to Arthur for finding it.

[Updated: cleaned up the transition between the halves of the post.]

Science isn’t about Checklists

Over at Zero in a Bit, Chris Eng has a post, “Art vs. Science”:

A client chastised me once for making a statement that penetration testing is a mixture of art and science. He wanted to believe that it was completely scientific and could be distilled down to a checklist type approach. I explained that while much of it can be done methodically, there is a certain amount of skill and intuition that only comes from practical experience. You learn to recognize that “gut feel” when something is amiss. He became rather incensed and, in effect, told me I was full of it. This customer went on to institute a rigid, mechanical internal process for web app pen testing that was highly inefficient and, ultimately, still relied mostly on a couple bright people on the team who were in tune with both the art and the science.

Certifications only test the science.

I want to disagree strongly. Science isn’t about checklists. It’s about forming and testing hypotheses. In the case of pen tests, you have an overarching hypothesis: “this thing is secure.” You conduct experiments which attempt to demonstrate that hypothesis to be false. (Lather, rinse, repeat; you can’t test security in.)
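To make the analogy concrete, here’s a minimal sketch of a pen test as falsification. Everything here is hypothetical (parse_age is a made-up target, not anyone’s real API): the hypothesis is “parse_age never raises,” and each random input is an experiment that tries to refute it.

```python
import random
import string

# Hypothetical target: a naive parser that raises ValueError on
# non-numeric input. It stands in for the system under test.
def parse_age(text):
    return int(text)

# Each trial is an experiment attempting to falsify the hypothesis
# "parse_age never raises."
def try_to_falsify(trials=1000):
    random.seed(1)
    for _ in range(trials):
        candidate = "".join(random.choice(string.printable) for _ in range(4))
        try:
            parse_age(candidate)
        except ValueError:
            return candidate  # counterexample found: hypothesis is false
    return None  # no counterexample, which still isn't proof of security

counterexample = try_to_falsify()
```

Finding a counterexample refutes the hypothesis; running out of trials without one proves nothing, which is exactly why you can’t test security in.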

The design of good experiments is an art. Some people are better at it than others. Great science is driven by a small number of great scientists who have both a comprehension that something is wrong with today’s theories, and a flair for great experiments which illuminate those issues.

The problem isn’t science versus art, the problem is checklist and bureaucracy versus skilled professional.

How much work is writing a book?

There’s a great (long) post, “What is it like to write a technical book?” by Baron Schwartz, lead author of “High Performance MySQL.” There’s a lot of great content about the process and all the work involved, but I wanted to respond to this one bit:

I can’t tell you how many times I asked people at O’Reilly to help me understand what would be involved in writing this book. (This is why I’m writing this for you now — in case no one will tell you, either). You would have thought these folks had never helped anyone write a book and had no idea themselves what it entailed. As a result, I had no way to know what was realistic, and of course the schedule was a death march. The deadlines slipped, and slipped and slipped. To November, then December, then February — and ultimately far beyond. Each time the editor told me he thought we were on track to make the schedule. Remember, I didn’t know whether to believe this or not. The amount of work involved shocked me time after time — I thought I saw the light at the end of the tunnel and then discovered it was much farther away than I thought.

I think this is somewhat unfair to the O’Reilly folks, and wanted to comment. Baron obviously put a huge amount of effort into the work, but O’Reilly has no way of knowing that will happen. Second editions alone run the gamut from “update the references and commands to the latest revision of the software” to “complete re-write.” Both are legitimate ways to approach it. It could take three months, or it could take a few years. O’Reilly can’t know in advance. (Our publisher has told me horror stories about books and what it’s taken to get them out.)

So O’Reilly probably figures that there’s a law of diminishing returns, and pushes an insane schedule as a way of forcing their authors to write what matters and ignore the rest.

So it’s not like a baby that’s gonna take 9 months.


Andrew and I opened the New School of Information Security with a quote from Mark Twain which I think is very relevant: “I didn’t have time to write you a short letter, so I wrote you a long one instead.”

We took our time to write a short book, and Jessica and Karen at Addison-Wesley were great. We went through two job changes, a cross-country move, and a whole lot of other stuff in the process. Because we were not technology-specific, we had the luxury of time until about December 1st, when Jessica said “hey, if you guys want to be ready for RSA, we need to finish.” From there, it was a little crazy, although not so crazy that we couldn’t hit the deadlines. The biggest pain was the copy-edit. We’d taken the time to copy-edit ourselves, and there were too many changes to review them all. If we’d had more time, I would have pushed back and said “reject all, and do it again.”

So there’s no way a publisher can know how long a book will take a new set of authors, because a great deal of the work that Baron Schwartz and co-authors did was their choice.

What’s up with the “New and Used” Pricing on Amazon?

wierd-pricing.jpg
So, having a book out, you start to notice all sorts of stuff about how Amazon works. (I’ve confirmed this with other first-time authors.) One of the things that I just can’t figure out is the pricing people have for The New School.

There’s a new copy for $46.43. A mere 54% premium over list, and a whopping 234% of Amazon’s discounted price. There’s a used copy for $58.56. What the hell?
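For what it’s worth, you can back the implied prices out of those percentages (a quick sketch; the list and discounted prices below are inferred from the figures above, not quoted from Amazon):

```python
new_copy = 46.43

# Back out the implied prices from the two percentages in the post.
implied_list = new_copy / 1.54    # "54% premium over list"
implied_amazon = new_copy / 2.34  # "234% of Amazon's discounted price"

print(f"implied list price:   ${implied_list:.2f}")    # about $30.15
print(f"implied Amazon price: ${implied_amazon:.2f}")  # about $19.84
```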

This isn’t unique to us. It happens for every book I’ve looked at.

Is this some sort of scheme to hide money from the tax collectors? I mean, I liked Cohen’s book (incidentally reviewed here), but not to the tune of 600 bucks.

What’s going on? Your thoughts are welcome.

CSO’s FUD Watch

“Introducing FUD Watch”:

Most mornings, I start the work day with an inbox full of emails from security vendors or their PR reps about some new malware attack, software flaw or data breach. After some digging, about half turn out to be legitimate issues while the rest – usually the most alarming in tone – turn out to be threats that have little or no impact on the average enterprise.

The big challenge for security writers is to separate the hot air from the legitimate threats. This column aims to do just that.

But for this to work, audience participation is a must.

I’m highly in favor of reducing the FUD. I hope that Bill Brenner’s efforts will help constrain and shame some of the worst of the FUD. However, it won’t go all the way. Bill admits that he’s working from opinion, not data. In The New School, we talk about how we need data on how often various problems actually manifest. When we get that data, we won’t need as much audience participation. In the meantime, go mock the FUDsters.

New School Reviews

Don Morrill, IT Toolbox:

If you want to read a book that will have an influence on your information security career, or if you just want to read something that points out that we do need to do information security differently, then you need to go pick up a copy of “The new school of information security” by Adam Shostack and Andrew Stewart.

Amateurs Study Cryptography; Professionals Study Economics:

Adam and his co-author have produced a readable, compact tour of the information security field as it stands today – or perhaps as it lies in its crib. What we know intuitively the authors bring forward thoughtfully in their analysis of the information security industry: it is struggling to keep up with the defects in online communication, data storage, and business processes.

La industria de la seguridad: Vende desde la inseguridad:

Reviewing chapter 2, “The security industry,” of the book by SHOSTACK and STEWART published by Addison Wesley in 2008, The New School of Information Security, one finds a clear and open presentation of how the industry goes about the task of selling the notion of information security, in products and services as well as in good practices, checklists, and standards.

It makes me strangely happy to have our first non-English review.

Finally, Keith Shaw at Network World interviewed me, the podcast is “Why security is failing.”