In the land of the blind…

In “PCI DSS Position on Patching May Be Unjustified,” Jeff Lowder writes:

Verizon Business recently posted an excellent article on their blog about security patching. As someone who just read The New School of Information Security (an important book that all information security professionals should read), I thought it was refreshing to see someone take an evidence-based approach to information security controls.

First, thanks Jeff! Second, I was excited by the Verizon report precisely because of what’s now starting to happen. I wrote “Verizon has just catapulted themselves into position as a player who can shape security. That’s because of their willingness to provide data.” Jeff is now using that data to test the PCI standard, and finds that some of its best practices don’t make as much sense as the authors of PCI-DSS might have thought.

That’s the good. Verizon gets credibility because Jeff relies on their numbers to make a point. And in this case, I think that Jeff is spot on.

I did want to address something else relating to patching in the Verizon report. Russ Cooper wrote in “Patching Conundrum” on the Verizon Security Blog:

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates.

The trouble with this is that the assessment of patching is done by

…[interviewing] the key person responsible for internal security (CSO) in just over 300 companies for which we had already established a multi-year data breach and malcode history. We asked the CSO to rate how well each of dozens of countermeasures were actually deployed in his or her enterprise on a 0 to 5 scale. A score of “zero” meant that the countermeasure was not in use. A score of “5” meant that the countermeasure was deployed and managed “the best that the CSO could imagine it being deployed in any similar company in the world.” A score of “3” represented what the CSO considered an average deployment of that particular countermeasure.

So let’s take two CSOs, analytical Alice and boastful Bob. Analytical Alice thinks that her patching program is pretty good. Her organization has strong inventory management, good change control, and rolls out patches well. She listens carefully, and most of her counterparts say similar things. So she gives herself a “3.” Boastful Bob, meanwhile, has exactly the same program in place, but thinks a lot about how hard he’s worked to get those things in place. He can’t imagine anyone having a better process ‘in the real world,’ and so gives himself a 5.

[Update 2: I want to clarify that I didn’t mean that Alice and Bob were unaware of their own state, but that they lack data about the state of many other organizations. Without that data, it’s hard for them to place themselves comparatively.]

This phenomenon doesn’t just impact CSOs. There’s fairly famous research entitled “Unskilled and Unaware of it,” or “Why the Unskilled Are Unaware:”

Five studies demonstrated that poor performers lack insight into their shortcomings even in real world settings and when given incentives to be accurate. An additional meta-analysis showed that it was lack of insight into their errors (and not mistaken assessments of their peers) that led to overly optimistic social comparison estimates among poor performers.

Now, the Verizon study could have overcome this by carefully defining what a 1-5 meant for patching. Did it? We don’t actually know. To be perfectly fair, there’s not enough information in the report to make a call on that. I hope that they’ll make that more clear in the future.

Candidly, though, I don’t want to get wrapped around the axle on this question. The Verizon study (as Jeff Lowder points out) gives us enough data to take on questions which have been opaque. That’s a huge step forward, and in the land of the blind, it’s impressive what a one-eyed man can accomplish. I’m hopeful that as they’ve opened up, we’ll have more and more data, and more critiques of that data. It’s how science advances, and despite some misgivings about the report, I’m really excited by what it allows us to see.

Photo: “In the land of the blind, the one eyed are king” by nandOOnline, and thanks to Arthur for finding it.

[Updated: cleaned up the transition between the halves of the post.]

UK Passport Photos?


2008 and UK passport photos now have the left eye ‘removed’ to be stored on a biometric database by the government. It’s a photo that seems to say more to me about invasion of human rights and privacy than any political speech ever could.

Really? This is a really creepy image. Does anyone know if this is for real, and if so, where we can read more?

Photo: Alan Cleaver2000

Game Theory and Poe

Edgar Allan Poe

Julie Rehmeyer of Science News writes in “The Tell-Tale Anecdote: An Edgar Allan Poe story reveals a flaw in game theory” about a paper by Kfir Eliaz and Ariel Rubinstein called “Edgar Allan Poe’s Riddle: Do Guessers Outperform Misleaders in a Repeated Matching Pennies Game?”

The paper discusses a game that Poe describes in The Purloined Letter. In it, the Misleader hides a number of marbles, coins, or similar objects in a hand, and the Guesser guesses whether the number is even or odd. Poe opines that it’s a game of skill rather than luck. (Read the article for more detail, or even better, the primary source.)

If you look at it from a simple game-theoretic viewpoint, the Guesser and the Misleader have equal odds. They might as well be flipping coins. However, there is a sense in which it’s a game of skill.

Our intrepid mathematicians showed that in their construction of the game, the guesser has a slight advantage — 3% — which is enough to get Las Vegas interested. They also examined modifications of the game, and after several changes brought it back in line with the predictions of game theory.
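A quick simulation makes the baseline claim concrete: when both players randomize, neither side has an edge. This is only a sketch of the naive game, not the paper’s restricted-strategy construction that produces the 3% advantage, and the function names are my own:

```python
import random

def play_round(rng):
    """One round: the Misleader hides an even or odd count, the Guesser calls it."""
    hidden_parity = rng.choice(("even", "odd"))  # Misleader's choice
    guess = rng.choice(("even", "odd"))          # Guesser's call
    return guess == hidden_parity

def win_rate(rounds=100_000, seed=1):
    """Fraction of rounds the Guesser wins when both sides play at random."""
    rng = random.Random(seed)
    wins = sum(play_round(rng) for _ in range(rounds))
    return wins / rounds
```

Running `win_rate()` lands near 0.5, the coin-flip baseline; the guesser’s edge in the paper only appears once players are restricted to limited-memory strategies, which this toy doesn’t model.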

This brings up a number of interesting things to think about, including that Poe was on to something ahead of his time, as usual. Funny how that wisdom was hiding in plain sight. I wonder if he planned it.

I’d bet on security prediction markets

In his own blog, Michael Cloppert writes:

Adam, and readers from Emergent Chaos, provided some good feedback on this idea. Even though the general response is that this wouldn’t be a supportable approach, I appreciate the input! This helps me focus my research intentions on the most promising theories and technologies.

I’m glad my readers helped with good feedback, but I think he’s taking the wrong lesson. The lesson should be that there are lots of skeptics, not that the idea won’t work.

(And Adam from InklingMarkets has offered to help.)

Haft of the Spear points to an Inkling market, “Group Intel,” which is taking bets on bin Laden being captured or killed before the end of Bush II. There have only been a few trades, with hefty price swings, but why not try it out for infosec? Maybe some chaos would emerge.

(Incidentally, new, interesting comments are still coming in on “Security Prediction Markets: theory & practice.”)

Not quite clear on the subject

The Pirate Bay Logo

Slyck News has a story, “SSL Encryption Coming to The Pirate Bay,” a good summary of which is in the headline.

However, it may not help, and it may hurt. Slyck says:

The level of protection offered likely varies on the individual’s geographical location. Since The Pirate Bay isn’t actually situated in Sweden, a user in the United States isn’t impacted by the law. However for the concerned user living in Sweden, the new SSL feature will offer some security against the perceived threat.

No, not really. There are things SSL cannot do, and one of them is protect the IP addresses of the two endpoints. An adversary sniffing the traffic can tell what the two IP addresses are.

There are other things an attacker can do as well. Suppose, for example, they go to the Pirate Bay landing page and observe that it’s 1234 bytes long, and compare that with the size of the SSL transaction you made. If the sizes match, they have a pretty good idea of what you did.

An attacker that crawled the Pirate Bay site and indexed the sizes of all the objects could construct a map of where people went.

Yes, there will be some uncertainty in it. But there will be less uncertainty than you think. Consider the CDDB database that identifies what CD you just put in a drive. It does nothing more than compare a list of track lengths to known entries, and it’s pretty darned good. So good that music plagiarists were caught by someone who saw a CDDB collision.
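A rough sketch of how such a size map might work (a hypothetical illustration: the URLs and byte counts are invented, and real traffic adds padding, headers, and caching noise that this ignores):

```python
def build_size_index(pages):
    """Map each crawled page size back to the candidate URLs of that size."""
    index = {}
    for url, size in pages.items():
        index.setdefault(size, []).append(url)
    return index

def guess_pages(observed_size, index, slack=0):
    """Return URLs whose crawled size is within `slack` bytes of a sniffed transfer."""
    candidates = []
    for size, urls in index.items():
        if abs(size - observed_size) <= slack:
            candidates.extend(urls)
    return candidates

# Invented example: two pages happen to collide at 1234 bytes, so a sniffed
# 1234-byte transfer narrows the visit down to two candidate pages.
crawl = {"/": 1234, "/top100": 48210, "/search": 1234}
index = build_size_index(crawl)
ambiguous = guess_pages(1234, index)   # two colliding candidates
unique = guess_pages(48210, index)     # one candidate
```

The `slack` parameter stands in for the uncertainty mentioned above: widening it admits more candidate pages, but as the CDDB example suggests, even a crude fingerprint narrows things down more than you’d expect.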

If the attacker is only trying to construct probable cause so as to raid someone, it’s likely good enough. “Yer Honor, the suspect may have gone to page X or page Y, but that only means that they’re downloading either X′ or Y′.” Yeah, the judge will probably buy it.

SSL is a great technology for protecting content: you don’t care that the attacker knows you bought something; you want to protect your credit card number. It’s not very good at protecting the mere act of communication.

There are other technologies that can protect the act of communication, but they have their own set of limitations. It’s too nice a Sunday afternoon for me to go into them.

Science isn’t about Checklists

Over at Zero in a Bit, Chris Eng has a post, “Art vs. Science”:

A client chastised me once for making a statement that penetration testing is a mixture of art and science. He wanted to believe that it was completely scientific and could be distilled down to a checklist type approach. I explained that while much of it can be done methodically, there is a certain amount of skill and intuition that only comes from practical experience. You learn to recognize that “gut feel” when something is amiss. He became rather incensed and, in effect, told me I was full of it. This customer went on to institute a rigid, mechanical internal process for web app pen testing that was highly inefficient and, ultimately, still relied mostly on a couple bright people on the team who were in tune with both the art and the science.

Certifications only test the science.

I want to disagree strongly. Science isn’t about checklists. It’s about forming and testing hypotheses. In the case of a pen test, you have an overarching hypothesis, “this thing is secure,” and you conduct experiments to demonstrate that hypothesis is false. (Lather, rinse, repeat; you can’t test security in.)

The design of good experiments is an art. Some people are better at it than others. Great science is driven by a small number of great scientists who have both a comprehension that something is wrong with today’s theories, and a flair for great experiments which illuminate those issues.

The problem isn’t science versus art; the problem is checklists and bureaucracy versus skilled professionals.

Medeco Embraces The Locksport Community

Two days ago, Marc Weber Tobias pointed out that Medeco, the 800-pound gorilla in the high-security lock market, recently published an open letter to the locksport community, welcoming it to the physical security industry:

While we have worked with many locksmiths and security specialists in the past to improve our cylinders, this is the first time that we have worked with people in the sport-lock picking community. I am pleased to know that you have as much concern for the security of the public as those of us in the lock industry. Again, I welcome you as representatives of the sport-lock picking community, to the lock industry, and hope that together we can continue to improve the security and safety that locks provide to the world.

This is really exciting. For the past few years, I’ve watched Matt Blaze and others apply information security principles to physical security, and watched the resulting kerfuffles, which so closely resembled the disclosure debates in our own space over the last ten years. As a result, it’s particularly exciting to see stuff like this coming from the physical security space.

Marc Weber Tobias has a great analysis of this letter, as well as a very worthwhile discussion of ethics. Do go read it. The parallels between this and our own industry are very revealing…

R-E-S-P-E-C-T! Find out what it means to me


The TSA apparently is issuing itself badges in its continuing search for authority.

The attire aims to convey an image of authority to passengers, who have harassed, pushed and in a few instances punched screeners. “Some of our officers aren’t respected,” TSA spokeswoman Ellen Howe said.

A.J. Castilla, a screener at Boston’s Logan Airport and a spokesman for a screeners union, is eager to get a badge. “It’ll go a long way to enhance the respect of this workforce,” he said. (“TSA’s Badges Are a Sore Spot With Cops,” USA Today)

See, the problem isn’t that the American people are unwilling to respect or support you; it’s that you don’t respect us. And respect is a two-way street. TSA humiliates people. They intrude. They touch people’s privates. They want you to pack your toiletries in a baggie, take off your shoes, and submit to millimeter wave scanning. All the while, they’re no more effective than their predecessors.

You want respect? Earn it. Respect those around you, and those you’re supposed to serve. Tin-plate badges make you look like you’re desperate.

I suppose there’s a reason for that.

Intelligence maven Haft of the Spear has “How you dress has nothing to do with your effectiveness:”

I think this is a bad idea not because I think Screeners don’t deserve respect; I’m against it because it’s “cop-creep.”

Identity Theft is more than Fraud By Impersonation

In “The Pros and Cons of LifeLock,” Bruce Schneier writes:

In reality, forcing lenders to verify identity before issuing credit is exactly the sort of thing we need to do to fight identity theft. Basically, there are two ways to deal with identity theft: Make personal information harder to steal, and make stolen personal information harder to use. We all know the former doesn’t work, so that leaves the latter. If Congress wanted to solve the problem for real, one of the things it would do is make fraud alerts permanent for everybody. But the credit industry’s lobbyists would never allow that.

There’s a type of security expert who likes to sigh and assert that ID theft is simply a clever name for impersonation. I used to be one of them. More recently, I’ve found that it often leads to incorrect or incomplete thinking like the above.

The real problem of ID theft is not the impersonation: the bank eats that, although we pay eventually. The real problem is that one’s “good name” is now controlled by the credit bureaus. The pain of ID theft is not that you have to deal with one bad loan, it’s how the claims about that bad loan haunt you through a shadowy network of unaccountable bureaucracies who libel you for years, and treat you like a liar when you try to clear up the problem.

So there’s a third way to deal with identity theft: make the various reporting agencies responsible for their words and the impact of those words. Align the law and their responsibilities with the reality of how their services are used.

I’ve talked about this before, in “The real problem in ID theft,” and Mordaxus has talked about “What Congress Can Do To Prevent Identity Theft.”

How much work is writing a book?

There’s a great (long) post, “What is it like to write a technical book?” by Baron Schwartz, lead author of “High Performance MySQL.” There’s a lot of great content about the process and all the work involved, but I wanted to respond to this one bit:

I can’t tell you how many times I asked people at O’Reilly to help me understand what would be involved in writing this book. (This is why I’m writing this for you now — in case no one will tell you, either). You would have thought these folks had never helped anyone write a book and had no idea themselves what it entailed. As a result, I had no way to know what was realistic, and of course the schedule was a death march. The deadlines slipped, and slipped and slipped. To November, then December, then February — and ultimately far beyond. Each time the editor told me he thought we were on track to make the schedule. Remember, I didn’t know whether to believe this or not. The amount of work involved shocked me time after time — I thought I saw the light at the end of the tunnel and then discovered it was much farther away than I thought.

I think this is somewhat unfair to the O’Reilly folks, and wanted to comment. Baron obviously put a huge amount of effort into the work, but O’Reilly has no way of knowing that will happen. Second editions run the gamut from “update the references and commands to the latest revision of the software” to “complete re-write.” Both are legitimate ways to approach one. It could take three months, or it could take a few years, and O’Reilly can’t know in advance. (Our publisher has told me horror stories about books and what it’s taken to get them out.)

So O’Reilly probably figures that there’s a law of diminishing returns, and pushes an insane schedule as a way of forcing their authors to write what matters and ignore the rest.

So it’s not like a baby that’s gonna take 9 months.


Andrew and I opened the New School of Information Security with a quote from Mark Twain which I think is very relevant: “I didn’t have time to write you a short letter, so I wrote you a long one instead.”

We took our time to write a short book, and Jessica and Karen at Addison-Wesley were great. We went through 2 job changes, a cross-country move, and a whole lot of other stuff in the process. Because we were not technology specific, we had the luxury of time until about December 1st, when Jessica said “hey, if you guys want to be ready for RSA, we need to finish.” From there, it was a little crazy, although not so crazy that we couldn’t hit the deadlines. The biggest pain was the copy-edit: we’d taken the time to edit carefully ourselves, and the copy-editor still made too many changes to review them all. If we’d had more time, I would have pushed back and said “reject all, and do it again.”

So there’s no way a publisher can know how long a book will take a new set of authors, because a great deal of the work that Baron Schwartz and co-authors did was their choice.