In the land of the blind…

In “PCI DSS Position on Patching May Be Unjustified,” Jeff Lowder writes:

Verizon Business recently posted an excellent article on their blog about security patching. As someone who just read The New School of Information Security (an important book that all information security professionals should read), I thought it was refreshing to see someone take an evidence-based approach to information security controls.

First, thanks Jeff! Second, I was excited by the Verizon report precisely because of what’s now starting to happen. I wrote “Verizon has just catapulted themselves into position as a player who can shape security. That’s because of their willingness to provide data.” Jeff is now using that data to test the PCI standard, and finds that some of its best practices don’t make as much sense as the authors of PCI-DSS might have thought.

That’s the good. Verizon gets credibility because Jeff relies on their numbers to make a point. And in this case, I think that Jeff is spot on.

I did want to address something else relating to patching in the Verizon report. Russ Cooper wrote in “Patching Conundrum” on the Verizon Security Blog:

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates.

The trouble with this is that the assessment of patching is done by

…[interviewing] the key person responsible for internal security (CSO) in just over 300 companies for which we had already established a multi-year data breach and malcode history. We asked the CSO to rate how well each of dozens of countermeasures were actually deployed in his or her enterprise on a 0 to 5 scale. A score of “zero” meant that the countermeasure was not in use. A score of “5” meant that the countermeasure was deployed and managed “the best that the CSO could imagine it being deployed in any similar company in the world.” A score of “3” represented what the CSO considered an average deployment of that particular countermeasure.

So let’s take two CSOs, analytical Alice and boastful Bob. Analytical Alice thinks that her patching program is pretty good. Her organization has strong inventory management, good change control, and rolls out patches well. She listens carefully, and most of her counterparts say similar things. So she gives herself a “3.” Boastful Bob, meanwhile, has exactly the same program in place, but thinks a lot about how hard he’s worked to get those things in place. He can’t imagine anyone having a better process ‘in the real world,’ and so gives himself a 5.

[Update 2: I want to clarify that I didn't mean that Alice and Bob were unaware of their own state, but that they lack data about the state of many other organizations. Without that data, it's hard for them to place themselves comparatively.]

This phenomenon doesn’t just impact CSOs. There’s fairly famous research entitled “Unskilled and Unaware of It,” or “Why the Unskilled Are Unaware”:

Five studies demonstrated that poor performers lack insight into their shortcomings even in real world settings and when given incentives to be accurate. An additional meta-analysis showed that it was lack of insight into their errors (and not mistaken assessments of their peers) that led to overly optimistic social comparison estimates among poor performers.

Now, the Verizon study could have overcome this by carefully defining what each score from 0 to 5 meant for patching. Did it? We don’t actually know. To be perfectly fair, there’s not enough information in the report to make that call. I hope they’ll make it clearer in the future.

Candidly, though, I don’t want to get wrapped around the axle on this question. The Verizon study (as Jeff Lowder points out) gives us enough data to take on questions which have been opaque. That’s a huge step forward, and in the land of the blind, it’s impressive what a one-eyed man can accomplish. I’m hopeful that as they’ve opened up, we’ll have more and more data, and more critiques of that data. It’s how science advances, and despite some misgivings about the report, I’m really excited by what it allows us to see.

Photo: “In the land of the blind, the one eyed are king” by nandOOnline, and thanks to Arthur for finding it.

[Updated: cleaned up the transition between the halves of the post.]

UK Passport Photos?


2008 and UK passport photos now have the left eye ‘removed’ to be stored on a biometric database by the government. It’s a photo that seems to say more to me about invasion of human rights and privacy than any political speech ever could.

Really? This is a really creepy image. Does anyone know if this is for real, and if so, where we can read more?

Photo: Alan Cleaver

Game Theory and Poe

Edgar Allan Poe

Julie Rehmeyer of Science News writes in “The Tell-Tale Anecdote: An Edgar Allan Poe story reveals a flaw in game theory” about a paper by Kfir Eliaz and Ariel Rubinstein called “Edgar Allan Poe’s Riddle: Do Guessers Outperform Misleaders in a Repeated Matching Pennies Game?”

The paper discusses a game that Poe describes in The Purloined Letter. In it, the Misleader hides a number of marbles, coins, or whatever in one hand, and the Guesser guesses whether the number is even or odd. Poe opines that it’s a game of skill rather than luck. (Read the article for more detail, or even better, the primary source.)

If you look at it from a simple game-theoretic viewpoint, the Guesser and the Misleader have equal odds. They might as well be flipping coins. However, there is a sense in which it’s a game of skill.
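A tiny simulation makes that baseline concrete. This Python sketch is illustrative only (it’s not from the paper): when both players choose uniformly at random, the Guesser wins half the time, and any edge has to come from one player modeling the other.

```python
import random

def play_round() -> bool:
    """One round of Poe's even/odd game, with both players choosing
    uniformly at random. Returns True if the Guesser wins."""
    count_is_odd = random.choice([True, False])   # Misleader's hidden count
    guess_is_odd = random.choice([True, False])   # Guesser's call
    return count_is_odd == guess_is_odd

trials = 100_000
wins = sum(play_round() for _ in range(trials))
print(f"Guesser win rate: {wins / trials:.3f}")   # converges to ~0.500
```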

Our intrepid mathematicians showed that in their construction of the game, the Guesser has a slight advantage (about 3%), which is enough to get Las Vegas interested. They also examined variants of the game, and after several modifications brought it back in line with the predictions of game theory.

This brings up a number of interesting things to think about, including that Poe was on to something ahead of his time, as usual. Funny how that wisdom was hiding in plain sight. I wonder if he planned it.

I’d bet on security prediction markets

In his own blog, Michael Cloppert writes:

Adam, and readers from Emergent Chaos, provided some good feedback on this idea. Even though the general response is that this wouldn’t be a supportable approach, I appreciate the input! This helps me focus my research intentions on the most promising theories and technologies.

I’m glad my readers helped with good feedback, but I think he’s taking the wrong lesson. The lesson should be that there are lots of skeptics, not that the idea won’t work.

(And Adam from Inkling Markets has offered to help.)

Haft of the Spear points to an Inkling market, “Group Intel,” which is taking bets on whether bin Laden will be captured or killed before the end of Bush II. There have only been a few trades, with hefty price swings, but why not try it out for infosec? Maybe some chaos would emerge.

(Incidentally, new, interesting comments are still coming in on “Security Prediction Markets: theory & practice.”)

Not quite clear on the subject

The Pirate Bay Logo

Slyck News has a story, “SSL Encryption Coming to The Pirate Bay,” a good summary of which is in the headline.

However, it may not help, and may hurt. Slyck says:

The level of protection offered likely varies on the individual’s geographical location. Since The Pirate Bay isn’t actually situated in Sweden, a user in the United States isn’t impacted by the law. However for the concerned user living in Sweden, the new SSL feature will offer some security against the perceived threat.

No, not really. There are things SSL cannot do, and one of them is protect the IP addresses of the two endpoints. An adversary sniffing traffic can see both addresses regardless.

There are other things an attacker can do as well. Suppose, for example, they fetch the Pirate Bay landing page, observe that it’s 1234 bytes long, and compare that with the size of the SSL transaction you made. If the sizes match, they have a pretty good idea of what you did.

An attacker that crawled the Pirate Bay site and indexed the sizes of all the objects could construct a map of where people went.
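Here’s a toy sketch of that attack in Python. Everything in it is hypothetical (the sizes, the paths, the tolerance), and real traffic is noisier thanks to headers, compression, and caching, but the idea survives the noise:

```python
# Size fingerprinting: the attacker crawls the site in advance and
# indexes object sizes, then matches observed encrypted transfer sizes
# against that index. All values here are made up for illustration.
site_index = {
    1234: "/",            # landing page
    8012: "/browse/100",
    9647: "/browse/200",
}

def candidate_pages(observed_bytes: int, tolerance: int = 16) -> list[str]:
    """Return pages whose known size falls within `tolerance` bytes of
    an observed encrypted transfer. SSL hides content, not length."""
    return [page for size, page in site_index.items()
            if abs(size - observed_bytes) <= tolerance]

print(candidate_pages(1240))   # ['/'] -- encryption didn't hide the visit
```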

Yes, there will be some uncertainty in it, but less uncertainty than you might think. Consider CDDB, the database that identifies which CD you just put in a drive. It does nothing more than compare a list of track lengths to known entries, and it’s pretty darned good. So good that music plagiarists were caught by someone who saw a CDDB collision.
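The matching logic is almost trivially simple. A hypothetical sketch (album names and track lengths invented):

```python
# CDDB-style matching: identify a disc purely from its track lengths,
# in seconds. Database entries here are made up for illustration.
known_discs = {
    "Album A": [545, 574, 351, 697, 566],
    "Album B": [467, 443, 645, 424],
}

def match_disc(track_lengths: list[int], slack: int = 2) -> list[str]:
    """Return discs with the same track count whose every track length
    is within `slack` seconds of the observed lengths."""
    return [
        title for title, lengths in known_discs.items()
        if len(lengths) == len(track_lengths)
        and all(abs(a - b) <= slack for a, b in zip(lengths, track_lengths))
    ]

print(match_disc([545, 575, 351, 696, 566]))   # ['Album A']
```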

If the attacker is only trying to construct probable cause so as to raid someone, it’s likely good enough. “Yer Honor, the suspect may have gone to page X or page Y, but that only means that they’re downloading either X’ or Y.” Yeah, the judge will probably buy it.

SSL is a great technology for protecting content: you don’t care that the attacker knows you bought something, you want to protect your credit card number. It’s not very good at protecting the mere act of communication.

There are other technologies that can protect the act of communication, but they have their own sets of limitations. It’s too nice a Sunday afternoon for me to go into them.

Science isn’t about Checklists

Over at Zero in a Bit, Chris Eng has a post, “Art vs. Science”:

A client chastised me once for making a statement that penetration testing is a mixture of art and science. He wanted to believe that it was completely scientific and could be distilled down to a checklist type approach. I explained that while much of it can be done methodically, there is a certain amount of skill and intuition that only comes from practical experience. You learn to recognize that “gut feel” when something is amiss. He became rather incensed and, in effect, told me I was full of it. This customer went on to institute a rigid, mechanical internal process for web app pen testing that was highly inefficient and, ultimately, still relied mostly on a couple bright people on the team who were in tune with both the art and the science.

Certifications only test the science.

I want to disagree strongly. Science isn’t about checklists. It’s about forming and testing hypotheses. In the case of pen tests, you have an overarching hypothesis, “this thing is secure.” You conduct experiments which attempt to demonstrate that the hypothesis is false. (Lather, rinse, repeat; you can’t test security in.)

The design of good experiments is an art. Some people are better at it than others. Great science is driven by a small number of great scientists who have both a sense that something is wrong with today’s theories and a flair for experiments that illuminate those issues.

The problem isn’t science versus art, the problem is checklist and bureaucracy versus skilled professional.