Survey for How to Measure Anything In Cybersecurity Risk

This is a survey from Doug Hubbard, author of How To Measure Anything. He is
currently writing another book with Richard Seiersen (GM of Cyber Security at
GE Healthcare) titled How to Measure Anything in Cybersecurity Risk. As part of
the research for this book, they are asking for your assistance as an
information security professional by taking the following survey, which is
estimated to take 10 to 25 minutes to complete. In return for participating,
you will be invited to attend a webinar for a sneak peek at the book's content,
and you will also be given a summary report of the survey. The survey also
includes requests for feedback on the survey itself.

https://www.surveymonkey.com/r/VMHDQ7N

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I'm a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker's post "Luke in the Sky with Diamonds," talking about threat intelligence, and he gets bonus points for the crossover title. But I think it's important that we fix a hole in the argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.


Most of my issues boil down to two questions. The first is: how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi-Wan Kenobi? The second: what the heck do you do with the data? (And a third, about the Diamond Model itself: how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is: okay, we have a model of what went wrong; what do we do with it? The Death Star has been destroyed, so what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I'll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common, so common that the military has a name for the phenomenon: ‘fighting the last war.’ This analysis focuses on a limited set of actions, the ones which succeeded, but it's not clear that they're the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.
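To make that checklist a little more concrete, here is a minimal, hypothetical sketch of tracking prevent/detect/respond coverage for the three phishing variants. The variant names come from the quote; the coverage data and the gap-reporting logic are made up for illustration.

```python
# Hypothetical coverage matrix: for each phishing variant in the quote,
# is there a prevent, detect, and respond control in place?
# Anything not explicitly marked True is treated as a gap.

VARIANTS = [
    "link to exploit web page",
    "document containing exploit",
    "executable disguised as document",
]

CAPABILITIES = ["prevent", "detect", "respond"]

# Illustrative (made-up) state of controls.
coverage = {
    ("link to exploit web page", "prevent"): True,
    ("link to exploit web page", "detect"): True,
    ("document containing exploit", "prevent"): True,
}


def coverage_gaps(coverage):
    """Return the (variant, capability) pairs with no control in place."""
    return [
        (variant, capability)
        for variant in VARIANTS
        for capability in CAPABILITIES
        if not coverage.get((variant, capability), False)
    ]


for variant, capability in coverage_gaps(coverage):
    print(f"Gap: no '{capability}' control for '{variant}'")
```

The point is not the code; it's that the quote turns into a checkable list of gaps worth closing before digging into attacker motivation.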

What I don't know about the Diamond Model is whether it does a better job than other models of avoiding these traps and of helping those who use it do better. (I'm not saying it's poor; I'm saying I don't know, and would like to see some empirical work on the subject.)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published A Timeline of Government Data Breaches:
OPM Data Breach

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard's point in that post, I don't think that's the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa diagrams. If we apply this sort of approach, then we can ask, "Why were foreigners able to download the OPM database?" There are numerous paths that we might take, for example (a small sketch for recording such chains follows the lists below):

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.
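If you want to record these chains rather than just argue them, here is a minimal sketch, assuming nothing beyond the Python standard library, of capturing a Five Whys chain as plain data so that competing hypotheses can sit side by side and be revisited as facts emerge. The chain contents paraphrase the first list above; none of it is an official finding.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class WhyChain:
    """One candidate root-cause chain: an observation plus successive 'why?' answers."""
    observation: str
    whys: list[str] = field(default_factory=list)

    def add_why(self, answer: str) -> WhyChain:
        self.whys.append(answer)
        return self

    def root(self) -> str:
        """The deepest answer reached so far, i.e. the current candidate root cause."""
        return self.whys[-1] if self.whys else self.observation


# Paraphrasing the first chain in this post; the content is illustrative.
budget_chain = (
    WhyChain("Foreigners were able to download the OPM database")
    .add_why("Lack of two-factor authentication (2FA)")
    .add_why("Some critical systems at OPM don't support 2FA")
    .add_why("Lack of budget for upgrades and testing")
)

print("Candidate root cause:", budget_chain.root())
```

The value is not the code but the discipline: each "why" becomes something that facts can confirm or knock down.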

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let's learn things beyond dates. Let's put the facts out there, or, as I quoted in my last post, we "should declare the causes which impel them to the separation," or "let Facts be submitted to a candid world." Once we have facts about the causes, we can perform deeper root cause analysis.

I don't think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been causal, but the attacker would have worked around it. The failing might be irrelevant (for example, I've rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.


A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and about whether there are other categories that make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

P0wned! Don’t make the same mistake I did

I fell victim to an interesting attack, which I am recounting here so that others may avoid it.

In a nutshell, I fell victim to a trojan, which the malefactor was able to place in a trusted location in my search path. A wrapper obscured the malicious payload, and a second line of defense did not catch the substitution. I believe the attackers were not out to harm me, but that this trojan was put in place partially for lulz, and partially to allow a more important attack on the system's RBAC mechanisms to succeed.

Attack Details

I was attempting to purchase a six pack of New Belgium Rampant IPA, shown immediately below.

IMG_2242.JPG

I obtained the six pack from the canonical location in the system – a reach-in refrigerator in the supermarket’s liquor aisle. I proceeded to the cashier, who rang up my purchase, bagged it, and accepted payment.

I realized upon arriving home that this was a trojan six pack, as seen below:

IMG_2243.JPG

Clearly, the attacker took care to make his payload look legitimate. What I noticed later was the subtle difference I zoom in on below:

IMG_2245.JPG

IMG_2244.JPG

Yes, the attacker had substituted root beer for real beer.

Needless to say, this was a devious denial of service, which the perpetrators undoubtedly laughed about. However, this was likely not just “for the lulz”. I think this was the work of juvenile attackers, whose motives were to defeat the RBAC (real beer access control) system. Knowing that a purchase of real beer would be scrutinized closely, I believe they exfiltrated the target beer by hiding it in a root beer package.

Mitigations put in place by the system did not catch this error – the cashier/reference monitor allowed the purchase (and, likely, the offsetting purchase of real beer rung up as root beer).

Possible Countermeasures

The keys to this attack were that the trojan was in the right place in the search path, and that it appeared legitimate. Obviously, this location must be readable by all, since items need to be fetched from it. However, allowing items to be placed in it by untrusted users is a definite risk. Technical constraints mean that the obvious countermeasure (allowing only privileged stocking, while permitting "world" fetching) presents serious usability concerns and increases system cost, since the privileged stocker must be paid.

Nonetheless, such countermeasures are in place for certain other items, notably where the cost to the system — as opposed to the user — of an illicit item substitution is quite high.
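For the computing analogue, the same countermeasure amounts to restricting who can write to directories on your search path. Below is a minimal sketch, assuming a POSIX system, that flags $PATH entries writable by users other than their owner; it is illustrative, not a complete audit (it ignores ownership, ACLs, and the sticky bit, among other things).

```python
import os
import stat


def writable_by_others(path: str) -> bool:
    """True if the path is group- or world-writable."""
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return False  # skip missing or unreadable entries
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))


for directory in os.environ.get("PATH", "").split(os.pathsep):
    if directory and writable_by_others(directory):
        print(f"Warning: {directory} is writable by untrusted users")
```

On a sanely configured system this should print nothing; any output is a shelf that untrusted users can stock.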

Lessons Learned

Ultimately, system usability and cost tradeoffs put the onus on the end-user. Before taking a non-idempotent step, inspect the objects closely!

A Mini-Review of “The Practice of Network Security Monitoring”

NSM book cover

Recently the kind folks at No Starch Press sent me a review copy of Rich Bejtlich's newest book, The Practice of Network Security Monitoring, and I can't recommend it enough. It is well worth reading from a theory perspective, but where it really shines is in digging into the nuts and bolts of building an NSM program from the ground up. He has essentially built a full end-to-end tutorial on a broad variety of tools (especially open source ones) that will help with every aspect of the program, from collection to analysis to reporting.

As someone who used to own security monitoring and incident response for various organizations, I found the book a great refresher on the whys and wherefores of building an NSM program, and it was really interesting to see how much the tools have evolved over the last 10 years or so since I was in the trenches with the bits and bytes. This is a great resource regardless of your level of experience, and it will be a great reference work for years to come. Go read it…

A flame about flame

CNET ran a truly ridiculous article last week titled
“Flame can sabotage computers by deleting files, says Symantec”. And if that’s not goofy enough, the post opens with

The virus can not only steal data but disrupt computers by removing critical files, says a Symantec researcher.

ZOMG! A virus that deletes files! Now that is cutting-edge technology! It's shit articles like this that reify the belief that the security industry in general, and the AV industry in particular, are filled with people who are completely out of touch with the rest of the world.

“These guys have the capability to delete everything on the computer,” Thakur said, according to Reuters. “This is not something that is theoretical. It is absolutely there.”

ProTip to Symantec and Reuters: viruses have been doing this since at least the '80s. Are you really so desperate for yet another story that this is the sort of thing you feel is worthy of a press release and a news article? How about you save that time and effort and instead focus on making a product that works better?

Shocking News of the Day: Social Security Numbers Suck

The firm’s annual Banking Identity Safety Scorecard looked at the consumer-security practices of 25 large banks and credit unions. It found that far too many still rely on customers’ Social Security numbers for authentication purposes — for instance, to verify a customer’s identity when he or she wants to speak to a bank representative over the telephone or re-set a password.

All banks in the report used some version of the Social Security number as a means of authenticating the customer, Javelin found. The pervasive use of Social Security numbers was surprising, given the importance of Social Security numbers as a tool for identity theft, said Phil Blank, managing director of security, risk and fraud at Javelin. (“Banks Rely Too Heavily On Social Security Numbers, Report Finds“, Ann Carrns, New York Times)

Previously here: “Social Security Numbers are Worthless as Authenticators” (2009), or “Bad advice on SSNs” (2005).

Emergent Chaos endorses Wim Remes for ISC(2) Board

Today, we are sticking our noses in a place about which we know fairly little: the ISC(2) elections. We’re endorsing a guy we don’t know, Wim Remes, to shake stuff up. Because, really, we ought to care about the biggest and oldest certification in security, but hey, we don’t. And really, that’s a bit of a problem. And it seems that Wim wants to make things better. And so we’re encouraging all four of our CISSP-holding readers to go vote for him, because we think that a whole lotta shaking going on would be, at worst, a not-bad thing.

How’s that for a heartfelt endorsement?

Ok, more seriously. ISC(2) offers up a certification in information security. There's a big infosec community that doesn't take that certification very seriously. That's a problem that I've never had the motivation to try to solve, but Wim does, and I wish him the very best of luck. I think that the CISSP could do substantially better, and the first phase of that is to elect some outsiders to communicate a message that change is needed. What's more, Wim is not a joke candidate, and he's campaigned effectively for the role, getting lots of endorsements from people who are both worth listening to and who take this seriously enough that they wouldn't open with a jokey lead.

And so Emergent Chaos is endorsing Wim, and hoping that some chaos and other worthwhile things start to emerge. You can read his statement on Jimmy Blake’s blog, and vote here.

Emergent Map: Streets of the US

This is really cool. All Streets is a map of the United States made of nothing but roads. A surprisingly accurate map of the country emerges from the chaos of our roads:

Allstreets poster

All Streets consists of 240 million individual road segments. No other features — no outlines, cities, or types of terrain — are marked, yet canyons and mountains emerge as the roads course around them, and sparser webs of road mark less populated areas. More details can be found here, with additional discussion of the previous version here.

In the discussion page, “Fry” writes:

The result is a map made of 240 million segments of road. It’s very difficult to say exactly how many individual streets are involved — since a winding road might consist of dozens or even hundreds of segments — but I’m sure there’s someone deep inside the Census Bureau who knows the exact number.

Which raises a fascinating question: is there a Platonic definition of "a road"? Is the question answerable in the sort of concrete way that I can say "there are 2 pens in my hand"? We tend to believe that things are countable, but as you try to count them at larger scales, the question of what is a discrete thing grows in importance. We see this when map software tells us to "continue on Foo Street." Most drivers don't care about such instructions; the road is the same road, insofar as you can drive in a straight line and be on what seems like the same "stretch of pavement." All that differs is the signs (if there are signs). There's a story that when Bostonians named Washington Street after our first President, they changed the names of all the streets where they cross Washington Street, to draw attention to the great man. Are those different streets? They are likely different segments, but I think that for someone to know the number of streets in the US requires not an ontological analysis of the nature of a street, but rather a purpose-driven one. Who needs to know how many individual streets are in the US? What would they do with that knowledge? Will they count gravel roads? What about new roads under construction, or roads in the process of being torn up? During this weekend's "Carmageddon" closure of the 405 in LA, does the 405 count as a road?

Only with these questions answered could someone answer the question of “how many streets are there?” People often steam-roller over such issues to get to answers when they need them, and that may be ok, depending on what details are flattened. Me, I’ll stick with “a great many,” since it is accurate enough for all my purposes.

So the takeaway for you? Well, there are two. First, even with the seemingly most concrete of questions, definitions matter a lot. When someone gives you big numbers to influence behavior, be sure to understand what they measured and how, and what decisions they made along the way. In information security, a great many people announce seemingly precise and often scary-sounding numbers that, on investigation, mean far different things than they seem to. (Or, more often, far less.)

And second, despite what I wrote above, it’s not the whole country that emerges. It’s the contiguous 48. Again, watch those definitions, especially for what’s not there.

Previously on Emergent Chaos: Steve Coast’s “Map of London” and “Map of Where Tourists Take Pictures.”

Is iTunes 10.3.1 a security update?

Dear Apple,

In the software update, you tell us that we should see http://support.apple.com/kb/HT1222 for the security content of this update:

Itunes10 3 1

However, when I visit http://support.apple.com/kb/HT1222 and search for "10.3", the phrase doesn't appear. Does that imply that there's no security content? Does it mean there is security content, but you're not telling us about it?

Really, I don’t feel like thinking about the latest terms of service today if I don’t have to. I’d prefer not to get your latest features which let you sell more and bundle in your latest ideas about what a music player ought to do. But I’m scared. And so I’d like to ask: Is there security content in iTunes 10.3.1?