The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks which were created, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. Later designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

As Petroski writes:

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

Phishing and Clearances

Apparently, the CISO of US Homeland Security, one Paul Beckman, said:

“Someone who fails every single phishing campaign in the world should not be holding a TS SCI [top secret, sensitive compartmented information—the highest level of security clearance] with the federal government” (Paul Beckman, quoted in Ars Technica)

Now, I’m sure being in the government and trying to defend against phishing attacks is a hard problem, and I don’t want to ignore that real frustration. At the same time, GAO found that the government is having trouble hiring cybersecurity experts, and that was before the SF-86 leak.

Removing people’s clearances is one response. It’s not clear from the text if these are phishing attacks (strictly defined, attempts to get usernames and passwords), or malware attached to the emails.

In each case, there are other fixes. The first would be multi-factor authentication for government logins. This was the subject of a push, and if agencies aren’t at 100%, maybe getting there is better than punitive action. Another fix could be to use an email client which makes spotting phishing emails easier. For example, an email client could display the raw RFC-822 sender address, rather than the friendly display name, for any address the user hasn’t previously sent email to. Agencies could provide password management software with built-in anti-phishing (checking the domain before submitting the password). They could, I presume, do other things which minimize the demands on the human being.
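To make that last idea concrete, here is a minimal sketch, in Python, of the domain check a password manager might perform before filling in a password. The vault contents and helper names are hypothetical, and a real implementation would consult the Public Suffix List rather than the naive two-label heuristic below:

```python
from urllib.parse import urlparse

# Hypothetical vault mapping registered domains to saved credentials.
VAULT = {"example.com": ("alice", "hunter2")}

def registrable_domain(host: str) -> str:
    # Naive heuristic: keep the last two labels, so "login.example.com"
    # becomes "example.com". A real implementation would consult the
    # Public Suffix List to handle names like "example.co.uk".
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def credentials_for(url: str):
    """Return saved credentials only if the URL's domain matches the vault."""
    host = urlparse(url).hostname or ""
    domain = registrable_domain(host)
    if domain not in VAULT:
        # Unknown domain: likely a look-alike phishing site, so refuse
        # to fill rather than asking the user to decide under pressure.
        return None
    return VAULT[domain]

print(credentials_for("https://login.example.com/"))        # fills
print(credentials_for("https://example-login.evil.test/"))  # None
```

The point of the design is that the software, not the person, carries the burden of telling look-alike domains apart.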

When Rob Reeder, Ellen Cram Kowalczyk and I created the “NEAT” guidance for usable security, we didn’t make “Necessary” first just because the acronym is neater that way; we put it first because the poor person is usually overwhelmed, and they deserve to have software make the decisions that software can make. Angela Sasse called this the ‘compliance budget,’ and it’s not a departmental budget, it’s a human one. My understanding is that those who work for the government already have enough things drawing on that budget. Making people anxious that they’ll lose their clearance and have to take a higher-paying private sector job should not be one of them.

Survey for How to Measure Anything In Cybersecurity Risk

This is a survey from Doug Hubbard, author of How To Measure Anything. He is currently writing another book with Richard Seiersen (GM of Cyber Security at GE Healthcare) titled How to Measure Anything in Cybersecurity Risk. As part of the research for this book, they are asking for your assistance as an information security professional by taking the following survey. It is estimated to take 10 to 25 minutes to complete. In return for participating in this survey, you will be invited to attend a webinar for a sneak peek at the book’s content. You will also be given a summary report of this survey. The survey also includes requests for feedback on the survey itself.

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. And I think it’s important that we see to fixing a hole in his argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published A Timeline of Government Data Breaches:
OPM Data Breach

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa. If we apply this sort of approach then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take, for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades and testing, etc.

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better.
  5. [Alternate] Employees don’t like the clearance process.

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently.
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post, we “should declare the causes which impel them to the separation”, or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been causal, but the attacker would have worked around a fix. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.

A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and about whether there are other ones which make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

P0wned! Don’t make the same mistake I did

I fell victim to an interesting attack, which I am recounting here so that others may avoid it.

In a nutshell, I fell victim to a trojan, which the malefactor was able to place in a trusted location in my search path. A wrapper obscured the malicious payload. Additionally, a second line of defense did not catch the substitution. I believe the attackers were not out to harm me, but that this trojan was put in place partially for lulz, and partially to allow a more important attack on the system’s RBAC mechanisms to succeed.

Attack Details

I was attempting to purchase a six pack of New Belgium Rampant IPA, shown immediately below.

[Image: a six pack of New Belgium Rampant IPA]
I obtained the six pack from the canonical location in the system – a reach-in refrigerator in the supermarket’s liquor aisle. I proceeded to the cashier, who rang up my purchase, bagged it, and accepted payment.

I realized upon arriving home that this was a trojan six pack, as seen below:

[Image: the trojan six pack]
Clearly, the attacker took care to make his payload look legitimate. What I noticed later was the subtle difference I zoom in on below:

[Image: close-up of the label, revealing root beer]

Yes, the attacker had substituted root beer for real beer.

Needless to say, this was a devious denial of service, which the perpetrators undoubtedly laughed about. However, this was likely not just “for the lulz”. I think this was the work of juvenile attackers, whose motive was to defeat the RBAC (real beer access control) system. Knowing that a purchase of real beer would be scrutinized closely, I believe they exfiltrated the target beer by hiding it in a root beer package.

Mitigations put in place by the system did not catch this error – the cashier/reference monitor allowed the purchase (and likely, the offsetting real-beer-as-root-beer purchase).

Possible Countermeasures

The keys to this attack were that the trojan was in the right place in the search path, and that it appeared legitimate. Obviously, this location must be readable by all, since items need to be fetched from it. However, allowing items to be placed in it by untrusted users is a definite risk. The obvious countermeasure — allowing only privileged stocking, while permitting “world” fetching — presents serious usability concerns, and increases system cost, since the privileged stocker must be paid.

Nonetheless, such countermeasures are in place for certain other items, notably where the cost to the system — as opposed to the user — of an illicit item substitution is quite high.
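The computing analogue of an untrusted party stocking a trusted shelf is a classic search-path trojan. As a minimal sketch, assuming a POSIX system, here is a hypothetical audit (in Python) of directories on $PATH that anyone could stock:

```python
import os
import stat

def writable_by_world(d: str) -> bool:
    """True if any user could 'stock' this directory with a trojan."""
    mode = os.stat(d).st_mode
    return bool(mode & stat.S_IWOTH)

def audit_search_path() -> list[str]:
    """Flag directories on $PATH where untrusted users can place binaries."""
    risky = []
    for d in os.environ.get("PATH", "").split(os.pathsep):
        if d and os.path.isdir(d) and writable_by_world(d):
            risky.append(d)
    return risky

if __name__ == "__main__":
    for d in audit_search_path():
        print(f"warning: anyone can stock {d}")
```

Privileged stocking corresponds to keeping every directory on the search path writable only by its owner; world-readable but not world-writable is exactly the “anyone may fetch, only the stocker may shelve” policy described above.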

Lessons learned

Ultimately, system usability and cost tradeoffs put the onus on the end-user. Before taking a non-idempotent step, inspect the objects closely!

A Mini-Review of “The Practice of Network Security Monitoring”

Recently the kind folks at No Starch Press sent me a review copy of Rich Bejtlich’s newest book, The Practice of Network Security Monitoring, and I can’t recommend it enough. It is well worth reading from a theory perspective, but where it really shines is digging into the nuts and bolts of building an NSM program from the ground up. He has essentially built a full end-to-end tutorial on a broad variety of tools (especially open source ones) that will help with every aspect of the program, from collection to analysis to reporting.

As someone who used to own security monitoring and incident response for various organizations, I found the book a great refresher on the whys and wherefores of building an NSM program, and it was really interesting to see how much the tools have evolved over the last 10 years or so since I was in the trenches with the bits and bytes. This is a great resource regardless of your level of experience, though, and will be a great reference work for years to come. Go read it…

A flame about Flame

CNET ran a truly ridiculous article last week titled “Flame can sabotage computers by deleting files, says Symantec”. And if that’s not goofy enough, the post opens with

The virus can not only steal data but disrupt computers by removing critical files, says a Symantec researcher.

ZOMG! A virus that deletes files! Now that is cutting-edge technology! It’s shit articles like this that reify the belief that the security industry in general, and the AV industry in specific, is filled with people who are completely out of touch with the rest of the world.

“These guys have the capability to delete everything on the computer,” Thakur said, according to Reuters. “This is not something that is theoretical. It is absolutely there.”

ProTip to Symantec and Reuters: viruses have been doing this since at least the ’80s. Are you really so desperate for yet another story that this is the sort of thing you feel is worthy of a press release and news article? How about you save that time and effort and instead focus on making a product that works better?

Shocking News of the Day: Social Security Numbers Suck

The firm’s annual Banking Identity Safety Scorecard looked at the consumer-security practices of 25 large banks and credit unions. It found that far too many still rely on customers’ Social Security numbers for authentication purposes — for instance, to verify a customer’s identity when he or she wants to speak to a bank representative over the telephone or re-set a password.

All banks in the report used some version of the Social Security number as a means of authenticating the customer, Javelin found. The pervasive use of Social Security numbers was surprising, given the importance of Social Security numbers as a tool for identity theft, said Phil Blank, managing director of security, risk and fraud at Javelin. (“Banks Rely Too Heavily On Social Security Numbers, Report Finds“, Ann Carrns, New York Times)

Previously here: “Social Security Numbers are Worthless as Authenticators” (2009), or “Bad advice on SSNs” (2005).

Emergent Chaos endorses Wim Remes for ISC(2) Board

Today, we are sticking our noses in a place about which we know fairly little: the ISC(2) elections. We’re endorsing a guy we don’t know, Wim Remes, to shake stuff up. Because, really, we ought to care about the biggest and oldest certification in security, but hey, we don’t. And really, that’s a bit of a problem. And it seems that Wim wants to make things better. And so we’re encouraging all four of our CISSP-holding readers to go vote for him, because we think that a whole lotta shaking going on would be, at worst, a not-bad thing.

How’s that for a heartfelt endorsement?

Ok, more seriously. ISC(2) offers up a certification in information security. There’s a big infosec community that doesn’t take that certification very seriously. That’s a problem that I’ve never had a motivation to try to solve, but Wim does, and I wish him the very best of luck. I think that the CISSP could do substantially better, and the first phase of that is to elect some outsiders to communicate a message that change is needed. What’s more, Wim is not a joke candidate, and he’s campaigned effectively for the role, getting lots of endorsements from people who are both worth listening to and who take this seriously enough that they wouldn’t open with a jokey lead.

And so Emergent Chaos is endorsing Wim, and hoping that some chaos and other worthwhile things start to emerge. You can read his statement on Jimmy Blake’s blog, and vote here.