Security Lessons from C-3PO

[Image: C-3PO telling Han Solo the odds]

C-3PO: Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.

Han Solo: Never tell me the odds.

I was planning to start this with a C-3PO quote, and then move to a discussion of risk and risk taking. But I had forgotten just how rich a vein George Lucas tapped into with 3PO’s lines in The Empire Strikes Back. So I’m going to talk about his performance prior to being encouraged to find a non-front-line role with the Rebellion.

In case you need a refresher on the plot, having just about run out of options, Han Solo decides to take the known, high risk of flying into an asteroid field. When he does, 3PO interrupts to provide absolutely useless information. There’s nothing about how to mitigate the risk (except surrendering to the Empire). There’s nothing about alternatives. Then 3PO pipes up to inform people that he was previously wrong:

C-3PO: Artoo says that the chances of survival are 725 to 1. Actually Artoo has been known to make mistakes… from time to time… Oh dear…

I have to ask: How useless is that? “My first estimate was off by a factor of five, so you should trust this new one?”

C-3PO: I really don’t see how that is going to help! Surrender is a perfectly acceptable alternative in extreme circumstances! The Empire may be gracious enough to… [Han signals to Leia, who shuts 3PO down.]

Most of the time, being shut down in a meeting isn’t this extreme. But there’s a point in a discussion, especially in high-pressure situations, where the best contribution is silence. There’s a point at which talking about the wrong thing at the wrong time can cost credibility that you’d need later. And while the echo in the dialogue is for comic effect, the response certainly contains a lesson for us all:

C-3PO: The odds of successfully surviving an attack on an Imperial Star Destroyer are approximately…

Leia: Shut up!

And the eventual outcome:

C-3PO: Sir, If I may venture an opinion…

Han Solo: I’m not really interested in your opinion, 3PO.

Does C-3PO resemble any CSOs you know? Any meetings you’ve been in? Successful business people are excellent at thinking about risk. Everything from launching a business to hiring employees to launching a new product line involves risk tradeoffs. The good business people either balance those risks well, or they transfer them away in some way (ideally, ethically). What they don’t want or need is some squeaky-voiced robot telling them that they don’t understand risk.

So don’t be C-3PO. Provide useful input at useful times, with useful options.

Originally appeared in Dark Reading as “Security Lessons from C-3PO, Former CSO of the Millennium Falcon,” part of a “security lessons from…” series I’m doing there. So far the series has covered lessons from my car mechanic, my doctor, The Gluten Lie, and my stock broker.

RSA Planning

Have a survival kit: Ricola, Purell, Gatorade, Advil, and antacids can be brought or bought on site.

Favorite talk (not by me): I look forward to Sounil Yu’s talk on “Understanding the Security Vendor Landscape Using the Cyber Defense Matrix.” I’ve seen an earlier version of this, and like the model he’s building a great deal.

Favorite talk I’m giving: “Securing the ‘Weakest Link’.”

A lot of guides, like this one, aren’t very comprehensive or strategic. By contrast, John Masserini’s “A CISO’s Guide to RSA Conference 2016” is a very solid overview if you’re new, or if you’re not getting good value from the conference.

While you’re there, keep notes for a trip report. Sending a trip report helps you remember what happened, helps your boss understand why they spent the money, and helps justify your next trip. I like trip reports that start with a summary, go directly to action items, then a list of planned meetings and notes on them, followed by detailed and organized notes.

Also while you’re there, remember it’s infosec, and drama is common. Remember the drama triangle and how to avoid it.

Secure Code is Hard, Let’s Make it Harder!

I was confused about why Dan Kaminsky would say CVE-2015-7547 (a bug in glibc’s DNS handling) creates network attack surface for sudo. Chris Rohlf kindly sorted me out by mentioning that there’s now a -host option to sudo, of which I was unaware.
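If you want to check for yourself whether a given sudo build actually touches the resolver when handed a host, something like the following should surface any DNS traffic. This is only a sketch: the hostname is hypothetical, and -h/--host support depends on your sudo version and configured plugins.

# build01.example.com is a made-up host; -h/--host requires a reasonably recent sudo
% strace -f -e trace=network sudo -h build01.example.com id

Any DNS lookups in that trace are, on a glibc system, roughly the getaddrinfo() path where CVE-2015-7547 lives.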

I had not looked at sudo in depth for probably twenty years, and I’m shocked to discover that it has a -e option to invoke an editor, a -p option to process format string bugs, and a -a option to allow the invoker to select the authentication type (?!?!)

It’s now been a full twenty years that I’ve been professionally involved in analyzing source code. (These Security Code Review Guidelines were obviously not started in August.) We know that all code has bugs, and more code is strongly correlated with more bugs. I first saw this in the intro to the first edition of Cheswick and Bellovin. I feel a little bit like yelling “you kids get off my lawn,” but really, the Unix philosophy of “do one thing well” was successful for a reason. The goal of sudo is to let the user go through a privilege boundary. It should be insanely simple. [Updated to add: Justin Cormack mentions that OpenBSD went from sudo to doas on this basis.]

It’s not. Not that ssh is simple either, but it isolates complexity, and helps us model attack surface more simply.
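To make the contrast concrete, here’s roughly what a “let alice administer the box” policy looks like in each. This is a sketch only: exact syntax varies by version, and the sudoers line is deliberately the most minimal form.

# /etc/doas.conf (OpenBSD doas)
permit persist alice as root

# /etc/sudoers (one of many possible forms)
alice ALL=(ALL) ALL

doas gets by with a handful of options and one small config grammar; sudoers is a small language of its own, and that extra surface is exactly what makes the code harder to analyze.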

Some of the new options make sense, and support security feature sets not present previously. Some are just dumb.

As I wrote this, Dan popped up to say that it also parses /etc/hostname to help it log. Again, do one thing well. Syslog should know what host it’s on, what host it’s transmitting from, and what host it’s receiving from.

It’s very, very hard to make code secure. When we add in insane options to code, we make it even harder. Sometimes, other people ask us to make the code less secure, and while I’ve already said what I want to say about the FBI asking Apple to fix their mistake by writing new code, this is another example of shooting ourselves in our feet.

Please stop making it harder.

[Update: related: “Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation,” covered by The Morning Paper, examines an approach to re-implementing TLS. Thanks to Steve Bellovin for the pointer.]

Sneak peeks at my new startup at RSA

[Image: Confusion]

Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long term (like a home) or liquid investments (like stocks and bonds) and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of control investments, and we’re building the first.

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, leave a comment, or reach out over LinkedIn.

The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

[Image: The Evolution of Useful Things book cover]

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks that were created, each designed with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. Later designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. And I think it’s important that we fix a hole in the argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.


Most of my issues boil down to two questions. The first is how any analysis of the Battle of Yavin could fail to mention the crucial role played by Obi-Wan Kenobi; the second is what the heck you do with the data. (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is the Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is: okay, we have a model of what went wrong; what do we do with it? The Death Star has been destroyed, so what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a name for the phenomenon: ‘fighting the last war.’ This analysis focuses on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

Adam’s new startup

A conversation with an old friend reminded me that there may be folks who follow this blog, but not the New School blog.

Over there, I’ve posted “Improving Security Effectiveness” about leaving Microsoft to work on my new company:

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

and about a job opening, “Seeking a technical leader for my new company:”

We have a new way to measure security effectiveness, and want someone who’ll drive delivery of the technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in the building of the company. The right person will understand that such a “green field” represents opportunity, and that we’ll have to build infrastructure as we grow.

This person might be a CTO, they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management and reports.

An Infosec lesson from the “Worst Play Call Ever”

It didn’t take long for the Seahawks’ game-losing pass to get a label.


But as Ed Felten explains, there’s actually some logic to it, and one of his commenters (Chris) points out that Marshawn Lynch scored in only one of his five runs from the one-yard line this season. So, perhaps in a game in which the Patriots had, to that point, no interceptions, it was worth the extra play before the clock ran out.

We can all see the outcome, and we judge the decision, post facto, on that outcome.

[Image: “Worst play call ever”]

In security, we almost never see an outcome so closely tied to a decision. As Jay Jacobs has pointed out, we live in a wicked environment. Unfortunately, we’re quick to snap to judgement when we see a bad outcome. That makes learning harder. Also, we don’t usually get a chance to see the logic behind a play and assess it.

If only we had a way to shorten those feedback loops, then maybe we could assess what the worst play call in infosec might be.

And in fact, despite my use of snarky linkage, I don’t think we know enough to judge Sony or ChoicePoint. The decisions made by Spaltro at Sony are not unusual. We hear them all the time in security. The outcome at Sony is highly visible, but is it the norm, or is it an outlier? I don’t think we know enough to know the answer.

Hindsight is 20/20 in football. It’s easy to focus in on a single decision. But the lesson from Moneyball, and the lesson from Pete Carroll, is to trust your system. Asked about the call, Carroll’s answer was, “Really, with no second thoughts or hesitation in that at all.” He has a system, and it got the Seahawks to the very final seconds of the game. And then.

One day, we’ll be able to tell management “our systems worked, and we hit really bad luck.”

[Please keep comments civil, like you always do here.]

iOS Subject Key Identifier?

I’m having a problem where the “key identifier” displayed on my iOS device does not match the key fingerprint on my server. In particular, I run:


% openssl x509 -in keyfile.pem -fingerprint -sha1

and I get a 20-byte hash. I also have a 20-byte hash on my phone, but it is not that hash value. I am left wondering if this is a crypto usability fail, or an attack.

Should I expect the output of that openssl invocation to match certificate details on iOS, or is that a different hash? What options to openssl should produce the result I see on my phone?

[Update: it also does not match the output, or a trivial subset of the output, of

% openssl x509 -in keyfile.pem -fingerprint -sha256
(or)

% openssl x509 -in keyfile.pem -fingerprint -sha512

]

[Update 2: iOS displays the “X509v3 Subject Key Identifier”, and you can ask openssl for that via -text, e.g., openssl x509 -in pubkey.pem -text. Thanks to Ryan Sleevi for pointing me down that path.]
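For reference, a quick way to pull out just that field from the PEM file (assuming your grep supports -A):

# print the Subject Key Identifier extension and the hex value on the line after it
% openssl x509 -in pubkey.pem -noout -text | grep -A1 "Subject Key Identifier"

The hex string on the following line is what iOS shows. Per RFC 5280, the subject key identifier is typically derived from a hash of the public key alone, not of the whole certificate, which is why the -fingerprint output will never match it.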

Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with “think like an attacker”: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now, I’ve been one-upped, and, depending on audience, I have a new line to use.

Last week, on Veracode’s blog, Pete Chestna provided the perfect flip of “think like an attacker” to re-frame problems for security people: “think like a developer.” If you, oh great security guru, cannot think like a developer, for heaven’s sake, stop asking developers to think like attackers.