Security Lessons from C-3PO

[Image: C-3PO telling Han Solo the odds]

C-3PO: Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.

Han Solo: Never tell me the odds.

I was planning to start this with a C-3PO quote, and then move to a discussion of risk and risk taking. But I had forgotten just how rich a vein George Lucas tapped into with 3PO’s lines in The Empire Strikes Back. So I’m going to talk about his performance prior to being encouraged to find a non-front-line role with the Rebellion.

In case you need a refresher on the plot, having just about run out of options, Han Solo decides to take the known, high risk of flying into an asteroid field. When he does, 3PO interrupts to provide absolutely useless information. There’s nothing about how to mitigate the risk (except surrendering to the Empire). There’s nothing about alternatives. Then 3PO pipes up to inform people that he was previously wrong:

C-3PO: Artoo says that the chances of survival are 725 to 1. Actually Artoo has been known to make mistakes… from time to time… Oh dear…

I have to ask: How useless is that? “My first estimate was off by a factor of 5; why should you trust this new one?”

C-3PO: I really don’t see how that is going to help! Surrender is a perfectly acceptable alternative in extreme circumstances! The Empire may be gracious enough to… [Han signals to Leia, who shuts 3PO down.]

Most of the time, being shut down in a meeting isn’t this extreme. But there’s a point in a discussion, especially in high-pressure situations, where the best contribution is silence. There’s a point at which talking about the wrong thing at the wrong time can cost credibility that you’d need later. And while the echo in the dialogue is for comic effect, the response certainly contains a lesson for us all:

C-3PO: The odds of successfully surviving an attack on an Imperial Star Destroyer are approximately…

Leia: Shut up!

And the eventual outcome:

C-3PO: Sir, if I may venture an opinion…

Han Solo: I’m not really interested in your opinion, 3PO.

Does C-3PO resemble any CSOs you know? Any meetings you’ve been in? Successful business people are excellent at thinking about risk. Everything from launching a business to hiring employees to launching a new product line involves risk tradeoffs. The good business people either balance those risks well, or they transfer them away in some way (ideally, ethically). What they don’t want or need is some squeaky-voiced robot telling them that they don’t understand risk.

So don’t be C-3PO. Provide useful input at useful times, with useful options.

Originally appeared in Dark Reading as “Security Lessons from C-3PO, Former CSO of the Millennium Falcon,” part of a series I’m doing there, “security lessons from…”. So far in the series, lessons from: my car mechanic, my doctor, The Gluten Lie, and my stock broker.

Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department

In “The Galactic Empire Has Terrible Cybersecurity,” Alex Grigsby looks at a number of high-profile failures, covered in “A New Hope” and the rest of the Star Wars canon.

Unfortunately, the approach he takes to the Galactic Empire obscures the larger, more dangerous issue: its cybersecurity culture. There are two errors in Grigsby’s analysis, and they are worth examining. As Yoda once said, “Much to learn you still have.”

Grigsby’s first assumption is that more controls lead to better security. But controls need to be deployed judiciously to allow operations to flow. For example, when you have Stormtroopers patrolling the Death Star, adding layers of access controls may in fact hamper operations. The shuttle with outdated keys in Return of the Jedi shows that security issues are rampant and that officers are used to escalations. Security processes that are full of routine escalations desensitize people: they get accustomed to saying OK, and are thus unlikely to give their full attention to each escalation.

The second issue is that Grigsby focuses on a few flaws that have massive impact. The lack of encryption and the problematic location of the Death Star’s exhaust port matter not so much as one-offs; rather, they reveal the larger security culture at play in the Empire.

There is a singular cause for these failures: Darth Vader and his habit of Force-choking those who have failed him. The culture of terror he fosters prevents those under his command from learning from their mistakes and ensures that opportunities for learning will be missed; finger-pointing and blame-passing will rule the day. Complaints to the Empire’s human resources department will go unanswered, and those who made the complaints will probably go missing.

This is the precise opposite of the culture created by Etsy, the online marketplace for handmade and vintage items (including these Star Wars cufflinks). Etsy’s engineers engage in what they call “Blameless Post-Mortems and Just Culture,” in which people feel safe coming clean about mistakes so that they can learn from them. After a problem, engineers are encouraged to write up what happened, why it happened, and what they learned, and to share that knowledge widely. Executives are committed to avoiding blame and finger-pointing.

The Empire needs a better way to deal with its mistakes, and so do we. Fortunately, we don’t have to fear Lord Vader and can learn from things that have gone wrong.

For example, the DatalossDB, a project of the non-profit Open Security Foundation, has tracked thousands of incidents involving the loss, theft, or exposure of personally identifiable information since 2008. The Mercatus Center has analyzed Government Accountability Office data and found upwards of 60,000 incidents per year for the last two years. Sadly, while we know of these incidents, including what sorts of data were taken and how many victims there were, in many cases we do not know what happened in enough detail to address the problem. In the first years of public breach reporting (roughly starting in 2004), there was a raft of breaches associated with stolen computers, most of them laptops. All commercial operating systems now ship with full-disk encryption software as a result. But that may be the only lesson broadly learned so far.

It’s easy to focus on spectacular incidents like the destruction of a Death Star. It’s easy to look to the mythic aspects of the story. It’s harder to understand what went wrong. Was there an architect who brought up the unshielded thermal exhaust port vulnerability? What happened to the engineering change request? What can we learn from that? Did an intrusion detection analyst notice that unauthorized devices were plugged into the network? Were they overwhelmed by a rash of new devices as the new facility was staffed up?

Even in the very largest breaches, there is often a paucity of information about what went wrong. Sometimes no one wants to know. Sometimes it devolves into finger-pointing. Sometimes whatever went wrong happened long enough ago that there are no logs. The practice of “Five Whys” analysis is rare.
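For readers who haven’t seen the technique, here is a minimal sketch of what a “Five Whys” chain might look like for the exhaust port failure, written as Python data so it could feed a post-mortem write-up. Every question and answer below is my own hypothetical reconstruction for illustration; it isn’t canon, and it isn’t drawn from any real post-mortem.

```python
# Hypothetical "Five Whys" chain for the Death Star exhaust port failure.
# The questions and answers are invented for illustration only.
five_whys = [
    ("Why was the Death Star destroyed?",
     "A proton torpedo entered an unshielded thermal exhaust port."),
    ("Why was the exhaust port unshielded?",
     "The shielding requirement was waived late in construction."),
    ("Why was the waiver granted?",
     "The risk review flagged it, but the escalation was approved as routine."),
    ("Why was the escalation treated as routine?",
     "Reviewers were desensitized by a steady stream of low-value escalations."),
    ("Why were reviewers desensitized?",
     "Raising concerns was career-limiting; no one wanted to be Force-choked."),
]

# Print the chain as a simple post-mortem outline.
for depth, (question, answer) in enumerate(five_whys, start=1):
    print(f"{depth}. {question}\n   -> {answer}")
```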

And when, against all odds, an organization digs in and asks what happened, the lawyers are often there to announce that under no circumstances should the analysis be shown to anyone. After all, there will be lawsuits. (While I am not a lawyer, it seems to me that such lawsuits happen regardless of the existence or availability of a post-mortem report, and a good analysis of what went wrong might be seen as evidence of a mature, learning practice.)

What does not happen, given our fear of lawsuits and other phantom menaces, is learning from mistakes. And so R2-D2 plugs into every USB port in sight, and does so for more than twenty years.

We know from a variety of fields, including aircraft safety, nuclear safety, and medical safety, that high degrees of safety and security are an outcome of a just culture and a willingness to discuss what’s gone wrong. Attention to “near misses” allows organizations to learn faster.

This is what the National Transportation Safety Board does when a plane crashes or a train derails.

We need to get better at post-mortems for cybersecurity. We need to publish them so we can learn the analysis methods others are developing. We need to publish them so we can assess if the conclusions are credible. We need to publish them so we can perform statistical analyses. We need to publish them so that we can do science.

There are many reasons to prevaricate. The First Order — the bad guys in The Force Awakens — can’t afford another Death Star, and we cannot afford to keep doing what we’ve been doing and hoping it will magically get better.

It’s not our only hope, but it certainly would be a new hope.

(Originally appeared on the Council on Foreign Relations Net Politics blog.)

Governance Lessons from the Death Star Architect

I had not seen this excellent presentation by the engineer who built the Death Star’s exhaust system.

In it, he discusses the need to disperse energy from a battle station with the power draw to destroy planets, and the engineering goals he had to balance.

I’m reminded again of “The Evolution of Useful Things” and how it applies to security. Security engineering involves making tradeoffs, and those tradeoffs sometimes have unfortunate results. Threat modeling is a family of techniques for thinking about the tradeoffs and what’s likely to go wrong. Doing it well means that things are less likely to go wrong, not that nothing ever will.

It’s easy, after the fact, to point out the problem with the exhaust ports. But as your risk management governance improves, you get to the point of asking “what did we know when we made these decisions?” and “could we have made these decisions better?”

At the engineering level, you want to develop a cybersecurity culture that’s open to discussing failures, not one in which you have to fear being force-choked. (More on that topic in my guest post at the Council on Foreign Relations, “Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department.”)

More broadly, organizational leadership needs to focus on whether appropriate policy and governance are in place. That sounds jargony, so let me unpack it a little. Policy is what you intend to do, such as performing risk analysis that lets executives make good risk management decisions about the competing aspects of the business. Is a PHP vuln acceptable? If it happened to be in The Force Awakens site this week, taking that site down would have been an expensive choice. It’s tempting to ask, “What geek would do more than add a comment?” But that gets into questions of attacker motivation, and it’s easy to get those wrong. Even Star Wars has critics (a one-minute video, worth sharing for the reveal at the end).

If policy is about knowing what you intend to do in a way that lets people do it, governance is about making sure it happens properly. There are all sorts of reasons it’s hard to map technology risk to business risk. Tech risk involves the bad things that might happen, and the interesting ways technologies are tightly woven together make it hard to say, a priori, that a technical issue with an exhaust port, or an HVAC system with a bad password, might lead to a bad business impact.

Exhaust is likely to generate turbulence in an exhaust shaft, and such turbulence may act as a compensating control for a lack of port shielding. That is, whatever substrate carries heat will do so unevenly, and in a shaft the size of a womp rat, that turbulence will batter any projectile into exploding somewhere less harmful.

A good policy will ask for such analysis; a good governance process will ask whether it happened and, after a failure, whether the failure is likely to happen again. We need to help executives form the questions, and we need to do a better job of supplying answers.

What Good Is Threat Intelligence Going to Do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker’s post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. But I think it’s important that we fix a hole in his argument.

So… pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how any analysis of the Battle of Yavin could fail to mention the crucial role played by Obi-Wan Kenobi, and the second is what the heck you do with the data. (And a third, about the Diamond Model itself: how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is: okay, we have a model of what went wrong, so what do we do with it? The Death Star has been destroyed; what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?
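Since I’m asking how the model works, here is a minimal sketch, in Python, of how the Battle of Yavin lands on the Diamond Model’s four core features: adversary, capability, infrastructure, and victim. The feature names are the model’s; the class, the labels, and the classification choices are my own illustration (and are exactly the choices I’m puzzled by), not a reproduction of Wade’s diamond.

```python
# A minimal sketch of the Diamond Model's four core features (adversary,
# capability, infrastructure, victim), populated with the Battle of Yavin.
# The labels and classifications below are illustrative, not Wade's analysis.
from dataclasses import dataclass, field


@dataclass
class DiamondEvent:
    adversary: str                                            # who acted
    victim: str                                               # who or what was targeted
    capabilities: list[str] = field(default_factory=list)     # tools and tradecraft
    infrastructure: list[str] = field(default_factory=list)   # what delivered the capability


battle_of_yavin = DiamondEvent(
    adversary="Jedi Panda (Luke Skywalker)",
    victim="Death Star",
    capabilities=["lightsaber", "The Force"],    # why is The Force a capability?
    infrastructure=["X-Wing", "the Fortressa"],  # and why is an X-Wing infrastructure?
)

# The model organizes the known facts neatly, but nothing in this record
# captures Obi-Wan's sabotage earlier that day, or tells an Imperial
# analyst what to do next.
print(battle_of_yavin)
```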

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling, and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting personal might lead an Imperial commander to be overly focused on Skywalker and to miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element that gets missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common, so common that the military has a name for the phenomenon: “fighting the last war.” This analysis focuses on a limited set of actions, the ones that succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations that get interesting information but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job of avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor; I’m saying I don’t know and would like to see some empirical work on the subject.)

Seattle event: Ada’s Books

[Image: event poster for Adam Shostack’s threat modeling talk at Ada’s Books]

For Star Wars day, I’m happy to share this event poster for my talk at Ada’s Books in Seattle: “Technical Presentation: Adam Shostack shares Threat Modeling Lessons with Star Wars.”

This will be a less technical talk with plenty of discussion and interactivity, drawing on some of the content from “Security Lessons from Star Wars,” adapted for a more general audience.

Why the Star Wars Prequels Sucked

It is a truism that the Star Wars prequels sucked. (Elsewhere, I’ve commented that the franchise being sold to Disney means someone can finally tell the tragic story of Anakin Skywalker’s seduction by the dark side.)

But the issue of exactly why they sucked is complex and layered, and most of us prefer not to consider it too deeply. Fortunately, you no longer have to. You can simply get “Why the Star Wars Prequels Sucked, and Why It Matters,” a short “Polemic on Aesthetics, Ethics and Politics. With Lightsabers.”

Really, what else do you need to know?

An example? OK: the diner scene, and how it compares to the cantina scene. The cantina exudes otherness and menace. The diner looks like it was filmed in the 1950s and then had a few weird things ‘shopped in. The scene undercuts the world that Star Wars established. Or the casual tossing in of Anakin’s virgin birth, and how, after tying into one of the most enduring stories in Western culture, the subject is never referred to again.

Or the utter lack of consequence of anything in the stories, since we already know how they’ll come out, and how, by focusing on characters whose fates we know, Lucas drains any dramatic tension out of the story. The list goes on and on, and if you want to know why you hated the prequels so much, this is a short and easy read, and highly worthwhile.

Oh, and you’ll learn how Lando Calrissian is Faust. So go buy it already.

One last thing. Delano Lopez? That’s a name I hadn’t heard in a very long time. But he and I went to school together.

Systems Not Sith: Organizational Lessons From Star Wars

In Star Wars, the Empire is presented as a monolith. Storm Troopers, TIE Fighters and even Star Destroyers are supposedly just indistinguishable cogs in a massive military machine, single-mindedly pursuing a common goal. This is, of course, a façade – like all humans, the soldiers and Officers of the Imperial Military will each have their own interests and loyalties. The Army is going to compete with the Navy, the Fighter jocks are going to compete with the Star Destroyer Captains, and the AT-AT crews are going to compete with Storm Troopers.

Read the whole thing at “Overthinking It”: “Systems, Not Sith: How Inter-service Rivalries Doomed the Galactic Empire.” And if you missed it, my take on security lessons from Star Wars.

Thanks to Bruce for the pointer.

My AusCert Gala talk

At AusCert, I had the privilege of sharing the gala dinner stage with LaserMan and Axis of Awesome, and of talking about a few security lessons from Star Wars.

I forgot to mention onstage that I’ve actually illustrated all eight of the Saltzer and Schroeder principles and collected them on a single page. That is “The Security Principles of Saltzer and Schroeder, Illustrated with Scenes from Star Wars.” Enjoy!

Saturn’s Moon Enceladus

NASA claims that:

At least four distinct plumes of water ice spew out from the south polar region of Saturn’s moon Enceladus in this dramatically illuminated image.

Light reflected off Saturn is illuminating the surface of the moon while the sun, almost directly behind Enceladus, is backlighting the plumes. See Bursting at the Seams to learn more about Enceladus and its plumes.


But they can’t fool me. That’s no moon, that’s a battle station.