Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department

In “The Galactic Empire Has Terrible Cybersecurity,” Alex Grigsby looks at a number of the Empire’s high-profile failures, as depicted in “A New Hope” and the rest of the Star Wars canon.

Unfortunately, the approach he takes to the Galactic Empire obscures the larger, more dangerous issue: its cybersecurity culture. There are two errors in Grigsby’s analysis, and they are worth examining. As Yoda once said, “Much to learn you still have.”

Grigsby’s first assumption is that more controls lead to better security. But controls need to be deployed judiciously to allow operations to flow. For example, when you have Stormtroopers patrolling the Death Star, adding layers of access controls may in fact hamper operations. The shuttle with outdated access codes in Return of the Jedi shows that security issues are rampant, and that officers are used to escalations. Security processes full of routine escalations desensitize people: they get accustomed to saying OK, and are thus unlikely to give their full attention to each escalation.

The second issue is that Grigsby focuses on a few flaws that have massive impact. The lack of encryption and the problematic location of the Death Star’s exhaust port matter not so much as one-offs, but as symptoms of the larger security culture at play in the Empire.

There is a singular cause for these failures: Darth Vader, and his habit of Force-choking those who have failed him. The culture of terror he fosters prevents those under his command from learning from their mistakes and ensures that opportunities for learning will be missed; finger-pointing and blame-passing will rule the day. Complaints to the Empire’s human resources department will go unanswered, and those who made the complaints will probably go missing.

This is the precise opposite of the culture created by Etsy, the online marketplace for handmade and vintage items (including these Star Wars cufflinks). Etsy’s engineers engage in what they call “Blameless Post-Mortems and Just Culture,” in which people feel safe coming clean about mistakes so that they can learn from them. After a problem, engineers are encouraged to write up what happened, why it happened, and what they learned, and to share that knowledge widely. Executives are committed to avoiding blame and finger-pointing.
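To make that concrete, here is a rough sketch of what such a write-up might capture. The record structure and the incident are my own illustration, not Etsy’s actual template:

```python
# A minimal sketch of a blameless post-mortem record. The fields are
# hypothetical; they simply mirror the questions above: what happened,
# why it happened, and what was learned.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PostMortem:
    title: str
    what_happened: str          # a factual timeline, not a hunt for a culprit
    why_it_happened: List[str]  # contributing causes, in the engineers' own words
    what_we_learned: List[str]  # lessons, stated without blame
    remediations: List[str] = field(default_factory=list)  # follow-up actions


report = PostMortem(
    title="Unauthorized droid access to a data port",
    what_happened="An astromech droid plugged into an unattended data port "
                  "and retrieved facility schematics.",
    why_it_happened=["Data ports accept any device without authentication",
                     "No alerting on bulk schematic downloads"],
    what_we_learned=["Physical ports are part of the attack surface",
                     "Detection matters as much as prevention"],
    remediations=["Require device authentication on data ports"],
)

# Sharing widely is the point: publish the write-up, don't bury it.
print(report.title)
for lesson in report.what_we_learned:
    print(" -", lesson)
```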

The Empire needs a better way to deal with its mistakes, and so do we. Fortunately, we don’t have to fear Lord Vader and can learn from things that have gone wrong.

For example, the DatalossDB, a project of the non-profit Open Security Foundation, has tracked thousands of incidents involving the loss, theft, or exposure of personally identifiable information since 2008. The Mercatus Center has analyzed Government Accountability Office data and found upwards of 60,000 incidents per year for each of the last two years. Sadly, while we know of these incidents, including what sorts of data were taken and how many victims there were, in many cases we do not know what happened in enough detail to address the problem. In the first years of public breach reporting (roughly starting in 2004), there was a raft of breaches associated with stolen computers, most of them laptops. As a result, all commercial operating systems now ship with full-disk encryption software. But that may be the only lesson broadly learned so far.

It’s easy to focus on spectacular incidents like the destruction of a Death Star. It’s easy to look to the mythic aspects of the story. It’s harder to understand what went wrong. Was there an architect who brought up the unshielded thermal exhaust port vulnerability? What happened to the engineering change request? What can we learn from that? Did an intrusion detection analyst notice that unauthorized devices were plugged into the network? Were they overwhelmed by a rash of new devices as the new facility was staffed up?

Even given the very largest breaches, there is often a paucity of information about what went wrong. Sometimes no one wants to know. Sometimes it devolves into finger-pointing. Sometimes whatever went wrong happened long enough ago that there are no logs. The practice of “Five Whys” analysis is rare.
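For readers unfamiliar with the technique, here is a toy “Five Whys” chain for an invented incident; the idea is that each answer prompts the next question, until a systemic cause surfaces rather than an individual to blame:

```python
# A toy "Five Whys" walk-through for a hypothetical breach. The specifics
# are invented for illustration; the technique is simply to keep asking
# "why?" until you reach a systemic cause rather than a person to blame.
five_whys = [
    ("Why was customer data exposed?",
     "An attacker reached the database from the vendor network."),
    ("Why could the vendor network reach the database?",
     "The network was flat; nothing segmented vendor access."),
    ("Why was the network flat?",
     "Segmentation was planned, then deprioritized during a rushed launch."),
    ("Why was it deprioritized?",
     "No one owned the risk decision, so it defaulted to 'later'."),
    ("Why did no one own it?",
     "Risk acceptance has no named owner in the process."),  # systemic cause
]

for question, answer in five_whys:
    print(f"{question}\n  -> {answer}")
```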

And when, against all odds, an organization digs in and asks what happened, the lawyers are often there to announce that under no circumstances should the analysis be shown to anyone. After all, there will be lawsuits. (While I am not a lawyer, it seems to me that such lawsuits happen regardless of the existence or availability of a post-mortem report, and a good analysis of what went wrong might be seen as evidence of a mature, learning practice.)

What does not happen, given our fear of lawsuits and other phantom menaces, is learning from mistakes. And so R2-D2 plugs into every USB port in sight, and does so for more than twenty years.

We know from a variety of fields, including aircraft safety, nuclear safety, and medical safety, that high degrees of safety and security are an outcome of a just culture and a willingness to discuss what’s gone wrong. Attention to “near misses” allows organizations to learn faster.

This is what the National Transportation Safety Board does when a plane crashes or a train derails.

We need to get better at post-mortems for cybersecurity. We need to publish them so we can learn the analysis methods others are developing. We need to publish them so we can assess if the conclusions are credible. We need to publish them so we can perform statistical analyses. We need to publish them so that we can do science.

There are many reasons to prevaricate. The First Order — the bad guys in The Force Awakens — can’t afford another Death Star, and we cannot afford to keep doing what we’ve been doing and hoping it will magically get better.

It’s not our only hope, but it certainly would be a new hope.

(Originally appeared on the Council on Foreign Relations Net Politics blog.)

Governance Lessons from the Death Star Architect

I had not seen this excellent presentation by the engineer who built the Death Star’s exhaust system.

In it, he discusses the need to disperse energy from a battle station with the power draw to destroy planets, and the engineering goals he had to balance.

I’m reminded again of “The Evolution of Useful Things” and how it applies to security. Security engineering involves making tradeoffs, and those tradeoffs sometimes have unfortunate results. Threat modeling is a family of techniques for thinking about the tradeoffs and what’s likely to go wrong. Doing it well means that things are less likely to go wrong, not that nothing ever will.
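To make that slightly more concrete, here is a minimal sketch of one common technique: enumerating the STRIDE threat categories against each element of a system and recording the decision made for each. The elements and the structure are illustrative assumptions, not a real model:

```python
# A minimal sketch of STRIDE-style threat enumeration. The system elements
# are invented for illustration; a real model would be driven by a diagram
# of the actual system and its trust boundaries.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

elements = ["exhaust port", "data port", "shield generator control"]

# Record each threat alongside the tradeoff decision, so later reviewers
# can ask "what did we know when we made these decisions?"
threats = []
for element in elements:
    for category in STRIDE:
        threats.append({
            "element": element,
            "category": category,
            "decision": "TODO: mitigate, accept (and say why), or transfer",
        })

print(f"{len(threats)} element/category pairs to review")
```

The value is less in the list itself than in the recorded decisions, which are what a governance process can later review.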

It’s easy, after the fact, to point out the problem with the exhaust ports. But as your risk management governance improves, you get to the point of asking “what did we know when we made these decisions?” and “could we have made these decisions better?”

At the engineering level, you want to develop a cybersecurity culture that’s open to discussing failures, not one in which you have to fear being force-choked. (More on that topic in my guest post at the Council on Foreign Relations, “Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department.”)


More broadly, organizational leadership needs to focus on whether appropriate policy and governance are in place. That sounds jargony, so let me unpack it a little. Policy is what you intend to do: for example, perform risk analysis that lets executives make good risk management decisions about the competing aspects of the business. Is a PHP vuln acceptable? If it happened to be in The Force Awakens site this week, taking that site down would have been an expensive choice. It’s tempting to ask what geek would do more than add a comment, but that gets into questions of attacker motivation, and it’s easy to get those wrong. Even Star Wars has critics (see the one-minute video, worth sharing for the reveal at the end).

If policy is about knowing what you intend to do in a way that lets people do it, governance is about making sure it happens properly. There are all sorts of reasons it’s hard to map technology risk to business risk. Tech risk involves the bad things that might happen, and the intricate ways technologies are woven together make it hard to say, a priori, that a technical issue with an exhaust port, or a bad password on an HVAC system, might lead to a bad business impact.

It is plausible that exhaust will generate turbulence in an exhaust shaft, and that such turbulence will act as a compensating control for the lack of port shielding. That is, whatever substrate carries the heat will do so unevenly, and in a shaft the size of a womp rat, that turbulence will batter any projectile into exploding somewhere less harmful.

A good policy will ask for such analysis; a good governance process will ask whether it happened and, after a failure, whether the failure is likely to happen again. We need to help executives form the questions, and we need to do a better job at supplying answers.

Open Letters to Security Vendors

John Masserini has a set of “open letters to security vendors” on Security Current.

Everyone involved in product or sales at a security startup should read them. John provides insight into what it’s like to be pitched by too many startups, and provides a level of transparency that’s sadly hard to find. Personally, I learned a great deal about what happens when you’re pitched while I was at a large company, and I can vouch for the realities he puts forth. The sooner you understand those realities and incorporate them into your thinking, the more successful we’ll all be.

As he writes: “After meeting with dozens of startups at Black Hat a few weeks ago, I’ve realized that the vast majority of the leaders of these new companies struggle to articulate the value their solutions bring to the enterprise.”

Why does John’s advice make us all more successful? Because each organization that follows it moves towards a more efficient state, for themselves and for the folks who they’re pitching.

Getting more efficient means you waste less time per prospect. When you focus on qualified leads who care about the problem you’re working on, you get more sales per unit of time. What’s more, by not wasting the time of those who won’t buy, you free up their time for talking to those who might have something to provide them. (One banker I know said “I could hire someone full-time to reject startup pitches.” Think about what that means for your sales cycle for a moment.)

Go read “An Open Letter to Security Vendors” along with part 2 (why sales takes longer) and part 3 (the technology challenges most startups ignore).