The Evolution of Apple’s Differential Privacy

Bruce Schneier comments on “Apple’s Differential Privacy:”

So while I applaud Apple for trying to improve privacy within its business models, I would like some more transparency and some more public scrutiny.

Do we know enough about what’s being done? No, and my bet is that Apple doesn’t know precisely what they’ll ship, and isn’t answering deep technical questions so that they don’t mis-speak. I know that when I was at Microsoft, details like that got adjusted as we learned from a bigger pile of real data from real customer use. I saw some really interesting shifts surprisingly late in the dev cycle of various products.
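
(Aside: since we don’t know exactly what Apple will ship, here is a minimal sketch of the kind of technique the term covers: classic randomized response, the textbook building block for local differential privacy. It is purely illustrative; the parameters and simulation are invented, and this is not Apple’s design.)

    import random

    def randomized_response(truth, p_truth=0.75):
        """Report the true bit with probability p_truth, otherwise a fair coin flip.

        Any single answer is deniable, but the population rate is still recoverable."""
        if random.random() < p_truth:
            return truth
        return random.random() < 0.5

    def estimate_rate(reports, p_truth=0.75):
        """Invert the noise: observed = p_truth * true + (1 - p_truth) * 0.5."""
        observed = sum(reports) / len(reports)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    if __name__ == "__main__":
        # Simulate 100,000 users, 30% of whom have the sensitive attribute.
        truths = [random.random() < 0.3 for _ in range(100_000)]
        reports = [randomized_response(t) for t in truths]
        print("true rate ~0.300, estimate:", round(estimate_rate(reports), 3))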

I also want to challenge the way Matthew Green closes: “If Apple is going to collect significant amounts of new data from the devices that we depend on so much, we should really make sure they’re doing it right — rather than cheering them for Using Such Cool Ideas.”

But that is a false dichotomy, and would be silly even if it were not. It’s silly because we can’t be sure if they’re doing it right until after they ship it, and we can see the details. (And perhaps not even then.)

But even more important, the dichotomy is not “are they going to collect substantial data or not?” They are. The value organizations get from being able to observe their users is enormous. As product managers observe what A/B testing in their web properties means to the speed of product improvement, they want to bring that same ability to other platforms. Those that learn fastest will win, for the same reasons that first to market used to win.

Next, are they going to get it right on the first try? No. Almost guaranteed. Software, as we learned a long time ago, has bugs. As I discussed in “The Evolution of Secure Things:”

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Green (and Schneier) are right to be skeptical, and may even be right to be cynical. We should not lose sight of the fact that Apple is spending rare privacy engineering resources to do better than Microsoft. Near as I can tell, this is an impressive delivery on the commitment to be the company that respects your privacy, and I say that believing that there will be both bugs and design flaws in the implementation. Green has an impressive record of finding and calling Apple (and others) on such, and I’m optimistic he’ll have happy hunting.

In the meantime, we can, and should, cheer Apple for trying.

Security Lessons from C-3PO

C3PO telling Han Solo the odds

C-3PO: Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.

Han Solo: Never tell me the odds.

I was planning to start this with a C-3PO quote, and then move to a discussion of risk and risk taking. But I had forgotten just how rich a vein George Lucas tapped into with 3PO’s lines in The Empire Strikes Back. So I’m going to talk about his performance prior to being encouraged to find a non-front-line role with the Rebellion.

In case you need a refresher on the plot, having just about run out of options, Han Solo decides to take the known, high risk of flying into an asteroid field. When he does, 3PO interrupts to provide absolutely useless information. There’s nothing about how to mitigate the risk (except surrendering to the Empire). There’s nothing about alternatives. Then 3PO pipes up to inform people that he was previously wrong:

C-3PO: Artoo says that the chances of survival are 725 to 1. Actually Artoo has been known to make mistakes… from time to time… Oh dear…

I have to ask: How useless is that? “My first estimate was off by a factor of 5, you should trust this new one?”

C-3PO: I really don’t see how that is going to help! Surrender is a perfectly acceptable alternative in extreme circumstances! The Empire may be gracious enough to… [Han signals to Leia, who shuts 3PO down.]

Most of the time, being shut down in a meeting isn’t this extreme. But there’s a point in a discussion, especially in high-pressure situations, where the best contribution is silence. There’s a point at which talking about the wrong thing at the wrong time can cost credibility that you’d need later. And while the echo in the dialogue is for comic effect, the response certainly contains a lesson for us all:

C-3PO: The odds of successfully surviving an attack on an Imperial Star Destroyer are approximately…

Leia: Shut up!

And the eventual outcome:

C-3PO: Sir, If I may venture an opinion…

Han Solo: I’m not really interested in your opinion 3PO.

Does C-3PO resemble any CSOs you know? Any meetings you’ve been in? Successful business people are excellent at thinking about risk. Everything from launching a business to hiring employees to launching a new product line involves risk tradeoffs. The good business people either balance those risks well, or they transfer them away in some way (ideally, ethically). What they don’t want or need is some squeaky-voiced robot telling them that they don’t understand risk.

So don’t be C-3PO. Provide useful input at useful times, with useful options.

Originally appeared in Dark Reading, “Security Lessons from C-3PO, Former CSO of the Millennium Falcon,” as part of a series I’m doing there, “security lessons from…”. So far in the series, lessons from: my car mechanic, my doctor, The Gluten Lie, and my stock broker.

The Rhetorical Style of Drama

There is a spectre haunting the internet, the spectre of drama. All the powers of the social media have banded together to not fight it, because drama increases engagement statistics like nothing else: Twitter and Facebook, Gawker and TMZ, BlackLivesMatter and GamerGate, Donald Trump and Donald Trump, the list goes on and on.

Where is the party that says we shall not sink to the crazy? Where is the party which argues for civil discourse? The clear, unarguable result is that drama is universally acknowledged to be an amazingly powerful tactic for getting people engaged on your important issue, the exquisite pain which so long you have suffered in silence, and are now compelled to speak out upon! But, reluctantly, draw back, draw breath, draw deep upon your courage before unleashing, you don’t want to, but musts be musts, and so unleashing the hounds of drama, sadly, reluctantly, but…

In this post, I’m going to stop aping the Communist Manifesto, stop aping drama lovers, and discuss some of the elements I see which make up a rhetorical “style guide” for dramatists. I hope that in so doing, I can help build an “immune system” against drama and a checklist for writing well about emotionally laden issues, rather than a guidebook for creating more. And so I’m going to call out elements and discuss how to avoid them. Drama often includes logical fallacies (see also the “informal list” on Wikipedia). However, drama is not conditioned on such, and one can make an illogical argument without being dramatic. Drama is about the emotional perception of victim, persecutor and rescuer, and how we move from one state to another… “I was only trying to help! Why are you attacking me?!?” (More on that both later in this article, and here.)

Feedback is welcome, especially on elements of the style that I’m missing. I’m going to use a few articles I’ve seen recently, including “Search and Destroy: The Knowledge Engine and the Undoing of Lila Tretikov.” I’ll also use the recent post by Nadeem Kobeissi “A Cry for Help Against Thomas Ptacek, Serial Abuser,” and “What Happened At The Satoshi Roundtable.”

Which brings me to my next point: drama, in and of itself, is not evidence for or against the underlying claims. I have no opinion on the underlying claims of either article. I am simply commenting on their rhetorical style as having certain characteristics which I’ve noticed in drama. Maybe there is a crisis at the Wikimedia Foundation. Maybe Mr. Ptacek really is unfairly mean to Mr. Kobeissi. I’ve met Nadeem once or twice, he seems like a nice fellow, and I’ve talked with Thomas on and off over more than twenty years, but not worked closely with him. Similarly, retweets, outraged follow-on blogs, and the like do not make a set of facts.

Anyway, on to the rhetorical style of drama:

  • Big, bold claims which are not justified. Go read the opening paragraphs of the Wikimedia article, and look for evidence. To avoid this, consider the 5 paragraph essay: a summary, paragraphs focused on topics, and a conclusion.
  • The missing link. The Wikimedia piece has a number of places where links could easily bolster the argument. For example, “within just the past 48 hours, employees have begun speaking openly on the web” cries out for two or more links. (It’s really tempting to say “Citation needed” here, but I won’t; see the point on baiting, below.) Similarly, Mr. Kobeissi writes that Ptacek is an “obsessive abuser, a bully, a slanderer and an employer of public verbal sexual degradation that he defends, backs down on and does not apologize for.” To avoid this, link appropriately to original sources so people can judge your claims.
  • Mixing fact, opinion and impact. If you want to cause drama, present your opinion on the impact of some other party’s actions as a fact. If you want to avoid drama, consider the non-violent communication patterns, such as “when I hear you say X, my response is Y.” For reasons too complex to go into here, this helps break the drama triangle. (I’ll touch more on that below).
  • Length. Like this post, drama is often lengthy, and unlike this post, often beautifully written, recursively (or perhaps just repetitively) looping back over the same point, as if volume is correlated with truth. The Wikimedia article seems to go on and on, and the sense that perhaps there’s some more detail keeps you reading.
  • Behaviors that don’t make sense. If Johnny had gone straight to the police, none of this would ever have happened. If Mr. Kobeissi had contacted Usenix, they could have had Mr. Ptacek recuse himself from the paper based on evidence of two years of conflict. Mr. Kobeissi doesn’t say why this never happened. Oh, and be prepared to have your story judged.
  • Baiting and demands. After presenting a litany of wrongs, there’s a set of demands presented, often very specific ones. Much better to ask “Would you like to resolve this? If so, do you have ideas on how?” Also, “if you care about this, it must be your top priority.”
  • False dichotomies. After the facts and opinions, or perhaps mixed in with them, there’s an either/or presented. “This must be true, or he would have sued for libel.” (Perhaps someone doesn’t want to spend tens or hundreds of thousands of dollars on lawyers? Perhaps someone has heard of the Streisand effect? The President doesn’t sue everyone who claims he’s a crypto-Muslim.)
  • Unstated assumptions. For example, while much of Mr. Kobeissi’s post focuses on last year’s Usenix, that was last year. There’s an unstated assumption that once someone has been on a PC for you, they can’t say mean things about you. And while it would be unprofessional to do so while you’re chairing a conference, how long does that zone extend? We don’t know when Mr. Ptacek was last mean to Mr. Kobeissi. Perhaps he waited a year after being program chair. Mr. Kobeissi probably knows, and he has not told us.
  • Failure to assume goodwill, or a mutuality of failure, or that there’s another side to the story. This is the dramatist’s curse, the inability to conceive or concede that the other person may have a side. Perhaps, once, Mr. Kobeissi was young, immature, and offended Mr. Ptacek in a way which is hard to “put behind us.” We all have such people in our lives. An innocent act or comment is taken the wrong way, irrecoverably.
  • With us or against us. It’s a longstanding tool of demagogues to paint the world in black and white. There are often important shades of grey. To avoid drama, talk about them.
  • I’m being soooo reasonable here! Much like a car salesperson telling you that you can trust them, the dramatic spend a lot of words (often a great many) explaining how reasonable they’re being. If you’re being reasonable, show, don’t tell.

Not all drama will have all of these elements, and it may be that things with all of these elements will not be drama. You should assume goodwill on the part of the people whose words you are reading. Oftentimes, drama is accidental, where someone says something which leaves the other party feeling attacked, a rescuer comes in, and around and around the drama triangle we go.

As I wrote in that article on the drama triangle:

One of the nifty things about this triangle — and one of the things missing from most popular discussion of it — is how the participants put different labels on the roles they are playing.

For example, a vulnerability researcher may perceive themselves as a rescuer, offering valuable advice to a victim of poor coding practice. Meanwhile, the company sees the researcher as a persecutor, making unreasonable demands of their victim-like self. In their response, the company calls their lawyers and becomes a persecutor, and simultaneously allows the rescuer to shift to the role of victim.

A failure to respond to drama does not make the dramatist right. Sometimes the best move is to walk away, even when the claims are demonstrably false, even when they are hurtful. The internet can be a wretched hive of scum and drama, and it’s hard to stay clean when wrestling a pig.

Understanding the rhetorical style of drama so that you don’t get swept up in it can reduce the impact of drama on others. Which is not to say that the issues for which drama is generated do not deserve attention. But perhaps attention and urgency can be generated in a space of civilized discourse. (I’m grateful to Elissa Shevinsky for having used that phrase recently; it seems to have been far from many minds.)

“Think Like an Attacker” is an opt-in mistake

I’ve repeatedly spoken out against “think like an attacker.”

Now I’m going to argue from authority. In this long article, “The Obama Doctrine,” the President of the United States says “The degree of tribal division in Libya was greater than our analysts had expected.”

So let’s think about that statement and what it means. First, it means that the multi-billion dollar analytic apparatus of the United States made a mistake, a serious one about which the President cares, because it impacted his foreign policy. Second, that mistake was about how people think. Third, that group of people was a society, and one that has interacted with the United States since, oh, I don’t know, someone wrote words like “From the halls of Montezuma to the shores of Tripoli.” (And dig the Marines, kickin’ it old skool with that video.) Fourth, it was not a group that attempts to practice operational security in any way.

So if we consider that the analytical capability of the US can get that wrong, do you really want to try to think like Anonymous, think like 61398, like 8200? Are you going to do this perfectly, or are there chances to make mistakes? Alternatively, do you want to require everyone who threat models to know how attackers think? Understanding how other people think and prioritize requires a great deal of work. There are entire fields, like anthropology and sociology, dedicated to doing it well. Should we start our defense by reading books on the motivational structures of the PLA or the IDF?

The simple fact is, you don’t need to. You can start from what people are building or deploying. (I wrote a book on how.) The second simple fact is that repeating that phrase upsets people. When I first joined Microsoft, I used that phrase. One day, a developer grabbed me after a meeting, and politely told me that he didn’t understand it. Oh, wait, this was Microsoft in 2006. He told me I was a fucking idiot and I should give useful advice. After a bit more conversation, he also told me that he had no idea how the fuck an attacker thought, and if I thought he had time to read a book to learn about it, I could write the goddamned features customers pay for while he read.

Every time someone tells me to think like an attacker, I think about that conversation. I appreciate the honesty that the fellow showed, if not his manner. But (as Dave Weinstein pointed out) “A generalized form of this would be ‘Stop giving developers completely un-actionable “guidance.”’” Now, Dave and I worked together at Microsoft, so maybe there’s a similar experience in his past.

Now, this does not mean that we don’t need to pay attention to what real attackers do. It means that we don’t need to walk a mile in their shoes to defend effectively against it.
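
What does “start from what people are building” look like in practice? Here is a minimal sketch of one common system-centric approach: enumerate the elements of a data flow diagram and pair each with the STRIDE threat categories that typically apply to that kind of element. The toy “payments” elements are invented for illustration; this is a starting checklist, not a complete method.

    # STRIDE-per-element: start from the system's parts, not the attacker's mind.
    STRIDE_BY_ELEMENT_TYPE = {
        "external entity": ["Spoofing", "Repudiation"],
        "process": ["Spoofing", "Tampering", "Repudiation",
                    "Information disclosure", "Denial of service",
                    "Elevation of privilege"],
        "data store": ["Tampering", "Information disclosure", "Denial of service"],
        "data flow": ["Tampering", "Information disclosure", "Denial of service"],
    }

    # A toy data flow diagram for a hypothetical payments feature.
    ELEMENTS = [
        ("browser", "external entity"),
        ("web frontend", "process"),
        ("payments API", "process"),
        ("orders database", "data store"),
        ("frontend-to-API traffic", "data flow"),
    ]

    if __name__ == "__main__":
        for name, kind in ELEMENTS:
            for threat in STRIDE_BY_ELEMENT_TYPE[kind]:
                print(f"{name}: consider {threat}")

No knowledge of attacker psychology is required; the list falls out of what you drew on the whiteboard.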

Previously, “Think Like An Attacker?,” “The Discipline of “think like an attacker”,” and “Think Like An Attacker? Flip that advice!.” [Edited, also previously, at the New School blog: “Modeling Attackers and Their Motives.”]

Humans in Security, BlackHat talks

This is a brief response to Steve Christey Coley, who wrote on Twitter, “but BH CFP reads mostly pure-tech, yet infosec’s more human-driven?” I can’t respond in 140 characters, so here are a few of my thoughts, badly organized:

  • BlackHat started life as a technical conference, and there are certain expectations about topics, content and quality, which have changed and evolved over time.
  • The best talk in the world, delivered to the wrong audience, is not the best talk in the world. For example, there’s lots of interesting stuff happening with CRISPR. We probably wouldn’t even accept a talk on the security implications. Similarly, we probably wouldn’t take a talk on mosquito-zapping lasers, as much fun as it would be.
  • I, and other members of the PC, work to change those expectations by getting good content that is at the edge of those expectations. Thus, there’s a human factors track again this year.
  • That track gets a lot of “buy a UPS uniform on ebay” submissions, and the audience doesn’t tend to like those. They’re not cutting edge.
  • I would love it if we got more SOUPS-like content, redone a little to meet audience expectations for a Blackhat talk, which are different than expectations for an academic talk.
  • So what I look for is something new, in a form that I believe will be close enough to the expectations of the audience that we drive and evolve change in useful directions.
  • Finding the right balance is hard.

So, what do you think a good BlackHat talk on human factors might be?

(I should be clear: I am one of many reviewers for BlackHat, and I do not speak for them, or any other reviewer. I cannot discuss specific submissions or the discussions we have around them.)

Update: Since this was written quickly, I forgot to link to “How to Get Accepted at Blackhat.” Read every word of that, and ask yourself if your submission is a good one.

RSA Planning

Have a survival kit: Ricola, Purell, Gatorade, Advil and antacids can be brought or bought on site.

Favorite talk (not by me): I look forward to Sounil Yu’s talk on “Understanding the Security Vendor Landscape Using the Cyber Defense Matrix.” I’ve seen an earlier version of this, and like the model he’s building a great deal.

Favorite talk I’m giving: “Securing the ‘Weakest Link’.”

A lot of guides, like this one, are not very comprehensive or strategic. John Masserini’s A CISO’s Guide to RSA Conference 2016 is a very solid overview if you’re new, or not getting good value from a conference.

While you’re there, keep notes for a trip report. Sending a trip report helps you remember what happened, helps your boss understand why they spent the money, and helps justify your next trip. I like trip reports that start with a summary, go directly to action items, then a list of planned meetings and notes on them, followed by detailed and organized notes.
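
One way to lay that out (a skeleton to adjust to taste, not a mandate):

  • Summary: three or four sentences on what mattered this year.
  • Action items: who owes what to whom, and by when.
  • Meetings: who, topic, outcome, follow-up.
  • Detailed notes: organized by talk or theme, not strict chronology.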

Also while you’re there, remember it’s infosec, and drama is common. Remember the drama triangle and how to avoid it.

Secure Code is Hard, Let’s Make it Harder!

I was confused about why Dan Kaminsky would say CVE-2015-7547 (a bug in glibc’s DNS handling) creates network attack surface for sudo. Chris Rohlf kindly sorted me out by mentioning that there’s now a -host option to sudo, of which I was unaware.

I had not looked at sudo in depth for probably 20 years, and I’m shocked to discover that it has a -e option to invoke an editor, a -p option to process format string bugs, and a -a option to allow the invoker to select the authentication type (?!?!)

It’s now been a full twenty years that I’ve been professionally involved in analyzing source code. (These Security Code Review Guidelines were obviously not started in August.) We know that all code has bugs, and more code is strongly correlated with more bugs. I first saw this in the intro to the first edition of Cheswick and Bellovin. I feel a little bit like yelling you kids get off my lawn, but really, the unix philosophy of “do one thing well” was successful for a reason. The goal of sudo is to let the user go through a privilege boundary. It should be insanely simple. [Updated to add, Justin Cormack mentions that OpenBSD went from sudo to doas on this basis.]

It’s not. Not that ssh is simple either, but it isolates complexity, and helps us model attack surface more simply.

Some of the new options make sense, and support security feature sets not present previously. Some are just dumb.
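
As a back-of-the-envelope illustration of the “more options, more surface” point, here is a small script that counts the distinct flags a tool advertises in its --help output. It is a crude proxy, not a security metric, and it assumes sudo and doas are installed and will print usage without privileges (true on most systems).

    #!/usr/bin/env python3
    """Crude proxy for CLI complexity: count distinct option flags in --help output."""
    import re
    import subprocess

    def count_flags(command):
        """Return the number of distinct -x / --long-option flags in `command --help`."""
        try:
            out = subprocess.run([command, "--help"], capture_output=True,
                                 text=True, timeout=5)
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return 0
        text = out.stdout + out.stderr  # some tools print usage to stderr
        return len(set(re.findall(r"(?<!\w)(--?[A-Za-z][\w-]*)", text)))

    if __name__ == "__main__":
        for tool in ("sudo", "doas"):
            print(tool, count_flags(tool), "distinct flags in --help")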

As I wrote this, Dan popped up to say that it also parses /etc/hostname to help it log. Again, do one thing well. Syslog should know what host it’s on, what host it’s transmitting from, and what host it’s receiving from.

It’s very, very hard to make code secure. When we add in insane options to code, we make it even harder. Sometimes, other people ask us to make the code less secure, and while I’ve already said what I want to say about the FBI asking Apple to fix their mistake by writing new code, this is another example of shooting ourselves in our feet.

Please stop making it harder.

[Update: related: “Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation,” abstracted by the morning paper, which examines an approach to re-implementing TLS; thanks to Steve Bellovin for the pointer.]

Sneak peeks at my new startup at RSA

Confusion

Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long term (like a home) or liquid investments (like stocks and bonds) and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of control investments, and we’re building the first.

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, leave a comment, or reach out over LinkedIn.

Kale Caesar

According to the CBC: “McDonald’s kale salad has more calories than a Double Big Mac.”


In a quest to reinvent its image, McDonald’s is on a health kick. But some of its nutrient-enhanced meals are actually comparable to junk food, say some health experts.

One of the new kale salads has more calories, fat and sodium than a Double Big Mac.

Apparently, McDonald’s is there not to braise kale, but to bury it in cheese and mayonnaise. And while that’s likely mighty tasty, it’s not healthy.

At a short-term level, this looks like good product management. Execs want salads on the menu? Someone’s being measured on sales of new salads, and loading them up with tasty, tasty fats. It’s effective at associating a desirable property of salad with the product.

Longer term, not so much. It breeds cynicism. It undercuts the ability of McDonald’s to ever change its image, or to convince people that its food might be a healthy choice.

Superbowls

This is a superb owl, but its feathers are ruffled. It is certainly not a metaphor.

Speaking of ruffled feathers, apparently there’s a kerfuffle about Super Bowl I, where the only extant tape is in private hands, and there’s conflict over what to do with it.

One aspect I haven’t seen covered is that the tape, now 50 years old, pre-dates US adherence to the Berne Convention, and thus comes from the era when copyright required notice (and registration). Was the NFL properly copyrighting its game video back then? If not, does that mean that Mr. Haupt can legally do what he wants, and is only chilled by the threat that Big Football would simply throw lawyers at him until he gives up?

Such threats, at odds with our legally guaranteed right to a speedy trial, certainly generate a climate in which large organizations, often governmental ones, can use protracted uncertainty as a weapon against oversight or control. Consider if you will the decade-long, Kafka-esque ordeal of Ms Rahinah Ibrahim, who was on the No Fly list due to a mistake. Consider the emotional and personal cost of not being able to either enter the US, or achieve a sense of closure.

Such a lack of oversight is certainly impacting the people of Flint, Michigan. As Larry Rosenthal points out (first comment), even if, sometime down the line, the people of Flint win their case, the doubtless slow and extended trials may grind fine, but wouldn’t it be better if we had a justice system that could deliver justice a little faster?

Anyway, what a superb owl that is.

Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department

In “The Galactic Empire Has Terrible Cybersecurity,” Alex Grigsby looks at a number of high-profile failures, covered in “A New Hope” and the rest of the Star Wars canon.

Unfortunately, the approach he takes to the Galactic Empire obscures the larger, more dangerous issue: its cybersecurity culture. There are two errors in Grigsby’s analysis, and they are worth examining. As Yoda once said, “Much to learn you still have.”

Grigsby’s first assumption is that more controls lead to better security. But controls need to be deployed judiciously to allow operations to flow. For example, when you have Stormtroopers patrolling in the Death Star, adding layers of access controls may in fact hamper operations. The Shuttle with outdated keys in Return of the Jedi shows that security issues are rampant, and officers are used to escalations. Security processes that are full of routine escalations desensitize people. They get accustomed to saying OK, and are thus unlikely to give their full attention to each escalation.

The second issue is that Grigsby focuses on a few flaws that have massive impact. The lack of encryption and problematic location of the Death Star’s exhaust port matter not so much as one-offs, but rather reveal the larger security culture at play in the Empire.

There is a singular cause for these failures: Darth Vader, and his habit of force choking those who have failed him. The culture of terror that he fosters prevents those under his command from learning from their mistakes and ensures that opportunities for learning will be missed; finger-pointing and blame passing will rule the day. Complaints to the Empire’s human resources department will go unanswered, and those who made the complaints will probably go missing.

This is the precise opposite of the culture created by Etsy—the online marketplace for handmade and vintage items (including these Star Wars cufflinks). Etsy’s engineers engage in what they call “Blameless Post-Mortems and Just Culture,” where people feel safe coming clean about making mistakes so that they can learn from them. After a problem, engineers are encouraged to write up what happened, why it happened, what they learned, and share that knowledge widely. Executives are committed to not placing blame or finger pointing.

The Empire needs a better way to deal with its mistakes, and so do we. Fortunately, we don’t have to fear Lord Vader and can learn from things that have gone wrong.

For example, the DatalossDB, a project of the non-profit Open Security Foundation, has tracked thousands of incidents that involve the loss, theft, or exposure of personally identifiable information since 2008. The Mercatus Center has analyzed Government Accountability Office data, and found upwards of 60,000 incidents per year for the last two years. Sadly, while we know of these incidents, including what sorts of data were taken and how many victims there were, in many of them, we do not know what happened to a degree of detail that allows us to address the problem. In the first years of public breach reporting (roughly starting in 2004), there were a raft of breaches associated with stolen computers, most of them laptops. All commercial operating systems now ship with full disk encryption software as a result. But that may be the only lesson broadly learned so far.

It’s easy to focus on spectacular incidents like the destruction of a Death Star. It’s easy to look to the mythic aspects of the story. It’s harder to understand what went wrong. Was there an architect who brought up the unshielded thermal exhaust port vulnerability? What happened to the engineering change request? What can we learn from that? Did an intrusion detection analyst notice that unauthorized devices were plugged into the network? Were they overwhelmed by a rash of new devices as the new facility was staffed up?

Even given the very largest breaches, there is often a paucity of information about what went wrong. Sometimes, no one wants to know. Sometimes, it’s a round of finger-pointing. Sometimes, whatever went wrong happened long enough ago that there are no logs. The practice of “Five Whys” analysis is rare.
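
To make the contrast concrete, here is what a Five Whys chain might look like for a hypothetical stolen-laptop breach (invented for illustration, not drawn from any real incident report):

  • Why was customer data exposed? A laptop holding a database extract was stolen.
  • Why was the extract on a laptop? An analyst needed it for a report, and the reporting tools can’t query production.
  • Why can’t the reporting tools query production? The data warehouse project was cut two budget cycles ago.
  • Why wasn’t the disk encrypted? Encryption was optional because it slowed down older machines.
  • Why was it optional? Nobody owned the laptop configuration baseline.

The fix suggested by the first why (blame the analyst) is very different from the fixes suggested by the fifth (own the baseline, revisit the warehouse decision).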

And when, against all odds, an organization digs in and asks what happened, the lawyers are often there to announce that under no circumstances should it be shown to anyone. After all, there will be lawsuits. (While I am not a lawyer, it seems to me that such lawsuits happen regardless of the existence or availability of a post-mortem report, and a good analysis of what went wrong might be seen as evidence of a mature, learning practice.)

What does not happen, given our fear of lawsuits and other phantom menaces, is learning from mistakes. And so R2-D2 plugs into every USB port in sight, and does so for more than twenty years.

We know from a variety of fields including aircraft safety, nuclear safety, and medical safety that high degrees of safety and security are an outcome of just culture, and willingness to discuss what’s gone wrong. Attention to “near misses” allows organizations to learn faster.

This is what the National Transportation Safety Board does when a plane crashes or a train derails.

We need to get better at post-mortems for cybersecurity. We need to publish them so we can learn the analysis methods others are developing. We need to publish them so we can assess if the conclusions are credible. We need to publish them so we can perform statistical analyses. We need to publish them so that we can do science.

There are many reasons to prevaricate. The First Order — the bad guys in The Force Awakens — can’t afford another Death Star, and we cannot afford to keep doing what we’ve been doing and hoping it will magically get better.

It’s not our only hope, but it certainly would be a new hope.

(Originally appeared on the Council on Foreign Relations Net Politics blog.)