Governance Lessons from the Death Star Architect

I had not seen this excellent presentation by the engineer who built the Death Star’s exhaust system.

In it, he discusses the need to disperse energy from a battle station with the power draw to destroy planets, and the engineering goals he had to balance.

I’m reminded again of “The Evolution of Useful Things” and how it applies to security. Security engineering involves making tradeoffs, and those tradeoffs sometimes have unfortunate results. Threat modeling is a family of techniques for thinking about the tradeoffs and what’s likely to go wrong. Doing it well means that things are less likely to go wrong, not that nothing ever will.

It’s easy, after the fact, to point out the problem with the exhaust ports. But as your risk management governance improves, you get to the point of asking “what did we know when we made these decisions?” and “could we have made these decisions better?”

At the engineering level, you want to develop a cybersecurity culture that’s open to discussing failures, not one in which you have to fear being force-choked. (More on that topic in my guest post at the Council on Foreign Relations, “Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department.”)

More broadly, organizational leadership needs to focus on questions about appropriate policy and governance being in place. That sounds jargony, so let me unpack it a little. Policy is what you intend to do: for example, perform risk analysis that lets executives make good risk management decisions about the competing aspects of the business. Is a PHP vuln acceptable? If one happened to be in The Force Awakens site this week, taking that site down would have been an expensive choice. It’s tempting to ask “what geek would do more than add a comment?” But that gets into questions of attacker motivation, and it’s easy to get those wrong. Even Star Wars has critics (one-minute video, worth sharing for the reveal at the end):

If policy is about knowing what you intend to do in a way that lets people do it, governance is about making sure it happens properly. There are all sorts of reasons that it’s hard to map technology risk to business risk. Tech risk involves the bad things which might happen, and the interesting ways technologies are tightly woven make it hard to say, a priori, that an exhaust port technical issue might have a bad business impact, or that an HVAC system having a bad password might lead to a bad business impact.

An analysis might have noted that exhaust is likely to generate turbulence in an exhaust shaft, and that such turbulence will act as a compensating control for a lack of port shielding. That is, whatever substrate carries heat will do so unevenly, and in a shaft the size of a womp rat, that turbulence will batter any projectile into exploding somewhere less harmful.

A good policy will ask for such analysis, a good governance process will ask if it happened, and, after a failure, if the failure is likely to happen again. We need to help executives form the questions, and we need to do a better job at supplying answers.

Open Letters to Security Vendors

John Masserini has a set of “open letters to security vendors” on Security Current.

Everyone involved in product or sales at a security startup should read them. John provides insight into what it’s like to be pitched by too many startups, and provides a level of transparency that’s sadly hard to find. Personally, I learned a great deal about what happens when you’re pitched while I was at a large company, and I can vouch for the realities he puts forth. The sooner you understand those realities and incorporate them into your thinking, the more successful we’ll all be.

After meeting with dozens of startups at Black Hat a few weeks ago, I’ve realized that the vast majority of the leaders of these new companies struggle to articulate the value their solutions bring to the enterprise.

Why does John’s advice make us all more successful? Because each organization that follows it moves towards a more efficient state, for themselves and for the folks who they’re pitching.

Getting more efficient means you waste less time per prospect. When you focus on qualified leads who care about the problem you’re working on, you get more sales per unit of time. What’s more, by not wasting the time of those who won’t buy, you free up their time for talking to those who might have something to provide them. (One banker I know said “I could hire someone full-time to reject startup pitches.” Think about what that means for your sales cycle for a moment.)

Go read “An Open Letter to Security Vendors” along with part 2 (why sales takes longer) and part 3 (the technology challenges most startups ignore).

The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

The Evolution of Useful Things book cover

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks which were created, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. But those designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

Phishing and Clearances

Apparently, the CISO of US Homeland Security, a Paul Beckman, said that:

“Someone who fails every single phishing campaign in the world should not be holding a TS SCI [top secret, sensitive compartmented information—the highest level of security clearance] with the federal government” (Paul Beckman, quoted in Ars Technica)

Now, I’m sure being in the government and trying to defend against phishing attacks is a hard problem, and I don’t want to ignore that real frustration. At the same time, GAO found that the government is having trouble hiring cybersecurity experts, and that was before the SF-86 leak.

Removing people’s clearances is one response. It’s not clear from the text if these are phishing (strictly defined, an attempt to get usernames and passwords), or malware attached to the emails.

In each case, there are other fixes. The first would be multi-factor authentication for government logins. This was the subject of a push, and if agencies aren’t at 100%, maybe getting there is better than punitive action. Another fix could be to use an email client which makes phishing emails easier to spot. For example, an email client could display the RFC-822 sender address (eg, “<>”) for any email address that the client hasn’t sent email to, rather than the friendly text. They could provide password management software with built-in anti-phishing (checking the domain before submitting the password). They could, I presume, do other things which minimize the demands on the human being.

When Rob Reeder, Ellen Cram Kowalczyk and I created the “NEAT” guidance for usable security, we didn’t make “Necessary” first just because the acronym is neater that way, we put it first because the poor person is usually overwhelmed, and they deserve to have software make the decisions that software can make. Angela Sasse called this the ‘compliance budget,’ and it’s not a departmental budget, it’s a human one. My understanding is that those who work for the government already have enough things drawing on that budget. Making people anxious that they’ll lose their clearance and have to take a higher-paying private sector job should not be one of them.

Survey for How to Measure Anything In Cybersecurity Risk

This is a survey from Doug Hubbard, author of How To Measure Anything. He is currently writing another book with Richard Seiersen (GM of Cyber Security at GE Healthcare) titled How to Measure Anything in Cybersecurity Risk. As part of the research for this book, they are asking for your assistance as an information security professional by taking the following survey. It is estimated to take 10 to 25 minutes to complete. In return for participating, you will be invited to attend a webinar for a sneak peek at the book’s content, and you will be given a summary report of this survey. The survey also includes requests for feedback on the survey itself.

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. And I think it’s important that we see to fixing a hole in the argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

Towards a model of web browser security

One of the values of models is they can help us engage in areas where otherwise the detail is overwhelming. For example, C is a model of how a CPU works that allows engineers to defer certain details to the compiler, rather than writing in assembler. It empowers software developers to write for many CPU architectures at once. Many security flaws happen in areas the models simplify. For example, what if the stack grew away from the stack pointer, rather than towards it? The layout of the stack is a detail that is modeled away.

Information security is a broad industry, requiring and rewarding specialization. We often see intense specialization, which can naturally result in building expertise in silos, and tribal separation of knowledge creates more gaps. At the same time, there is a stereotype that generalists end up focused on policy, or “risk management,” where a lack of technical depth can hide. (That’s not to say that all risk managers are generalists, or that there isn’t real technical depth to some of them.)

If we want to enable more security generalists, and we want those generalists to remain grounded, we need to make it easier to learn about new areas. Part of that is good models, part of that is good exercises that appropriately balance challenge to skill level, part of that is the availability of mentoring, and I’m sure there are other parts I’m missing.

I enjoyed many things about Michal Zalewski’s book “The Tangled Web.” One thing I wanted was a better way to keep track of who attacks whom, to help me contextualize and remember the attacks. But such a model is not trivial to create. This morning, motivated by a conversation between Trey Ford and Chris Rohlf, I decided to take a stab at drafting a model for thinking about where the trust boundaries exist.

The words which open my threat modeling book are “all models are wrong, some models are useful.” I would appreciate feedback on this model. What’s missing, and why does it matter? What attacks require showing a new element in this software model?

Browser security
[Update 1 — please leave comments here, not on Twitter]

  1. Fabio Cerullo suggests the layout engine. It’s not clear to me what additional threats can be seen if you add this explicitly, perhaps because I’m not an expert.
  2. Fernando Montenegro asks about network services such as DNS, which I’m adding, and also about shared trust (CA Certs), which overlaps with a question about supply chain from Mayer Sharma.
  3. Chris Rohlf points out the “web browser protection profile.”

I could be convinced otherwise, but I think that the supply chain is best addressed by a separate model. Having a secure installation and update mechanism is an important mitigation of many types of bugs, but this model is for thinking about the boundaries between the components.

In reviewing the protection profile, it mentions the following threats:

  • Malicious updates: Out of scope (supply chain)
  • Malicious/flawed add-on: Out of scope (supply chain)
  • Network eavesdropping/attack: Not showing all the data flows, for simplicity (is this the right call?)
  • Data access: Local storage is shown

Also, the protection profile is 88 pages long, and hard to engage with. While it provides far more detail and allows me to cross-check the software model, it doesn’t help me think about interactions between components.

[End update 1]

Adam’s new startup

A conversation with an old friend reminded me that there may be folks who follow this blog, but not the New School blog.

Over there, I’ve posted “Improving Security Effectiveness” about leaving Microsoft to work on my new company:

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

and about a job opening, “Seeking a technical leader for my new company:”

We have a new way to measure security effectiveness, and want someone who’ll drive delivery of that technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in the building of the company. The right person will understand that such a “green field” represents opportunity, and that we’ll have to build infrastructure as we grow.

This person might be a CTO, they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management and reports.

The Drama Triangle

As we head into summer conference season, drama is as predictable as vulnerabilities. I’m really not fond of either.

Look Sir Drama

What I am fond of, (other than Star Wars), as someone who spends a lot of time thinking about models, is the model of the “drama triangle.” First discussed by Stephen Karpman, the triangle has three roles, that of victim, persecutor and rescuer:

Drama triangle of victim, rescuer, persecutor

“The Victim-Rescuer-Persecutor Triangle is a psychological model for explaining specific co-dependent, destructive inter-action patterns, which negatively impact our lives. Each position on this triangle has unique, readily identifiable characteristics.” (From “Transcending The Victim-Rescuer-Persecutor Triangle.”)

One of the nifty things about this triangle — and one of the things missing from most popular discussion of it — is how the participants put different labels on the roles they are playing.

For example, a vulnerability researcher may perceive themselves as a rescuer, offering valuable advice to a victim of poor coding practice. Meanwhile, the company sees the researcher as a persecutor, making unreasonable demands of their victim-like self. In their response, the company calls their lawyers and becomes a persecutor, and simultaneously allows the rescuer to shift to the role of victim.

Rescuers (doubtless on Twitter) start popping up to vilify the company’s ham-handed response, pushing the company into perceiving themselves as more of a victim. [Note that I’m not saying that all vulnerability disclosure falls into these traps, or that pressuring vendors is not a useful tool for getting issues fixed. Also, the professionalization of bug finding, and the rise of bug bounty management products can help us avoid the triangle by improving communication, in part by learning to not play these roles.]

I like the “Transcending The Victim-Rescuer-Persecutor Triangle” article because it focuses on how, once “a person becomes entangled in any one of these positions, they literally keep spinning from one position to another, destroying the opportunity for healthy relationships.”

The first step, if I may, is recognizing and admitting you’re in a drama triangle, and refusing to play the game. There’s a lot more and I encourage you to go read “Transcending The Victim-Rescuer-Persecutor Triangle,” and pay attention to the wisdom therein. If you find the language and approach a little “soft”, then Kellen Von Houser’s “The Drama Triangle: Victims, Rescuers and Persecutors” has eight steps, each discussed in good detail:

  1. Be aware that the game is occurring
  2. Be willing to acknowledge the role or roles you are playing
  3. Be willing to look at the payoffs you get from playing those roles
  4. Disengage
  5. Avoid being sucked into other people’s battles
  6. Take responsibility for your behavior
  7. Breathe

There’s also useful advice at “Manipulation and Relationship Triangles.” I encourage you to spend a few minutes before the big conferences of the summer to think about what the drama triangle means in our professional lives, and see if we can do a little better this year.

[Update: If that’s enough of the wrong drama for you, you can check out “The Security Principles of Saltzer and Schroeder” or my “Threat Modeling Lessons from Star Wars” talk.]

Security Lessons from

There’s a great “long read” at CIO, “6 Software Development Lessons From’s Failed Launch.” It opens:

This article tries to go further than the typical coverage of The amazing thing about this story isn’t the failure. That was fairly obvious. No, the strange thing is the manner in which often conflicting information is coming out. Writing this piece requires some archeology: Going over facts and looking for inconsistencies to assemble the best information about what’s happened and pinpoint six lessons we might learn from it.

There’s a lot there, and I liked it even before lesson 6 (“Threat Modeling Matters”). Open analysis is generally better.

There’s a question of why this has to be done by someone like Matthew Heusser. No disrespect is intended, but why isn’t performing these analyses and sharing them? Part of the problem is that we live in an “outrage world” where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.

It would be great to see project analyses and attempts to learn from more projects that go sideways. But it would also be great to see these for security failures. As I asked in “What Happened At OPM,” we have these major hacks, and we learn nothing at all from them. (Or worse, we learn bad lessons, such as “don’t go looking for breaches.”)

The definition of insanity is doing the same thing over and over and hoping for different results. (Which may include asking the same question or writing the same blog post over and over, which is why I’m starting a company to improve security effectiveness.)

On Language

I was irked to see a tweet “Learned a new word! Pseudoarboricity: the number of pseudoforests needed to cover a graph. Yes, it is actually a word and so is pseudoforest.” The idea that some letter combinations are “actual words” implies that others are “not actual words,” and thus, that there is some authority who may tell me what letter combinations I am allowed to use or understand.

Balderdash. Adorkable balderdash, but balderdash nonetheless.

As any student of Orwell shall recall, the test of language is its comprehensibility, not its adhesion to some standard. As an author, I sometimes hear from people who believe themselves to be authorities, or who believe that they may select for me authorities as to the meanings of words, and who wish to tell me that my use of the word “threat” threatens their understanding, that the preface’s explicit discussion of the many plain meanings of the word is insufficient, or that my sentences are too long, comma-filled, dash deficient or otherwise Oxfordless in a way which seems to cause them to feel superior to me in a way they wish to, at some length, convey.

In fact, on occasion, they are irked. I recommend to them, and to you, “You Are What You Speak.”

I wish them the best, and fall back, if you’ll so allow, to a comment from another master of language, speaking through one of his characters:

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

The Web We Have to Save

Hossein Derakhshan was recently released from jail in Iran. He’s written a long and thoughtful article “The Web We Have to Save.” It’s worth reading in full, but here’s an excerpt:

Some of it is visual. Yes, it is true that all my posts on Twitter and Facebook look something similar to a personal blog: They are collected in reverse-chronological order, on a specific webpage, with direct web addresses to each post. But I have very little control over how it looks like; I can’t personalize it much. My page must follow a uniform look which the designers of the social network decide for me.

The centralization of information also worries me because it makes it easier for things to disappear. After my arrest, my hosting service closed my account, because I wasn’t able to pay its monthly fee. But at least I had a backup of all my posts in a database on my own web server. (Most blogging platforms used to enable you to transfer your posts and archives to your own web space, whereas now most platforms don’t let you so.) Even if I didn’t, the Internet archive might keep a copy. But what if my account on Facebook or Twitter is shut down for any reason? Those services themselves may not die any time soon, but it would be not too difficult to imagine a day many American services shut down accounts of anyone who is from Iran, as a result of the current regime of sanctions. If that happened, I might be able to download my posts in some of them, and let’s assume the backup can be easily imported into another platform. But what about the unique web address for my social network profile? Would I be able to claim it back later, after somebody else has possessed it? Domain names switch hands, too, but managing the process is easier and more clear— especially since there is a financial relationship between you and the seller which makes it less prone to sudden and untransparent decisions.

But the scariest outcome of the centralization of information in the age of social networks is something else: It is making us all much less powerful in relation to governments and corporations.

Ironically, I tweeted a link, but I think I’m going to try to go back to more blogging, even if the content might fit somewhere else. Hossein’s right. There’s a web here, and we should work to save it.

(Previous mentions of Hossein: “Hoder’s Denial,” “Free Hossein Derakhshan.”)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published “A Timeline of Government Data Breaches”:
OPM Data Breach

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa. If we apply this sort of approach then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take, for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.
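To make that last point concrete, “publishing lists of IoCs so defenders know what to look for” reduces, at its simplest, to matching logs against published indicators. Here is a toy sketch; every indicator value and log line below is invented for illustration, not drawn from any real feed.

```python
# A toy indicator-of-compromise (IoC) scan: match log lines against a
# published indicator list. All values here are hypothetical.
IOCS = {
    "203.0.113.42",      # a "known bad" address (TEST-NET range, made up)
    "badmodule.dll",     # a hypothetical malware filename
    "opm-lookalike.example",  # a hypothetical spoofed domain
}

def scan(log_lines):
    """Return (line_number, indicator) pairs where an IoC appears."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for ioc in IOCS:
            if ioc in line.lower():
                hits.append((n, ioc))
    return hits

logs = [
    "GET /update HTTP/1.1 host=opm-lookalike.example",
    "loaded module c:\\windows\\system32\\kernel32.dll",
    "outbound connection to 203.0.113.42:443",
]
print(scan(logs))  # [(1, 'opm-lookalike.example'), (3, '203.0.113.42')]
```

Real detection is far messier than substring matching, of course, but the sketch shows why the “not enough people publish IoCs” branch matters: without the list, there is nothing to scan for.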

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post, we “should declare the causes which impel them to the separation,” or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been contributory, and the attacker would have worked around a fix. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.
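Those roles can be made concrete as a tiny classification sketch. The findings and the role assignments below are hypothetical, chosen to illustrate the taxonomy, not judgments drawn from the actual OIG report.

```python
from enum import Enum

class Role(Enum):
    """Roles an audit failing might play relative to a specific breach."""
    CAUSAL = "fixing it would have stopped the attack"
    WORKED_AROUND = "the attacker would have routed around a fix"
    IRRELEVANT = "no plausible path from the failing to the attack"
    DISTRACTING = "remediation drew effort away from work that mattered"

# Hypothetical findings, each tagged with a judged role.
findings = {
    "no 2FA on critical systems": Role.CAUSAL,
    "stale authorizations to operate": Role.IRRELEVANT,
    "incomplete asset inventory": Role.WORKED_AROUND,
}

causal = [f for f, r in findings.items() if r is Role.CAUSAL]
print(causal)  # ['no 2FA on critical systems']
```

The value of making the taxonomy explicit is that it forces the question the post is asking: for each finding, which role did it actually play? Without public facts, every cell in that table is a guess.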

A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and whether there are others that make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

The Unanimous Declaration of the Thirteen United States of America


In CONGRESS, July 4, 1776

The unanimous Declaration of the thirteen united States of America,

When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. –That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security. —Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain [George III] is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.


Wassenaar Restrictions on Speech

[There are broader critiques by Katie Moussouris of HackerOne at “Legally Blind and Deaf – How Computer Crime Laws Silence Helpful Hackers” and Halvar Flake at “Why changes to Wassenaar make oppression and surveillance easier, not harder.” This post addresses the free speech issue.]

During the first crypto wars, cryptography was regulated under the US ITAR as a dual-use item, and exporting strong crypto (and thus, economically, including it in a generally available commercial or open source product) was effectively impossible.

A principle of our successful work to overcome those restrictions was that code is speech. Thus restrictions on code are restrictions on speech. The legal incoherence of the regulations was brought to an unavoidable crisis by Phil Karn, who submitted both the book Applied Cryptography and a floppy disk with the source code from the book for export licenses. The book received a license; the disk did not. This was obviously incoherent and Kafka-esque. At the time, American acceptance of incoherent, Kafka-esque rules was in much shorter supply.

Now, the new Wassenaar rules appear to contain restrictions on the export of a different type of code (page 209, category 4, see after the jump). (FX drew attention to this issue in this tweet. [Apparently, I wrote this in Jan, 2014, and forgot to hit post.])

A principle of our work was that code is speech. Thus restrictions on code are restrictions on speech. (Stop me if you’ve heard this one before.) I put forth several tweets containing PoCs I was able to type from memory, each of which, I believe, could in principle violate the Wassenaar rules. For example:

  • rlogin -froot $target
  • echo wiz | nc $target 25

It would be nice if someone would file the paperwork to export them on paper.

In these tweets, I’m not speaking for my employer or yours. I am speaking for poor, tired, and hungry cryptographers, yearning to breathe free, and not to live on Groundhog Day.
