Survey for How to Measure Anything In Cybersecurity Risk

This is a survey from Doug Hubbard, author of How to Measure Anything. He is
currently writing a new book with Richard Seiersen (GM of Cyber Security at
GE Healthcare), titled How to Measure Anything in Cybersecurity Risk. As part
of the research for that book, they are asking for your assistance as an
information security professional: please take the following survey. It is
estimated to take 10 to 25 minutes to complete. In return for participating,
you will be invited to a webinar offering a sneak peek at the book's content,
and you will receive a summary report of the survey. The survey also includes
requests for feedback on the survey itself.

https://www.surveymonkey.com/r/VMHDQ7N

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. But I think there’s a hole in the argument that’s worth fixing.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.


Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

Towards a model of web browser security

One of the values of models is that they help us engage with areas where the detail would otherwise be overwhelming. For example, C is a model of how a CPU works that allows engineers to defer certain details to the compiler, rather than writing in assembler, and it empowers software developers to write for many CPU architectures at once. Many security flaws happen in the areas the models simplify. For example, what if the stack grew away from the stack pointer, rather than towards it? The layout of the stack is a detail that is modeled away.
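
To make that concrete, here is a minimal sketch in C of the kind of flaw that lives in the modeled-away detail. The code is hypothetical, and a modern compiler's stack protector will likely trap it, but it shows the gap between the model ("name is eight chars") and the machine (whatever actually sits next to name on the stack):

    #include <stdio.h>
    #include <string.h>

    /* C's model says "name is an array of 8 chars"; it says nothing about
     * what sits beside it on the stack (saved registers, the return
     * address). strcpy happily writes past the end, corrupting whatever
     * the real layout puts there. Which direction the stack grows, and
     * thus what gets clobbered first, is exactly the detail the model
     * abstracts away. */
    void greet(const char *attacker_controlled) {
        char name[8];
        strcpy(name, attacker_controlled); /* no bounds check: classic overflow */
        printf("hello, %s\n", name);
    }

    int main(void) {
        greet("this string is far longer than eight bytes");
        return 0;
    }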

Information security is a broad industry, requiring and rewarding specialization. We often see intense specialization, which can naturally result in expertise being built in silos, and tribal separation of knowledge creates more gaps. At the same time, there is a stereotype that generalists end up focused on policy, or “risk management,” where a lack of technical depth can hide. (That’s not to say that all risk managers are generalists, or that none of them have real technical depth.)


If we want to enable more security generalists, and we want those generalists to remain grounded, we need to make it easier to learn about new areas. Part of that is good models, part of that is good exercises that appropriately balance challenge to skill level, part of that is the availability of mentoring, and I’m sure there are other parts I’m missing.

I enjoyed many things about Michal Zalewski’s book “The Tangled Web.” One thing I wanted was a better way to keep track of who attacks whom, to help me contextualize and remember the attacks. But such a model is not trivial to create. This morning, motivated by a conversation between Trey Ford and Chris Rohlf, I decided to take a stab at drafting a model for thinking about where the trust boundaries exist.

The words which open my threat modeling book are “all models are wrong, some models are useful.” I would appreciate feedback on this model. What’s missing, and why does it matter? What attacks require showing a new element in this software model?

[Figure: Browser security model]
[Update 1 — please leave comments here, not on Twitter]

  1. Fabio Cerullo suggests the layout engine. It’s not clear to me what additional threats can be seen if you add this explicitly, perhaps because I’m not an expert.
  2. Fernando Montenegro asks about network services such as DNS, which I’m adding, and also about shared trust (CA certs), which overlaps with a question about supply chain from Mayer Sharma.
  3. Chris Rohlf points out the web browser protection profile (https://t.co/FehDXiMeFv).

I could be convinced otherwise, but think that the supply chain is best addressed by a separate model. Having a secure installation and update mechanism is an important mitigation of many types of bugs, but this model is for thinking about the boundaries between the components.
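
To make the “boundaries between the components” discussion concrete, here’s a strawman sketch in C that treats the model as data. The component names and flows are my illustrative assumptions, not the elements of the diagram above; the point is that every boundary-crossing flow is a prompt for the “what’s missing” question:

    #include <stdio.h>

    /* A strawman enumeration of a browser model as data. The components
     * and flows are illustrative guesses, not the published diagram;
     * each flow that crosses a trust boundary is a place to ask
     * "what attacks live here, and do they need a new element?" */
    struct flow {
        const char *from, *to, *data;
        int crosses_boundary; /* 1 if the endpoints do not fully trust each other */
    };

    static const struct flow flows[] = {
        { "web server",      "network stack", "HTTP(S) responses",     1 },
        { "network stack",   "layout engine", "HTML/CSS to parse",     1 },
        { "layout engine",   "JS engine",     "scripts to execute",    1 },
        { "JS engine",       "local storage", "cookies, localStorage", 1 },
        { "OS DNS resolver", "network stack", "name resolutions",      1 },
        { "browser chrome",  "layout engine", "URL bar, settings",     0 },
    };

    int main(void) {
        for (size_t i = 0; i < sizeof(flows) / sizeof(flows[0]); i++)
            if (flows[i].crosses_boundary)
                printf("boundary: %s -> %s (%s)\n",
                       flows[i].from, flows[i].to, flows[i].data);
        return 0;
    }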

The protection profile mentions the following threats:

Threat                         Comment
Malicious updates              Out of scope (supply chain)
Malicious/flawed add-on        Out of scope (supply chain)
Network eavesdropping/attack   Not showing all the data flows, for simplicity (is this the right call?)
Data access                    Local storage is shown

Also, the protection profile is 88 pages long, and hard to engage with. While it provides far more detail and allows me to cross-check the software model, it doesn’t help me think about interactions between components.

End update 1]

Adam’s new startup

A conversation with an old friend reminded me that there may be folks who follow this blog, but not the New School blog.

Over there, I’ve posted “Improving Security Effectiveness” about leaving Microsoft to work on my new company:

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

and about a job opening, “Seeking a technical leader for my new company:”

We have a new way to measure security effectiveness, and want someone who’ll drive delivery of the technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in the building of the company. The right person will understand that such a “green field” represents both opportunity and the need to build infrastructure as we grow.

This person might be a CTO, they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management and reports.

The Drama Triangle

As we head into summer conference season, drama is as predictable as vulnerabilities. I’m really not fond of either.

[Image: “Look sir, drama”]

What I am fond of (other than Star Wars), as someone who spends a lot of time thinking about models, is the model of the “drama triangle.” First discussed by Stephen Karpman, the triangle has three roles, those of victim, persecutor, and rescuer:

[Figure: Drama triangle of victim, rescuer, persecutor]


“The Victim-Rescuer-Persecutor Triangle is a psychological model for explaining specific co-dependent, destructive inter-action patterns, which negatively impact our lives. Each position on this triangle has unique, readily identifiable characteristics.” (From “Transcending The Victim-Rescuer-Persecutor Triangle.”)

One of the nifty things about this triangle — and one of the things missing from most popular discussion of it — is how the participants put different labels on the roles they are playing.

For example, a vulnerability researcher may perceive themselves as a rescuer, offering valuable advice to a victim of poor coding practice. Meanwhile, the company sees the researcher as a persecutor, making unreasonable demands of their victim-like self. In their response, the company calls their lawyers and becomes a persecutor, and simultaneously allows the rescuer to shift to the role of victim.

Rescuers (doubtless on Twitter) start popping up to vilify the company’s ham-handed response, pushing the company into perceiving themselves as more of a victim. [Note that I’m not saying that all vulnerability disclosure falls into these traps, or that pressuring vendors is not a useful tool for getting issues fixed. Also, the professionalization of bug finding, and the rise of bug bounty management products can help us avoid the triangle by improving communication, in part by learning to not play these roles.]

I like the “Transcending The Victim-Rescuer-Persecutor Triangle” article because it focuses on how “a person becomes entangled in any one of these positions, they literally keep spinning from one position to another, destroying the opportunity for healthy relationships.”

The first step, if I may, is recognizing and admitting you’re in a drama triangle, and refusing to play the game. There’s a lot more and I encourage you to go read “Transcending The Victim-Rescuer-Persecutor Triangle,” and pay attention to the wisdom therein. If you find the language and approach a little “soft”, then Kellen Von Houser’s “The Drama Triangle: Victims, Rescuers and Persecutors” has eight steps, each discussed in good detail:

  1. Be aware that the game is occurring
  2. Be willing to acknowledge the role or roles you are playing
  3. Be willing to look at the payoffs you get from playing those roles
  4. Disengage
  5. Avoid being sucked into other people’s battles
  6. Take responsibility for your behavior
  7. Breathe

There’s also useful advice at “Manipulation and Relationship Triangles.” I encourage you to spend a few minutes before the big conferences of the summer to think about what the drama triangle means in our professional lives, and see if we can do a little better this year.

[Update: If that’s enough of the wrong drama for you, you can check out “The Security Principles of Saltzer and Schroeder” or my “Threat Modeling Lessons from Star Wars” talk.]

Security Lessons from Healthcare.gov

There’s a great “long read” at CIO, “6 Software Development Lessons From Healthcare.gov’s Failed Launch.” It opens:

This article tries to go further than the typical coverage of Healthcare.gov. The amazing thing about this story isn’t the failure. That was fairly obvious. No, the strange thing is the manner in which often conflicting information is coming out. Writing this piece requires some archeology: Going over facts and looking for inconsistencies to assemble the best information about what’s happened and pinpoint six lessons we might learn from it.

There’s a lot there, and I liked it even before lesson 6 (“Threat Modeling Matters”). Open analysis is generally better.

There’s a question of why this has to be done by someone like Matthew Heusser. No disrespect is intended, but why isn’t Healthcare.gov performing these analyses and sharing them? Part of the problem is that we live in an “outrage world” where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.

It would be great to see project analyses and attempts to learn from more projects that go sideways. But it would also be great to see these for security failures. As I asked in “What Happened At OPM,” we have these major hacks, and we learn nothing at all from them. (Or worse, we learn bad lessons, such as “don’t go looking for breaches.”)

The definition of insanity is doing the same thing over and over and hoping for different results. (Which may include asking the same question or writing the same blog post over and over, which is why I’m starting a company to improve security effectiveness.)

On Language

I was irked to see a tweet “Learned a new word! Pseudoarboricity: the number of pseudoforests needed to cover a graph. Yes, it is actually a word and so is pseudoforest.” The idea that some letter combinations are “actual words” implies that others are “not actual words,” and thus, that there is some authority who may tell me what letter combinations I am allowed to use or understand.

Balderdash. Adorkable balderdash, but balderdash nonetheless.

As any student of Orwell shall recall, the test of language is its comprehensibility, not its adhesion to some standard. As an author, I sometimes hear from people who believe themselves to be authorities, or who believe that they may select for me authorities as to the meanings of words, and who wish to tell me that my use of the word “threat” threatens their understanding, that the preface’s explicit discussion of the many plain meanings of the word is insufficient, or that my sentences are too long, comma-filled, dash deficient or otherwise Oxfordless in a way which seems to cause them to feel superior to me in a way they wish to, at some length, convey.


In fact, on occasion, they are irked. I recommend to them, and to you, “You Are What You Speak.”

I wish them the best, and fall back, if you’ll so allow, to a comment from another master of language, speaking through one of his characters:

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

The Web We Have to Save

Hossein Derakhshan was recently released from jail in Iran. He’s written a long and thoughtful article “The Web We Have to Save.” It’s worth reading in full, but here’s an excerpt:

Some of it is visual. Yes, it is true that all my posts on Twitter and Facebook look something similar to a personal blog: They are collected in reverse-chronological order, on a specific webpage, with direct web addresses to each post. But I have very little control over how it looks like; I can’t personalize it much. My page must follow a uniform look which the designers of the social network decide for me.

The centralization of information also worries me because it makes it easier for things to disappear. After my arrest, my hosting service closed my account, because I wasn’t able to pay its monthly fee. But at least I had a backup of all my posts in a database on my own web server. (Most blogging platforms used to enable you to transfer your posts and archives to your own web space, whereas now most platforms don’t let you so.) Even if I didn’t, the Internet archive might keep a copy. But what if my account on Facebook or Twitter is shut down for any reason? Those services themselves may not die any time soon, but it would be not too difficult to imagine a day many American services shut down accounts of anyone who is from Iran, as a result of the current regime of sanctions. If that happened, I might be able to download my posts in some of them, and let’s assume the backup can be easily imported into another platform. But what about the unique web address for my social network profile? Would I be able to claim it back later, after somebody else has possessed it? Domain names switch hands, too, but managing the process is easier and more clear— especially since there is a financial relationship between you and the seller which makes it less prone to sudden and untransparent decisions.

But the scariest outcome of the centralization of information in the age of social networks is something else: It is making us all much less powerful in relation to governments and corporations.

Ironically, I tweeted a link, but I think I’m going to try to go back to more blogging, even if the content might fit somewhere else. Hossein’s right. There’s a web here, and we should work to save it.

(Previous mentions of Hossein: “Hoder’s Denial,” “Free Hossein Derakhshan.”)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published “A Timeline of Government Data Breaches”:

[Image: OPM data breach timeline]

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa. If we apply this sort of approach then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take, for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.
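
Chains like these branch, so one way to keep the analysis honest is to record it as a small tree rather than a single list. Here is a minimal sketch; the nodes simply paraphrase the chains above and assert nothing new about OPM:

    #include <stdio.h>

    /* A "why" chain is really a tree: each answer can have alternate
     * answers, as the two chains above show. Storing it as data makes
     * the branches explicit. */
    struct why {
        const char *claim;
        const struct why *children[3]; /* alternate answers to "why?" */
    };

    static const struct why pay     = { "Because industry pays better", { NULL } };
    static const struct why hires   = { "Because organizations hire out of agencies", { &pay } };
    static const struct why locks   = { "Because someone there knows how to lock doors", { &hires } };
    static const struct why chaos   = { "Because networks are chaotic and change frequently", { NULL } };
    static const struct why stealth = { "Because finding intruders in the house is hard", { &chaos } };
    static const struct why root    = { "A focus on locking doors while intruders are in the house",
                                        { &locks, &stealth } };

    static void dump(const struct why *n, int depth) {
        printf("%*s- %s\n", depth * 2, "", n->claim);
        for (int i = 0; i < 3 && n->children[i]; i++)
            dump(n->children[i], depth + 1);
    }

    int main(void) {
        dump(&root, 0);
        return 0;
    }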

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post, we “should declare the causes which impel them to the separation,” or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been causal, but the attacker would have worked around it. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.


A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and about whether there are others that make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

Wassenaar Restrictions on Speech

[There are broader critiques by Katie Moussouris of HackerOne at “Legally Blind and Deaf – How Computer Crime Laws Silence Helpful Hackers” and Halvar Flake at “Why changes to Wassenaar make oppression and surveillance easier, not harder.” This post addresses the free speech issue.]

During the first crypto wars, cryptography was regulated under the US ITAR regulations as a dual-use item, and exporting strong crypto (and thus, as an economic matter, including it in a generally available commercial or open source product) was effectively impossible.

A principle of our successful work to overcome those restrictions was that code is speech. Thus restrictions on code are restrictions on speech. The legal incoherence of the regulations was brought to an unavoidable crisis by Phil Karn, who submitted both the book Applied Cryptography and a floppy disk with the source code from the book for an export license. The book received a license; the disk did not. This was obviously incoherent and Kafka-esque. At the time, American acceptance of incoherent, Kafka-esque rules was in much shorter supply.

Now, the new Wassenaar rules appear to contain restrictions on the export of a different type of code (page 209, category 4, see after the jump). (FX drew attention to this issue in this tweet. [Apparently, I wrote this in Jan, 2014, and forgot to hit post.])

A principle of our work was that code is speech. Thus restrictions on code are restrictions on speech. (Stop me if you’ve heard this one before.) I put forth several tweets that contain PoC I was able to type from memory, each of which, I believe, in principle, could violate the Wassenaar rules. For example:

  • rlogin -froot $target (the old rlogind “-froot” authentication bypass)
  • echo wiz | nc $target 25 (the old sendmail WIZ backdoor)

It would be nice if someone would file the paperwork to export them on paper.

In those tweets, I’m not speaking for my employer or yours. I am speaking for poor, tired, and hungry cryptographers, yearning to breathe free, and to not live on Groundhog Day.
