Diagrams in Threat Modeling

When I think about how to threat model well, one of the most important elements is how much people need to keep in their heads: the cognitive load, if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted,” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons that I’m fond of diagrams is that they allow threat modelers to move information out of their heads and into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, whether in a hobby like knitting or in the care a chef brings to preparing a fine meal.

I think a lot about where drawing diagrams on a whiteboard falls on that spectrum. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboard

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work, it may be a conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, at the cost of making things more tedious for everyone in the room.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?”
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, “Best iOS OCR Scanning Apps,” which I’m ignoring for purposes of teasing things out a bit. Operationally, they’re probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?

What does the MS Secure Boot Issue teach us about key escrow?

Nothing.

No, seriously. Articles like “Microsoft Secure Boot key debacle causes security panic” and “Bungling Microsoft singlehandedly proves that golden backdoor keys are a terrible idea” draw on words in an advisory to say that this is all about golden keys and secure boot. This post is not intended to attack anyone, whether researchers, journalists, or Microsoft, but to address a rather inflammatory claim that’s being repeated.

Based on my read of a copy of the advisory (which I made because I cannot read words on an animated background (yes, I’m a grumpy old man (who uses too many parentheticals (especially when I’m sick)))), this is a nice discovery of an authorization failure.


What they found is:

The “supplemental” policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn’t know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy. The “supplemental” policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don’t contain any BCD rules either, which means that if they are loaded, you can enable testsigning.

That’s a fine discovery and a nice vuln. There are ways Microsoft might have designed this better; I’m going to leave those for another day.
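
To make the class of bug concrete, here is a minimal, hypothetical sketch of the kind of check the researchers describe as missing. None of these names or structures are Microsoft’s; they’re invented purely to illustrate an authorization failure, as opposed to a key disclosure.

```c
/* Hypothetical sketch of the authorization failure as I read the advisory.
 * None of these names come from Microsoft's code; they are invented to
 * illustrate the class of bug: an older loader that verifies a signature but
 * does not know the new "supplemental" policy type exists. */

#include <stdbool.h>

struct boot_policy {
    bool signature_valid;   /* the supplemental policy really is signed       */
    bool has_device_id;     /* supplemental policies omit the DeviceID        */
    bool has_bcd_rules;     /* ...and the BCD rules that gate testsigning     */
    bool is_supplemental;   /* a field the old loader doesn't know to check   */
};

/* Old-style loader: the flaw is in what it never asks, not what it checks. */
bool load_legacy_policy(const struct boot_policy *p)
{
    if (!p->signature_valid)
        return false;       /* signature verification works as designed       */

    /* Missing: "is this a base policy, or a supplemental fragment that was
     * only ever meant to be merged into one?" Without that question, a
     * signed fragment with no DeviceID and no BCD rules is accepted, and
     * testsigning can be enabled. No signing key is exposed at any point. */
    return true;
}
```

The point of the sketch is that the signature check succeeds exactly as designed; what’s absent is a question about what kind of policy was just verified.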

Where the post goes off the rails, in my view, is this:

About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a “secure golden key” is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don’t understand still? Microsoft implemented a “secure golden key” system.[1] And the golden keys got released from MS own stupidity.[2] Now, what happens if you tell everyone to make a “secure golden key” system? [3] (Bracketed numbers added – Adam)

So, [1], no they did not. [2] No it didn’t. [3] Even a stopped clock …


You could design a system in which there’s a master key, and accidentally release that key. Based on the advisory, Microsoft has not done that. (I have not talked to anyone at MS about this issue; I might have talked to people about the overall design, but don’t recall having done so.) What this is, is an authorization system with a design flaw. As far as I can tell, no keys have been released.

Look, there are excellent reasons to not design a “golden key” system. I talked about them at a fundamental engineering level in my threat modeling book, and posted the excerpt in “Threat Modeling Crypto Back Doors.”

The typical way the phrase “golden key” is used (albeit fuzzily) is that there is a golden key which unlocks communications. That is a bad idea. This is not that, and we as engineers or advocates should not undercut our position on that bad idea by referring to this research as if it really bears on that “debate.”

“Think Like an Attacker” is an opt-in mistake

I’ve repeatedly spoken out against “think like an attacker.”

Now I’m going to argue from authority. In this long article, “The Obama Doctrine,” the President of the United States says “The degree of tribal division in Libya was greater than our analysts had expected.”

So let’s think about that statement and what it means. First, it means that the multi-billion dollar analytic apparatus of the United States made a mistake, a serious one about which the President cares, because it impacted his foreign policy. Second, that mistake was about how people think. Third, that group of people was a society, and one that has interacted with the United States since, oh, I don’t know, someone wrote words like “From the halls of Montezuma to the shores of Tripoli.” (And dig the Marines, kickin’ it old skool with that video.) Fourth, it was not a group that attempts to practice operational security in any way.

So if we consider that the analytical capability of the US can get that wrong, do you really want to try to think like Anonymous, think like 61398, like 8200? Are you going to do this perfectly, or are there chances to make mistakes? Alternately, do you want to require everyone who threat models to know how attackers think? Understanding how other people think and prioritize requires a great deal of work. There are entire fields, like anthropology and sociology, dedicated to doing it well. Should we start our defense by reading books on the motivational structures of the PLA or the IDF?

The simple fact is, you don’t need to. You can start from what people are building or deploying. (I wrote a book on how.) The second simple fact is repeating that phrase upsets people. When I first joined Microsoft, I used that phrase. One day, a developer grabbed me after a meeting, and politely told me that he didn’t understand it. Oh, wait, this was Microsoft in 2006. He told me I was a fucking idiot and I should give useful advice. After a bit more conversation, he also told me that he had no idea how the fuck an attacker thought, and if I thought he had time to read a book to learn about it, I could write the goddamned features customers pay for while he read.

Every time someone tells me to think like an attacker, I think about that conversation. I appreciate the honesty that the fellow showed, if not his manner. But (as Dave Weinstein pointed out) “A generalized form of this would be ‘Stop giving developers completely un-actionable “guidance.”’” Now, Dave and I worked together at Microsoft, so maybe there’s a similar experience in his past.

Now, this does not mean that we don’t need to pay attention to what real attackers do. It means that we don’t need to walk a mile in their shoes to defend effectively against it.

Previously, “Think Like An Attacker?,” “The Discipline of “think like an attacker”,” and “Think Like An Attacker? Flip that advice!.” [Edited, also previously, at the New School blog: “Modeling Attackers and Their Motives.”]

Towards a model of web browser security

One of the values of models is they can help us engage in areas where otherwise the detail is overwhelming. For example, C is a model of how a CPU works that allows engineers to defer certain details to the compiler, rather than writing in assembler. It empowers software developers to write for many CPU architectures at once. Many security flaws happen in areas the models simplify. For example, what if the stack grew away from the stack pointer, rather than towards it? The layout of the stack is a detail that is modeled away.
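
As a minimal sketch of that kind of modeled-away detail (the function and buffer here are invented for illustration):

```c
#include <string.h>

/* The C abstract machine says nothing about where 'name' sits relative to the
 * saved return address; that layout is a detail the model hides. A classic
 * stack overflow depends on it entirely: if the copy below runs past the end
 * of 'name', whether it clobbers the return address is a property of the
 * layout, not of the C source. */
void greet(const char *input)
{
    char name[16];
    strcpy(name, input);    /* no bounds check: undefined behavior for long input */
}
```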

Information security is a broad industry, requiring and rewarding specialization. We often see intense specialization, which can naturally result in building expertise in silos, and tribal separation of knowledge creates more gaps. At the same time, there is a stereotype that generalists end up focused on policy, or “risk management,” where a lack of technical depth can hide. (That’s not to say that all risk managers are generalists or that there’s not real technical depth to some of them.)


If we want to enable more security generalists, and we want those generalists to remain grounded, we need to make it easier to learn about new areas. Part of that is good models, part of that is good exercises that appropriately balance challenge to skill level, part of that is the availability of mentoring, and I’m sure there are other parts I’m missing.

I enjoyed many things about Michal Zalewski’s book “The Tangled Web.” One thing I wanted was a better way to keep track of who attacks whom, to help me contextualize and remember the attacks. But such a model is not trivial to create. This morning, motivated by a conversation between Trey Ford and Chris Rohlf, I decided to take a stab at drafting a model for thinking about where the trust boundaries exist.

The words which open my threat modeling book are “all models are wrong, some models are useful.” I would appreciate feedback on this model. What’s missing, and why does it matter? What attacks require showing a new element in this software model?

[Diagram: browser security model]
[Update 1 — please leave comments here, not on Twitter]

  1. Fabio Cerullo suggests the layout engine. It’s not clear to me what additional threats can be seen if you add this explicitly, perhaps because I’m not an expert.
  2. Fernando Montenegro asks about network services such as DNS, which I’m adding and also about shared trust (CA Certs), which overlap with a question about supply chain from Mayer Sharma.
  3. Chris Rohlf points out the “web browser protection profile.”

I could be convinced otherwise, but think that the supply chain is best addressed by a separate model. Having a secure installation and update mechanism is an important mitigation of many types of bugs, but this model is for thinking about the boundaries between the components.

In reviewing the protection profile, I see that it mentions the following threats:

  • Malicious updates: Out of scope (supply chain)
  • Malicious/flawed add-on: Out of scope (supply chain)
  • Network eavesdropping/attack: Not showing all the data flows, for simplicity (is this the right call?)
  • Data access: Local storage is shown

Also, the protection profile is 88 pages long, and hard to engage with. While it provides far more detail and allows me to cross-check the software model, it doesn’t help me think about interactions between components.

End update 1]

Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now, I’ve been one-upped, and, depending on audience, I have a new line to use.

Last week, on Veracode’s blog, Pete Chestna provided the perfect flip of “think like an attacker” to re-frame problems for security people. It’s “think like a developer.” If you, oh great security guru, cannot think like a developer, then for heaven’s sake, stop asking developers to think like attackers.

Seattle event: Ada’s Books

[Event poster: Shostack threat modeling at Ada’s Books]

For Star Wars Day, I’m happy to share this event poster for my talk at Ada’s Books in Seattle. Technical Presentation: Adam Shostack shares Threat Modeling Lessons with Star Wars.

This will be a less technical talk with plenty of discussion and interactivity, drawing on some of the content from “Security Lessons from Star Wars,” adapted for a more general audience.

The Discipline of “think like an attacker”

John Kelsey had some great things to say in a comment on “Think Like An Attacker.” I’ve excerpted some key bits to respond to them here.

Perhaps the most important is to get the designer to stop looking for reasons attacks are impossible, and start looking for reasons they’re possible. That’s a pattern I’ve seen over and over again–smart people who really know their system also usually like their system, and want it to be secure. And so they spend a lot of time thinking about why their system is secure. “Nobody could steal our PIN because we encrypt it with triple-DES.”

So this is a great goal. I have two questions: first, is it reasonable? How many people can really step outside their design and regard it with a new perspective? How many people can then analyze the security of a system they’ve designed? (Is there a formal name for this problem? I call it ‘creator-blindness.’) I’m not sure exhorting people to think like an attacker helps. This problem isn’t unique to security, which brings me to my second question: is it effective? I was once taught to read my writing aloud as a way of finding mistakes. I teach people to diagram their system and then use a system we call “STRIDE per element” to help people look at it. By giving people a structure for analysis, we help them step outside of that creator frame.
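
For readers who haven’t seen it, here is a rough sketch of what that structure can look like, with the commonly cited STRIDE-per-element chart expressed as a small lookup table; treat the exact mapping as a starting point for discussion rather than a definitive rule.

```c
/* Illustrative sketch: the STRIDE-per-element idea as a lookup table.
 * The mapping below is the commonly cited chart; treat it as a starting
 * point for analysis, not a definitive rule. */
#include <stdio.h>

struct element_threats {
    const char *element;    /* DFD element type                       */
    const char *stride;     /* STRIDE categories typically considered */
};

static const struct element_threats chart[] = {
    { "External entity", "Spoofing, Repudiation" },
    { "Process",         "Spoofing, Tampering, Repudiation, Info disclosure, Denial of service, Elevation of privilege" },
    { "Data flow",       "Tampering, Info disclosure, Denial of service" },
    { "Data store",      "Tampering, Repudiation, Info disclosure, Denial of service" },
};

int main(void)
{
    /* Walk each element of the diagram and ask the per-element questions. */
    for (size_t i = 0; i < sizeof(chart) / sizeof(chart[0]); i++)
        printf("%-16s %s\n", chart[i].element, chart[i].stride);
    return 0;
}
```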

A second goal of that “think like an attacker” exhortation is to get people to realize that, in order to know whether their system is secure, they need to learn something about what tools and resources an attacker is likely to have.

So, for a moment, let’s assume that this is a reasonable goal, and one we can expect every developer who hears the phrase to go pursue. Where do they go? How much time should they devote to it? Again, I’m not talking about the use of the phrase within the security engineering community, but in software engineering more generally. Secondly (again), there’s the question of “is this the most effective way to push people?”

Third, there’s a mindset of being an attacker. I don’t know how to teach that. It’s not just about intelligence–I’ve worked with stunningly brilliant people who don’t seem to have that mindset, and with people who are much less brilliant in that brute-force impressive brain sense, but who just seem to have the right kind of mind to break stuff.

Well, that I can’t argue with. All I’ll say is that we’ve been exhorting people to think like attackers for years, and it hasn’t helped.

I believe that security analysis is a skill which can be taught. The best have both talent and have worked to develop that talent. I hope and expect that we can figure out how to do so. Figuring that out will involve figuring out what pedagogic approaches have failed, so we can set them aside, and make room for experimentation, chaos, and — we hope — actual improvements. I believe that, when asked of non-security experts, the ‘think like an attacker’ exhortation is on that list of things we should set aside.

Finally, a side note on the title. If you’re indisciplined, feel free to skip to about 3:10.

Think Like An Attacker?

One of the problems with being quoted in the press is that even your mom writes to you with questions like “And what’s wrong with “think like an attacker?” I think it’s good advice!”

Thanks for the confidence, mom!

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear. Some smart folks like Yoshi Kohno are trying to teach it. (I haven’t seen a report on how it’s gone.)

Even if Yoshi is succeeding, it’s hard to teach a way of thinking. It takes a quarter or more at a university. I’m not claiming that ‘think like an attacker’ isn’t teachable, but I will claim that most people don’t know how. What’s worse, the way we say it, we sometimes imply that you should be embarrassed if you can’t think like an attacker.

Lately, I’ve been challenging people to think like a professional chef. Most people have no idea how a chef spends their days, or how they approach a problem. They have no idea how to plan a menu, or how to cook a hundred or more dinners in an hour.

We need to give advice that can be followed. We need to teach people how to think about security. Repeating the “think like an attacker” mantra may be useful to a small class of well-oriented experts. For everyone else, it’s like saying “just ride the bike!” rather than teaching them step-by-step. We can and should do better at understanding people’s capabilities, giving them advice to match, and training and education to improve.

Understanding people’s capabilities, giving them advice to match and helping them improve might not be a bad description of all the announcements we made yesterday.

In particular, the new threat modeling process is built on something we expect an engineer will know: their software design. It’s a better starting point than “think like a civil engineer.”

[Update: See also my follow-up post, “The Discipline of ‘think like an attacker’.”]