Threat Modeling and Risk Assessment

Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.

So first, what was said:

(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. :-)

I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. My issue is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.

I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.

First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.

And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)
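To make the idea concrete, here’s a minimal sketch of what a bug-bar style triage rule might look like as code. The categories, thresholds, and the authentication bump are illustrative assumptions on my part, not Microsoft’s actual bug bar:

```python
# Illustrative sketch of a "security bug bar": a fixed table that maps a
# threat's effect to a bug priority, so security bugs can be compared
# against other work items in ordinary triage. Categories are made up
# for illustration.

def triage(effect: str, authentication_required: bool) -> str:
    """Map a threat's effect (and reachability) to a bug priority."""
    bar = {
        "remote code execution": "critical",
        "elevation of privilege": "important",
        "information disclosure": "moderate",
        "denial of service": "low",
    }
    priority = bar.get(effect.lower(), "needs-review")
    # Threats reachable without authentication get bumped up a level.
    if not authentication_required and priority == "important":
        priority = "critical"
    return priority

print(triage("Elevation of Privilege", authentication_required=False))  # critical
print(triage("Denial of Service", authentication_required=True))        # low
```

The point of a bar like this is exactly the one above: the security expertise is encoded in a table the triage meeting can apply, rather than locked up in the threat modeling session.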

My concern, and the reason I got into a back and forth, is I suspect that putting risk assessment into threat modeling keeps organizations from ensuring that expertise is in bug triage, and that’s risky.

(As usual, these opinions are mine, and may differ from those of my employer.)

[Updated to correct editing issues.]

Emergent Map: Streets of the US

This is really cool. All Streets is a map of the United States made of nothing but roads. A surprisingly accurate map of the country emerges from the chaos of our roads:

Allstreets poster

All Streets consists of 240 million individual road segments. No other features — no outlines, cities, or types of terrain — are marked, yet canyons and mountains emerge as the roads course around them, and sparser webs of road mark less populated areas. More details can be found here, with additional discussion of the previous version here.

In the discussion page, “Fry” writes:

The result is a map made of 240 million segments of road. It’s very difficult to say exactly how many individual streets are involved — since a winding road might consist of dozens or even hundreds of segments — but I’m sure there’s someone deep inside the Census Bureau who knows the exact number.

Which raises a fascinating question: is there a Platonic definition of “a road”? Is the question answerable in the sort of concrete way that I can say “there are 2 pens in my hand”? We tend to believe that things are countable, but as you try to count them at larger scales, the question of what is a discrete thing grows in importance. We see this when map software tells us to “continue on Foo Street.” Most drivers don’t care about such instructions; the road is the same road, insofar as you can drive in a straight line and be on what seems the same “stretch of pavement.” All that differs is the signs (if there are signs).

There’s a story that when Bostonians named Washington Street after our first President, they changed the names of all the streets where they crossed Washington Street, to draw attention to the great man. Are those different streets? They are likely different segments, but I think that for someone to know the number of streets in the US requires not an ontological analysis of the nature of street, but rather a purpose-driven one. Who needs to know how many individual streets are in the US? What would they do with that knowledge? Will they count gravel roads? What about new roads, under construction, or roads in the process of being torn up? This weekend of “carmageddon” closing of the 405 in LA, does the 405 count as a road?

Only with these questions answered could someone answer the question of “how many streets are there?” People often steam-roller over such issues to get to answers when they need them, and that may be ok, depending on what details are flattened. Me, I’ll stick with “a great many,” since it is accurate enough for all my purposes.
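You can see how much the definition matters with a toy example. The street names and segment data below are invented for illustration, but the three counts show three defensible answers to “how many streets?” from the same data:

```python
# Illustrative only: the same segment data yields different "street"
# counts depending on the definition chosen. A segment is a named
# stretch of road between two endpoints (junction IDs).

segments = [
    ("Washington St", (0, 1)),
    ("Washington St", (1, 2)),
    ("Foo St",        (2, 3)),
    ("Foo St",        (5, 6)),   # a second, disconnected "Foo St"
    ("Bar Rd",        (3, 4)),
]

# Definition 1: every segment is a street.
by_segment = len(segments)

# Definition 2: streets are distinct names.
by_name = len({name for name, _ in segments})

# Definition 3: streets are connected runs of same-named segments
# (union-find over endpoints shared by segments with the same name).
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for name, (a, b) in segments:
    union((name, a), (name, b))  # nodes are keyed by (name, endpoint)

by_connected_name = len({find((name, e))
                         for name, (a, b) in segments for e in (a, b)})

print(by_segment, by_name, by_connected_name)  # 5 3 4
```

Five segments, three names, four connected stretches: none of the answers is wrong, but each serves a different purpose.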

So the takeaway for you? Well, there are two. First, even with the seemingly most concrete of questions, definitions matter a lot. When someone gives you big numbers to influence behavior, be sure to understand what they measured and how, and what decisions they made along the way. In information security, a great many people announce seemingly precise and often scary-sounding numbers that, on investigation, mean far different things than they seem to. (Or, more often, far less.)

And second, despite what I wrote above, it’s not the whole country that emerges. It’s the contiguous 48. Again, watch those definitions, especially for what’s not there.

Previously on Emergent Chaos: Steve Coast’s “Map of London” and “Map of Where Tourists Take Pictures.”

The 1st Software And Usable Security Aligned for Good Engineering (SAUSAGE) Workshop

National Institute of Standards and Technology
Gaithersburg, MD USA
April 5-6, 2011

Call for Participation

The field of usable security has gained significant traction in recent years, evidenced by the annual presentation of usability papers at the top security conferences, and security papers at the top human-computer interaction (HCI) conferences. Evidence is growing that significant security vulnerabilities are often caused by security designers’ failure to account for human factors. Despite growing attention to the issue, these problems are likely to continue until the underlying development processes address usable security.

See the workshop site for more details.

“Towards Better Usability, Security and Privacy of Information Technology”

“Towards Better Usability, Security and Privacy of Information Technology” is a great survey of the state of usable security and privacy:

Usability has emerged as a significant issue in ensuring the security and privacy of computer systems. More-usable security can help avoid the inadvertent (or even deliberate) undermining of security by users. Indeed, without sufficient usability to accomplish tasks efficiently and with less effort, users will often tend to bypass security features. A small but growing community of researchers, with roots in such fields as human-computer interaction, psychology, and computer security, has been conducting research in this area.

Regardless of how familiar you are with usable security, this report is a worthwhile read.

Dear AT&T

You never cease to amaze me with your specialness. You’ve defined a way to send MMS on a network you own, with message content you control, and there’s no way to see the full message:


In particular, I can’t see the password that I need to see the message.

SOUPS Keynote & Slides

This week, the annual Symposium on Usable Privacy and Security (SOUPS) is being held on the Microsoft campus. I delivered a keynote, entitled “Engineers Are People Too:”

In “Engineers Are People, Too” Adam Shostack will address an often invisible link in the chain between research on usable security and privacy and delivering that usability: the engineer. All too often, engineers are assumed to have infinite time and skills for usability testing and iteration. They have time to read papers, adapt research ideas to the specifics of their product, and still ship cool new features. This talk will bring together lessons from enabling Microsoft’s thousands of engineers to threat model effectively, share some new approaches to engineering security usability, and propose new directions for research.

A fair number of people have asked for the slides, and they’re here: Engineers Are People Too.

It’s Hard to Nudge

There’s a notion that government can ‘nudge’ people to do the right thing. Big examples include making organ donation opt-out rather than opt-in (donation rates go from 10-20% to 80-90%, which is pretty clearly better than putting those organs in the ground or crematoria). Another classic example was participation in 401(k) retirement accounts, but somehow after the market meltdown, that’s getting less press.

A smaller example: when people are told they’re using more power than their neighbors, their power consumption declines. Awesomeness, right? Conservation is the easiest, freest power you can get. Remember that a 150-watt lightbulb consumes twice as much power as your laptop, and most of that goes to waste heat. But I digress. Let’s go back to that nudge study, described in this Slate article:

In a study evaluating the program’s effectiveness, Opower researchers compared power use before and after the HERs began arriving, and further compared this change with a group of control households that never received the reports. On average, the HER households reduced their consumption in the months that followed by a little less than 2 percent. Not bad, but probably not enough to save the planet.

and also:

One problem with this approach is that we all define “better” differently, as a new study emphasizes. UCLA economists Dora Costa and Matthew Kahn analyzed the impact of an energy-conservation program in California that informed households about how their energy use compared with that of their neighbors. While the program succeeded in encouraging Democrats and environmentalists to lower their consumption, Republicans had the opposite reaction. When told of their relative thrift, they started cranking up the thermostat and leaving the lights on more often. … One explanation is that many conservatives don’t believe that burning energy harms the planet, so when they learn that they’re better than average, they become less vigilant about turning the lights off. That is, they’re simply moving closer to what they now know is the norm.

People are complex. It’s hard to know what matters to people, and it’s hard to know what additional information will do to a market. As Hayek pointed out, this is why central planning fails. The planners can’t know all.

And when we start nudging people, lots more chaos will emerge. Planners don’t become better by giving people opt-outs from their planning. And while nudging is better than authoritarianism, it’s still worse than a government which does only what it needs to do.

In the case of energy consumption, a market is emerging to help people see what drives their energy consumption and environmental impact. Better to let a thousand startups bloom, and let the creativity of engineers and those who care deeply help people drive down their electricity use. Everyone else will pay for their long-burning lights, and if electricity is fairly priced, then that’s their choice.

The paper is at “Energy Conservation “Nudges” and Environmentalist Ideology: Evidence from a Randomized Residential Electricity Field Experiment,” National Bureau of Economic Research.

The Liquids ban is a worse idea than you thought

According to new research at Duke University, identifying an easy-to-spot prohibited item such as a water bottle may hinder the discovery of other, harder-to-spot items in the same scan.

Missing items in a complex visual search is not a new idea: in the medical field, it has been known since the 1960s that radiologists tend to miss a second abnormality on an X-ray if they’ve found one already. The concept — dubbed “satisfaction of search” — is that radiologists would find the first target, think they were finished, and move on to the next patient’s X-ray.

Does the principle apply to non-medical areas? That’s what Stephen Mitroff, an assistant professor of psychology & neuroscience at Duke, and his colleagues set out to examine shortly after 2006, when the U.S. Transportation Security Administration banned liquids and gels from all flights, drastically changing airport luggage screens.

“The liquids rule has introduced a whole lot of easy-to-spot targets,” Mitroff said.

Duke University press release, Mitroff’s home page, full paper.