Threat Modeling and Risk Assessment

Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.

So first, what was said:

(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. :-)

I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. My issue is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.

I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.

First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.

And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)
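To make the triage handoff concrete, here is a minimal sketch of a bug-bar check in Python. The severity names and the `must_fix_before_ship` helper are my own illustration of the idea, not Microsoft’s actual bug bar (which is linked above and is far more detailed):

```python
# Hypothetical severity scale and ship bar, in the spirit of a security
# bug bar; the real Microsoft bug bar (linked above) is far more detailed.
SEVERITY_RANK = {"critical": 4, "important": 3, "moderate": 2, "low": 1}

def must_fix_before_ship(severity, bar="important"):
    """A bug at or above the bar blocks release; below it, it gets scheduled."""
    return SEVERITY_RANK[severity] >= SEVERITY_RANK[bar]
```

The point of a documented bar like this is exactly the one above: the security expert’s judgment is captured where triage happens, so security bugs compete with other bugs on a scale everyone in the room can see.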

My concern, and the reason I got into a back and forth, is I suspect that putting risk assessment into threat modeling keeps organizations from ensuring that expertise is in bug triage, and that’s risky.

(As usual, these opinions are mine, and may differ from those of my employer.)

[Updated to correct editing issues.]

Elevation of Privilege (Web Edition) Question

Someone wrote to me to ask:

A few cards are not straightforward to apply to a webapp situation (some seem to assume a proprietary client) – do you recommend discarding them, or have you perhaps thought of a way to rephrase them somehow?

For example:

“An attacker can make a client unavailable or unusable but the problem goes away when the attacker stops”

I don’t have a great answer, but I’m thinking someone else might have taken it on.

For Denial of Service attacks in the Microsoft SDL bug bar, we roughly break things down into a matrix of (server, client) × (persistent, temporary). That doesn’t seem right for web apps. Is there a better approach, and perhaps even one that can translate into some good threat cards?
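The matrix described above can be sketched as a lookup table. The severity labels and the `dos_severity` helper are my illustrative assumptions, not the SDL bar’s actual values; the gap the question points at is that a web app doesn’t fall cleanly into either “server” or “client”:

```python
# Sketch of the (server, client) x (persistent, temporary) DoS matrix
# described above; severity labels are illustrative assumptions.
DOS_MATRIX = {
    ("server", "persistent"): "important",
    ("server", "temporary"): "moderate",
    ("client", "persistent"): "moderate",
    ("client", "temporary"): "low",
}

def dos_severity(target, persistence):
    # A web app blurs the server/client split, so it falls through
    # the table -- which is exactly the problem raised above.
    return DOS_MATRIX.get((target, persistence), "unclassified")
```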

The 1st Software And Usable Security Aligned for Good Engineering (SAUSAGE) Workshop

National Institute of Standards and Technology
Gaithersburg, MD USA
April 5-6, 2011

Call for Participation

The field of usable security has gained significant traction in recent years, evidenced by the annual presentation of usability papers at the top security conferences, and security papers at the top human-computer interaction (HCI) conferences. Evidence is growing that significant security vulnerabilities are often caused by security designers’ failure to account for human factors. Despite growing attention to the issue, these problems are likely to continue until the underlying development processes address usable security.

See http://www.thei3p.org/events/sausage2011.html for more details.

6502 Visual Simulator

In 6502 visual simulator, Bunnie Huang writes:

It makes my head spin to think that the CPU from the first real computer I used, the Apple II, is now simulateable at the mask level as a browser plug-in. Nothing to install, and it’s Open-licensed. How far we have come…a little more than a decade ago, completing a project like this would have resulted in a couple PhDs being awarded, or regarded as trade secret by some big EDA vendor. This is just unreal…but very cool!

Visual6502.org, via Justin Mason

Use crypto. Not too confusing. Mostly asymmetric.

A little ways back, Gunnar Peterson said “passwords are like hamburgers, taste great but kill us in long run wean off password now or colonoscopy later.” I responded: “Use crypto. Not too confusing. Mostly asymmetric.” I’d like to expand on that a little. Not quite so much as Michael Pollan, but a little.

The first sentence, “use crypto” is a simple one. It means more security requires getting away from sending strings as a way to authenticate people at a distance. This applies (obviously) to passwords, but also to SSNs, mother’s “maiden” names, your first car, and will apply to biometrics. Sending a string which represents an image of a fingerprint is no harder to fake than sending a password. Stronger authenticators will need to involve an algorithm and a key.
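As a minimal sketch of “an algorithm and a key” replacing a sent string, here is a challenge-response exchange using only Python’s standard library. The function names are my own invention; the point is that the secret never crosses the wire, only a fresh proof of possession. (This version is symmetric; the post argues below for mostly asymmetric, where a signature would replace the HMAC.)

```python
import hashlib
import hmac
import secrets

# Shared key established out of band; it is never sent over the wire.
KEY = secrets.token_bytes(32)

def server_challenge():
    # A fresh random nonce per attempt makes replayed responses useless.
    return secrets.token_bytes(16)

def client_response(key, challenge):
    # The client proves it holds the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(expected, response)
```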

The second, “not too confusing,” is a little more subtle, because there are layers of confusing. There’s developer confusion as the system is implemented: adding pieces, like captchas, without a threat model. There’s user confusion as to what program popped that demand for credentials, what site they’re connecting to, or what password they’re supposed to use. There’s also confusion about what makes a good password when one site demands no fewer than 10 characters and another insists on no more. But regardless, it’s essential that a strong authentication system be understood by at least 99% of its users, and that the authentication is either mutual or resistant to replay, reflection and man-in-the-middle attacks. In this, “TOFU” is better than PKI. I prefer to call TOFU “persistence” or “key persistence.” This is in keeping with Pollan’s belief that things with names are better than things with acronyms.
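Key persistence can be sketched in a few lines: remember the first fingerprint you see for a peer, and alarm if it ever changes. The `KeyPersistence` class below is a hypothetical illustration of the idea, not any particular product’s implementation:

```python
import hashlib

class KeyPersistence:
    """Trust-on-first-use: remember a peer's key fingerprint, alarm on change."""

    def __init__(self):
        self.known = {}  # host -> hex fingerprint (would be on disk in practice)

    def check(self, host, public_key_bytes):
        fp = hashlib.sha256(public_key_bytes).hexdigest()
        if host not in self.known:
            self.known[host] = fp       # first use: persist the key
            return "trusted-first-use"
        if self.known[host] == fp:
            return "match"
        return "mismatch"               # key changed: possible man-in-the-middle
```

This is the SSH model: no certificate authority, just a memory of who you’ve talked to before, which is why a name like “persistence” describes it better than an acronym.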

Finally, “mostly asymmetric.” There are three main building blocks in crypto: one-way functions, symmetric ciphers, and asymmetric ciphers. Asymmetric systems are those with two mathematically related keys, only one of which is kept secret. These are better because forgery attacks are harder: only one party holds a given key. (Systems that use one-way functions can also deliver this property.) There are a few reasons to avoid asymmetric ciphers, mostly having to do with the compute capabilities of really small devices, like a smartcard, or very power-limited devices, like pacemakers.
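Since one-way functions can also give the “only one party holds the key” property, here is a toy Lamport one-time signature built from nothing but a hash function. This is a sketch for intuition, not production crypto: a real deployment would use a vetted library, and a Lamport key must never sign two different messages.

```python
import hashlib
import secrets

# Toy Lamport one-time signature: asymmetric-style authentication built
# only from a one-way function. Never sign two messages with one key.

def keygen():
    # Private key: two random 32-byte values per digest bit.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    # Public key: their hashes; publishing it reveals nothing useful.
    pk = [[hashlib.sha256(half).digest() for half in pair] for pair in sk]
    return sk, pk

def _bits(message):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(message, sig, pk):
    return all(
        hashlib.sha256(s).digest() == pk[i][b]
        for i, (s, b) in enumerate(zip(sig, _bits(message)))
    )
```

Only the signer can produce the preimages, so a forger would have to invert the hash, which is exactly the property that makes sending a static string look so weak by comparison.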

So there you have it: Use crypto. Not too confusing. Mostly asymmetric.

Hacker Hide and Seek

Core Security’s Ariel Waissbein has been building security games for a while now. They were kind enough to send a copy of their “Exploit” game after I released Elevation of Privilege. [Update: I had confused Ariel Futoransky and Ariel Waissbein, because Waissbein wrote the blog post. Sorry!] At Defcon, he and his colleagues will be running a more capture-the-flag sort of game, titled “Hide and seek the backdoor:”

For starters, a backdoor is said to be a piece of code intentionally added to a program to grant remote control of the program — or the host that runs it – to its author, that at the same time remains difficult to detect by anybody else.

But this last aspect of the definition actually limits its usefulness, as it implies that the validity of the backdoor’s existence is contingent upon the victim’s failure to detect it. It does not provide any clue at all into how to create or detect a backdoor successfully.

A few years ago, the CoreTex team did an internal experiment at Core and designed the Backdoor Hiding Game, which mimics the old game Dictionary. In this new game, the game master provides a description of the functionalities of a program, together with the setting where it runs, and the players must then develop programs that fulfill these functionalities and have a backdoor. The game master then mixes all these programs with one that he developed and has no backdoors, and gives these to the players. Then, the players must audit all the programs and pick the benign one.

First, I think this is great, and I look forward to seeing it. I do have some questions. What elements of the game can we evaluate and how? A general question we can ask is “Is the game for fun or to advance the state of the art?” (Both are ok and sometimes it’s unclear until knowledge emerges from the chaos of experimentation.) His blog states “We discovered many new hiding techniques,” which is awesome. Games that are fun and advance the state of the art are very hard to create. It’s a seriously cool achievement.

My next question is, how close is the game to the reality of secure software development? How can we transfer knowledge from one to the other? The rules seem to drive backdoors into most code (assuming they all work, (n-1)/n of the programs are backdoored). That’s a much higher incidence of backdoors than exists in the wild. I’m assuming that the code will all be custom, and thus short enough to create and audit in a game, which also leads to a higher concentration of backdoors per line of code. That different concentration will reward different techniques from those that could scale to a million lines of code.

More generally, do we know how to evaluate hiding techniques? Do hackers playing a game create the same sort of backdoors as disgruntled employees or industrial spies? Because of this contest and the Underhanded C Contests, we have two corpuses of backdoored code. However, I’m not aware of any corpus of deployed backdoor code which we could compare.

So anyway, I look forward to seeing this game at Defcon, and in the future, more serious games for information security.

Previously, I’ve blogged about the Underhanded C contest here and here.

SOUPS Keynote & Slides

This week, the annual Symposium on Usable Privacy and Security (SOUPS) is being held on the Microsoft campus. I delivered a keynote, entitled “Engineers Are People Too:”

In “Engineers Are People, Too,” Adam Shostack will address an often invisible link in the chain between research on usable security and privacy and delivering that usability: the engineer. All too often, engineers are assumed to have infinite time and skills for usability testing and iteration. They have time to read papers, adapt research ideas to the specifics of their product, and still ship cool new features. This talk will bring together lessons from enabling Microsoft’s thousands of engineers to threat model effectively, share some new approaches to engineering security usability, and propose new directions for research.

A fair number of people have asked for the slides, and they’re here: Engineers Are People Too.

Lady Ada books opening May 11

Ada’s Technical Books is Seattle’s only technical book store located in the Capitol Hill neighborhood of Seattle, Washington. Ada’s specifically carries new, used, & rare books on Computers, Electronics, Physics, Math, and Science as well as hand-picked inspirational and leisure reading, puzzles, brain teasers, and gadgets geared toward the technically minded customer.

From the store’s blog, “Grand Opening: June 11th”

I’ve been helping David and Danielle a little with book selection because they’re good folks and I love great bookstores. I encourage Seattle readers to stop by.

Facebook, Here’s Looking at You Kid

The last week and a bit has been bad to Facebook. It’s hard to recall what it was that triggered the avalanche of stories. Maybe it was the flower diagram we mentioned. Maybe it was the New York Times interactive graphic of just how complex it is to set privacy settings on Facebook:

facebook-privacy.jpg

Maybe it was Zuckerberg calling people who trust him “dumb fucks,” or the irony of him telling a journalist that “Having two identities for yourself is an example of a lack of integrity.” Or maybe it was the irony that telling people you believe in privacy while calling them dumb fucks is, really, a better example of a lack of integrity than having two identities.

Maybe it was the Facebook search (try ‘my dui’), or Facebook: The privatization of our Privates and Life in the Company Town. Maybe it was getting on CNN that helped propel it.

It all generated some great discussion like danah boyd’s Facebook and “radical transparency” (a rant). It also generated some not so great ideas like “Poisoning The Well – A Response To Privacy Concerns… ” and “How to protect your privacy from Facebook.” These are differently wrong, and I’ll address them one at a time. First, poisoning the well. I’m a big fan of poisoning the wells of mandatory data collectors. But the goal of Facebook is to connect and share. If you have to poison the data you’re trying to share with your friends, the service is fundamentally broken. Similarly, if you’re so scared of their implicit data collection that you use a different web browser to visit their site, and you only post information you’re willing to see made public, you might as well use more appropriate and specialized sites like Flickr, LinkedIn, Okcupid, Twitter or XBox Live. (I think that covers all the main ways people use Facebook.)

But Facebook’s problems aren’t unique. We’ve heard them before, with sites like Friendster, MySpace, Tribe and Orkut. All followed the same curve of rise, pollution and fall that Facebook is going to follow. It’s inevitable and inherent in the attempt to create a centralized technical implementation of all the myriad ways in which human beings communicate.

Play it Sam…once more, for old time’s sake

I think there are at least four key traps for every single-operator, all-purpose social network.

  1. Friend requests The first big problem is that as everyone you’ve ever had a beer with, along with that kid who beat you up in 3rd grade sends you a friend request, the joy of ‘having lots of friends’ is replaced with the burden of managing lots of ‘friends.’ And as the network grows, so does the burden. Do you really know what that pyronut from college chemistry is up to? Do you want to have to judge the meaning of a conversation in light of today’s paranoia? This leads us to the next problem:
  2. Metaphors Facebook uses two metaphors for relationships: friend and network. Both are now disconnected from their normal English meanings. An f-friend is not the same as a real friend. You might invite a bunch of friends over for drinks. Would you send the same invite to your f-friends list? Similarly, if I were to join Facebook today, I could join a Microsoft network, because I work there (although I’m not speaking for them here). Now, in the time that Facebook has been open to the world, lots of people have gained and lost Microsoft email addresses. Some have been full time employees. Some have been contractors of various types. Some have been fired. Is there a process for managing that? Maybe; we have a large HR department, but I have no idea. One key point is that membership in an f-network is not the same as membership in a real network. The meaning of the words evolves through practice and use. But there’s another issue with metaphors as made concrete through the technical decisions of Facebook programmers: there aren’t enough of them. I think “fans” are now also available as an official metaphor, but what about salesguy-you-met-at-a-conference-who-won’t-stop-bugging-you? The technical options don’t match the nuance with which social beings handle these sorts of questions, and even if they did, telling a computer all that is too much of a bother. (See the chart above for an attempt to make it do something related.)
  3. Privacy means many things Privacy means different things to different people. Even the same person at different times wants very different things, and the costs of figuring out what they will want in some unforeseen future is too high. So privacy issues will keep acting as a thorn in the side for social network systems, and worse for centralized ones.
  4. Different goals Customers and the business have different desires for the system. Customers want fast, free, comprehensive, private, and easy to use. They don’t want to worry about losing their jobs or not getting one. They don’t want to worry about stalkers. They don’t want their sweetie to look over their shoulder and see an ad for diamond rings after talking to their friends about engagement. But hiring managers want to see that embarrassing thing you just said. (Hello, revenue model, although Facebook has not, as far as I know, tapped this one yet.) Stalkers are heavy users you can show ads to. Advertisers want to show those diamond ring ads. Another example of this is the demand to use your real name. Facebook’s demand that you use your real name is in contrast to 4 of the 5 alternatives up there. Nicknames, pseudonyms, handles, and tags are all common all over the web, because, in fact, separating our identities is a normal activity. This is an idea that I talk about frequently. But it’s easier to monetize you if Facebook has your real name.

So I’m shocked, shocked to discover that Facebook is screwed up. A lot of other shocked people are donating to Diaspora ($172,000 has been pledged against their $10,000 goal. There’s interesting game theory about commitment, delivery on those pledges, and whether they should just raise a professional round of VC, but this post is already long.) There’s also Appleseed: A Privacy-Centric Facebook Slayer With Working Code.

Now, before I close, I do want to say that I see some of this as self-inflicted, but the underlying arc doesn’t rely on Zuckerberg. It’s not about the folks who work for Zuckerberg, who, for all I know are the smartest, nicest, best looking folks anywhere. It’s about the fundamental model of centralized, all-purpose social networks being broken.

To sum it all up, I’m gonna hand the microphone to Rick:

If you don’t get off that site, you’ll regret it. Maybe not today, maybe not tomorrow, but soon and for the rest of your life. Last night we said a great many things. You said I was to do the thinking for both of us. Well, I’ve done a lot of it since then, and it all adds up to one thing: you’re getting off that Facebook. Now, you’ve got to listen to me! You have any idea what you’d have to look forward to if you stayed here? Nine chances out of ten, we’d both wind up with our privacy in ruins. Isn’t that true, Louie?

Capt. Renault: I’m afraid that Major Zuckerberg will insist.