Diagrams in Threat Modeling

When I think about how to threat model well, one of the most important elements is how much people need to keep in their heads: the cognitive load, if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted,” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons that I’m fond of diagrams is that they allow threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, whether it’s the pleasure of a hobby like knitting or the care a chef puts into preparing a fine meal.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboards

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspectives/beliefs. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work; it may create conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?” (A toy sketch of this idea follows this list.)
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed at poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
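
To make “baseline set of threats” concrete, here’s a toy Python sketch of STRIDE-per-element enumeration over a hypothetical diagram. The element names are invented and the mapping is the generic published STRIDE-per-element chart; it illustrates the approach, not the Microsoft tool’s actual rule engine.

    # Toy sketch: enumerate baseline threats per diagram element,
    # STRIDE-per-element style. Diagram and mapping are illustrative.
    ELEMENT_THREATS = {
        "external entity": ["Spoofing", "Repudiation"],
        "process": ["Spoofing", "Tampering", "Repudiation",
                    "Information disclosure", "Denial of service",
                    "Elevation of privilege"],
        "data store": ["Tampering", "Information disclosure",
                       "Denial of service"],  # stores acting as logs also get Repudiation
        "data flow": ["Tampering", "Information disclosure",
                      "Denial of service"],
    }

    diagram = [("Browser", "external entity"), ("Web app", "process"),
               ("Orders DB", "data store"), ("Browser to Web app", "data flow")]

    for element, kind in diagram:
        for threat in ELEMENT_THREATS[kind]:
            print(f"{element}: consider {threat}")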

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for the purposes of teasing things out a bit. Operationally, they’re probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?

The Evolution of Apple’s Differential Privacy

Bruce Schneier comments on “Apple’s Differential Privacy:”

So while I applaud Apple for trying to improve privacy within its business models, I would like some more transparency and some more public scrutiny.

Do we know enough about what’s being done? No, and my bet is that Apple doesn’t know precisely what they’ll ship, and isn’t answering deep technical questions so as not to misspeak. I know that when I was at Microsoft, details like that got adjusted as a bigger pile of real data from real customer use informed things. I saw some really interesting shifts surprisingly late in the dev cycle of various products.

I also want to challenge the way Matthew Green closes: “If Apple is going to collect significant amounts of new data from the devices that we depend on so much, we should really make sure they’re doing it right — rather than cheering them for Using Such Cool Ideas.”

But that is a false dichotomy, and would be silly even if it were not. It’s silly because we can’t be sure if they’re doing it right until after they ship it, and we can see the details. (And perhaps not even then.)

But even more important, the dichotomy is not “are they going to collect substantial data or not?” They are. The value organizations get from being able to observe their users is enormous. As product managers observe what A/B testing in their web properties means to the speed of product improvement, they want to bring that same ability to other platforms. Those that learn fastest will win, for the same reasons that first to market used to win.

Next, are they going to get it right on the first try? No. Almost guaranteed. Software, as we learned a long time ago, has bugs. As I discussed in “The Evolution of Secure Things:”

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Green (and Schneier) are right to be skeptical, and may even be right to be cynical. We should not lose sight of the fact that Apple is spending rare privacy engineering resources to do better than Microsoft. Near as I can tell, this is an impressive delivery on the commitment to be the company that respects your privacy, and I say that believing that there will be both bugs and design flaws in the implementation. Green has an impressive record of finding and calling Apple (and others) on such, and I’m optimistic he’ll have happy hunting.

In the meantime, we can, and should, cheer Apple for trying.

Sneak peeks at my new startup at RSA


Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long term (like a home) or liquid investments (like stocks and bonds) and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of control investments, and we’re building the first.

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, leave a comment, or reach out over LinkedIn.

Kale Caesar

According to the CBC: “McDonald’s kale salad has more calories than a Double Big Mac.”


In a quest to reinvent its image, McDonald’s is on a health kick. But some of its nutrient-enhanced meals are actually comparable to junk food, say some health experts.

One of its new kale salads has more calories, fat and sodium than a Double Big Mac.

Apparently, McDonald’s is there not to braise kale, but to bury it in cheese and mayonnaise. And while that’s likely mighty tasty, it’s not healthy.

At a short-term level, this looks like good product management. Execs want salads on the menu? Someone’s being measured on sales of new salads, and is loading them up with tasty, tasty fats. It’s effective at associating a desirable property of salad with the product.

Longer term, not so much. It breeds cynicism. It undercuts the ability of McDonald’s to ever change its image, or to convince people that its food might be a healthy choice.

Open Letters to Security Vendors

John Masserini has a set of “open letters to security vendors” on Security Current.

Everyone involved in product or sales at a security startup should read them. John provides insight into what it’s like to be pitched by too many startups, and provides a level of transparency that’s sadly hard to find. Personally, I learned a great deal about what happens when you’re pitched while I was at a large company, and I can vouch for the realities he puts forth. The sooner you understand those realities and incorporate them into your thinking, the more successful we’ll all be.

After meeting with dozens of startups at Black Hat a few weeks ago, I’ve realized that the vast majority of the leaders of these new companies struggle to articulate the value their solutions bring to the enterprise.

Why does John’s advice make us all more successful? Because each organization that follows it moves towards a more efficient state, for themselves and for the folks who they’re pitching.

Getting more efficient means you waste less time per prospect. When you focus on qualified leads who care about the problem you’re working on, you get more sales per unit of time. What’s more, by not wasting the time of those who won’t buy, you free up their time for talking to those who might have something to provide them. (One banker I know said “I could hire someone full-time to reject startup pitches.” Think about what that means for your sales cycle for a moment.)

Go read “An Open Letter to Security Vendors” along with part 2 (why sales takes longer) and part 3 (the technology challenges most startups ignore).

The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.


As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks which were created, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. Those designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. And I think it’s important that we fix a hole in the argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

RSA: Time for some cryptographic dogfood

One of the most effective ways to improve your software is to use it early and often.  This used to be called eating your own dogfood, which is far more evocative than the alternatives. The key is that you use the software you’re building. If it doesn’t taste good to you, it’s probably not customer-ready.  And so this week at RSA, I think more people should be eating the security community’s cryptographic dogfood.

As I evangelize the use of crypto for arranging meetings at RSA, I’ve encountered many problems: choice of tool, availability of the tool across a set of mobile platforms, cost of entry, and so on. Each of these is predictable, but with dogfooding, forcing myself to ask everyone why they want to use an easily wiretapped protocol, the issues stand out, and the companies that will be successful will start thinking about ways to overcome them.

So this week, as you prep for RSA, spend a few minutes to get set up with an encrypted communications tool. The worst that can happen is that you’re no more secure than you were before you read this post.

On Bitcoin

There’s an absolutely fascinating interview with Adam Back: “Let’s Talk Bitcoin Adam Back interview.”

For those of you who don’t know Adam, he created Hashcash, which is at the core of Bitcoin’s proof of work.
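
The details differ (hashcash defined a stamp format, and Bitcoin double-SHA-256es block headers), but the core asymmetry is the same: expensive to mint, one hash to verify. A minimal sketch in Python, with illustrative payload and difficulty:

    # Minimal hashcash-style proof of work: ~2**bits hashes to mint,
    # one hash to verify.
    import hashlib
    from itertools import count

    def mint(data: bytes, bits: int = 16) -> int:
        """Grind nonces until SHA-256(data + nonce) falls below the target."""
        target = 1 << (256 - bits)
        for nonce in count():
            digest = hashlib.sha256(data + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(data: bytes, nonce: int, bits: int = 16) -> bool:
        digest = hashlib.sha256(data + str(nonce).encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))

    stamp = b"adam@example.com:2014-02-07"   # hypothetical stamp payload
    nonce = mint(stamp)                      # ~65,000 hashes expected
    print(nonce, verify(stamp, nonce))       # verification costs one hash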

Two elements I’d like to call attention to in particular:

First, there’s an interesting contrast between Adam’s opinions and Glenn Fleishman’s opinions in “On the Matter of Why Bitcoin Matters.” In particular, Glenn seems to think that transaction dispute should be in the protocol, and Adam thinks it should be layered on in some way. (Near as I can tell, Glenn is a very smart journalist, but he’s not a protocol designer.)

Secondly, Adam discusses the ways in which a really smart fellow, deeply steeped in the underlying technologies, can fail to see how all the elements of Bitcoin happened to combine into a real ecash system. Like Adam, I was focused on properties other systems have and Bitcoin does not, and so was unexcited by it. There’s an interesting lesson in humility there for me.

Also interesting: “How the Bitcoin protocol works.”

Getting Ready for a Launch

I’m getting ready to announce a new project that I’ve been working on for quite a while.

As I get ready, I was talking to friends in PR and marketing, and they were shocked and appalled that I don’t have a mailing list. It was a little like telling people in security that you don’t fuzz your code.

Now, I don’t know a lot about marketing, but I do know that look which implies table stakes. So I’ve set up a mailing list. I’ve cleverly named it “Adam Shostack’s New Thing.” It’ll be the first place to hear about the new things I’m creating — books, games or anything else.

People who sign up will be the first to hear my news.

[Update: Some people are asking why I don’t just use Twitter or blogs? I plan to, but there are people who’d like more concentrated news in their inbox. Cool. I can help them. And much as I love Twitter, it’s easy for a tweet to be lost, and easy to fall into the trap of retweeting yourself every hour to overcome that. That’s annoying to your followers who see you repeating yourself.]

What Price Privacy, Paying For Apps edition

There’s a new study on what people would pay for privacy in apps. As reported by Techflash:

A study by two University of Colorado Boulder economists, Scott Savage and Donald Waldman, found the average user would pay varying amounts for different kinds of privacy: $4.05 to conceal contact lists, $2.28 to keep their browser history private, $2.12 to eliminate advertising on apps, $1.19 to conceal personal locations, $1.75 to conceal the phone’s ID number and $3.58 to conceal the contents of text messages.

Those numbers seem small, but they’re in the context of app pricing, which is generally a few bucks. If those numbers combine linearly, people being willing to pay up to $10 more for a private version is a very high valuation. (Of course, the numbers will combine in ways that are not strictly rational. Consumers satisfice.)
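
For concreteness, here’s what a strictly linear combination of the reported figures comes to (my arithmetic, not the study’s):

    # Linear sum of the reported willingness-to-pay figures (USD).
    figures = [4.05, 2.28, 2.12, 1.19, 1.75, 3.58]
    print(round(sum(figures), 2))         # 14.97
    print(round(sum(figures) - 2.12, 2))  # 12.85, excluding ad removal,
                                          # which isn't strictly privacy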

A quick skim of the article leads me to think that they didn’t estimate app maker benefit from these privacy changes. How much does a consumer contact list go for? (And how does that compare to the fines for improperly revealing it?) How much does an app maker make per person whose eyeballs they sell to show ads?

Replacing Flickr?

So Flickr has launched a new redesign, and it’s crowded, jumbled and slow. With its overlays, its fade-ins and loads, and its unmoving side and top bars, Flickr’s design takes center stage, elbowing aside the photos that I’m there to see.

So I’m looking for a new community site where the photo I upload is the photo they display without overlays and with enough whitespace that people can consider it as a photograph. I’d like a site where I can talk with other photographers and get feedback, and where they’re happy to let me pay for multiple accounts for the various and separate ways I want to present my work.

500px looks like an interesting possibility, but they seem really heavy on the gamification, showing you “affection,” views, likes and favorites for every photographer. Also, while their ToS are relatively easy to read, ToS;DR gives them a D.

What else should I be looking at?

A Quintet of Facebook Privacy Stories

It’s common to hear that Facebook use means that privacy is over, or no longer matters. I think that perception is deeply wrong. It’s based in the superficial notion that people making different or perhaps surprising privacy tradeoffs are never aware of what they’re doing, or that they have no regrets.

Some recent stories that I think come together to tell a meta-story of privacy:

  • Steven Levy tweeted: “What surprised me most in my Zuck interview: he says the thing most on rise is ‘sharing with smaller groups.'” (Tweet edited from 140-speak). I think that sharing with smaller groups is a pretty clear expression that privacy matters to Facebook users, and that as Facebook becomes more a part of people’s lives, the way they use it will continue to mature. For example, it turns out:
  • “71% of Facebook Users Engage in ‘Self-Censorship’” reports on a study of people typing into the Facebook status box, and not hitting post. In part this may be because people are ‘internalizing the policeman’ that Facebook imposes:
  • “Facebook’s Online Speech Rules Keep Users On A Tight Leash.” This isn’t directly a privacy story, but one important facet of privacy is our ability to explore unpopular ideas. If our ability to do so in the forum in which people talk to each other is inhibited by private contract and opaque rules, then our ability to explore and grow in the privacy which Facebook affords to conversations is inhibited.
  • Om Malik: “Why Facebook Home bothers me: It destroys any notion of privacy.” An interesting perspective. I think Facebook users still care about privacy, but they will have trouble articulating how, or taking action to preserve the values of privacy they care about.

The Psychology of Password Managers

As I think more about the way people are likely to use a password manager, I think there are real problems with the way master passwords are set up. As I write this, I’m deeply aware that I’m risking going into a space of “it’s logical that” without proper evidence.

Let’s start from the way most people will likely come to a password manager. They’ll be in an exploratory mood, and while they may select a good password, they may also select a simple one that’s easy to remember. That password, initially, will not be protecting very much, and so people may be tempted to pick one that’s ‘appropriate’ for what’s being protected.

Over time, the danger is that they will not think to update that password and improve it, but their trust in the password manager will increase. As their trust increases, the number of passwords that they’re protecting with a weak master password may also increase.

Now we get to changing the master password. Assuming that people can find it, how often will someone wake up and say “hey, I should change my master password?” Changing a master password is also scary. Now that I’ve accumulated hundreds of passwords, what happens if I forget my new password? (As it turns out, 1Password makes weekly backups of my password file, but I wasn’t aware of that. Also, what happens to the old files if I change my master password? Am I now exposed for both? That’s ok in the case that I’m changing out of caution, less ok if I’m changing because I think my master was exposed.)

Perhaps there’s room for two features here: first, that on password change, people could choose to have either master password unlock things. (Encrypt the master key with keys derived from both the old & new masters. This is no less secure than having backups available, and may address a key element of psychological acceptability.) You’d have to communicate that this will work, and let people choose. User testing that text would be fascinating.
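
Here’s a minimal sketch of that first feature, assuming a design where the database is encrypted under a random vault key that is itself wrapped by password-derived keys. The KDF parameters, library choice, and passwords are my illustration, not any product’s actual design:

    # Sketch: wrap one random vault key under keys derived from both the old
    # and new master passwords, so either one unlocks during rotation.
    import os, base64
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.fernet import Fernet

    def wrap(vault_key: bytes, password: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                         iterations=600_000)
        k = base64.urlsafe_b64encode(kdf.derive(password.encode()))
        return Fernet(k).encrypt(vault_key)

    vault_key = Fernet.generate_key()   # encrypts the actual password database
    records = []                        # (salt, wrapped key), stored alongside
    for pw in ("old master", "new master"):   # hypothetical passwords
        salt = os.urandom(16)
        records.append((salt, wrap(vault_key, pw, salt)))
    # Unlocking tries the typed password against each record; both recover the
    # same vault key, so nothing in the database is re-encrypted at change time.

Since the copy wrapped under the old password is exactly what a retained backup would expose anyway, this seems no less secure than keeping backups, which is the point above.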

A second feature might be to let people know how long they’ve been using the same master password, and gently encourage them to change it. This one is tricky mostly because I have no idea if it’s a good idea. Should you pick one super-strong master and use it for decades? Is there value to changing it now and again? Where could we seek evidence with which to test our instincts? What happens to long term memory as people age? Does muscle memory cause people to revert their passwords? (I know I’ve done it.) We could use a pattern like the gold bar to unobtrusively prompt.

A last element that might improve the way people use master passwords would be better browser integration. Having just gone to check, I was surprised how many sites my browser is tracking. Almost all of them were low value, and all of them now are. But why do we have two places that can store this, especially when one is less secure than the other? A browser API that allows a password manager to say “I’ve got this one” would be a welcome improvement.

Studying these ideas and seeing which ones are invalidated by data gathering would be cool. Talking to people about how they use their password managers would also be interesting work. As Bonneau has shown, the quest to replace passwords is going to be arduous. Learning how to better live with what we have seems useful.