The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

[Image: The Evolution of Useful Things book cover]

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork's evolution from a two-tined tool useful for holding meat as it was cut to the four-tined fork we have today. Petroski documents the many variants of the fork, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. Later designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It's a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It's about the constant imperfection of products, and how engineering is a response to perceived imperfections. It's about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

As Petroski writes:

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you've previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I'm a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post "Luke in the Sky with Diamonds," talking about threat intelligence, and he gets bonus points for the crossover title. And I think it's important that we fix a hole in his argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

Adam’s new startup

A conversation with an old friend reminded me that there may be folks who follow this blog, but not the New School blog.

Over there, I’ve posted “Improving Security Effectiveness” about leaving Microsoft to work on my new company:

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

and about a job opening, “Seeking a technical leader for my new company:”

We have a new way to measure security effectiveness, and want someone who'll drive delivery of that technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in the building of the company. The right person will understand that such a "green field" represents both opportunity and the need to build infrastructure as we grow.

This person might be a CTO, they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management and reports.

An Infosec lesson from the “Worst Play Call Ever”

It didn't take long for the Seahawks' game-losing pass to get a label.

But as Ed Felten explains, there’s actually some logic to it, and one of his commenters (Chris) points out that Marshawn Lynch scored in only one of his 5 runs from the one yard line this season. So, perhaps in a game in which the Patriots had no interceptions, it was worth the extra play before the clock ran out.

We can all see the outcome, and we judge, post-facto, the decision on that.

[Image: "Worst play call ever" headline]

In security, we almost never see an outcome so closely tied to a decision. As Jay Jacobs has pointed out, we live in a wicked environment. Unfortunately, we’re quick to snap to judgement when we see a bad outcome. That makes learning harder. Also, we don’t usually get a chance to see the logic behind a play and assess it.

If only we had a way to shorten those feedback loops, then maybe we could assess what the worst play call in infosec might be.

And in fact, despite my use of snarky linkage, I don’t think we know enough to judge Sony or ChoicePoint. The decisions made by Spaltro at Sony are not unusual. We hear them all the time in security. The outcome at Sony is highly visible, but is it the norm, or is it an outlier? I don’t think we know enough to know the answer.

Hindsight is 20/20 in football. It's easy to focus in on a single decision. But the lesson from Moneyball, and the lesson from Pete Carroll when asked if he'd make the same call again, is "Really, with no second thoughts or hesitation in that at all." He has a system, and it got the Seahawks to the very final seconds of the game. And then.

One day, we’ll be able to tell management “our systems worked, and we hit really bad luck.”

[Please keep comments civil, like you always do here.]

iOS Subject Key Identifier?

I'm having a problem where the "key identifier" displayed on my iOS device does not match the key fingerprint on my server. In particular, I run:

% openssl x509 -in keyfile.pem -fingerprint -sha1

and I get a 20-byte hash. I also have a 20-byte hash on my phone, but it is not that hash value. I am left wondering if this is a crypto usability fail, or an attack.

Should I expect the output of that openssl invocation to match certificate details on iOS, or is that a different hash? What options to openssl should produce the result I see on my phone?

[Update: it also does not match the output, or a trivial subset of the output, of

% openssl x509 -in keyfile.pem -fingerprint -sha256

% openssl x509 -in keyfile.pem -fingerprint -sha512
]


[Update 2: iOS displays the "X509v3 Subject Key Identifier", and you can ask openssl for that via -text, e.g., openssl x509 -in pubkey.pem -text. Thanks to Ryan Sleevi for pointing me down that path.]
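For anyone hitting the same mismatch, here's a minimal sketch of pulling both values out of the same certificate so you can see which one your device is showing. It assumes Python and the pyca/cryptography package (my choice for illustration, not anything iOS uses), and it reuses the pubkey.pem filename from the update above:

# Sketch: compare a certificate's SHA-1 fingerprint with its X509v3
# Subject Key Identifier. Assumes the pyca/cryptography package and a PEM
# certificate named pubkey.pem (the filename from the update above).
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("pubkey.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

ski = cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
print("SHA-1 fingerprint:     ", cert.fingerprint(hashes.SHA1()).hex(":"))
print("Subject Key Identifier:", ski.value.digest.hex(":"))

The two will generally differ: the fingerprint hashes the whole DER-encoded certificate, while the Subject Key Identifier is usually derived from the public key alone, which is why the value on the phone and the value on the server don't line up.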

Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now, I’ve been one-upped, and, depending on audience, I have a new line to use.

Last week, on Veracode's blog, Pete Chestna provided the perfect flip of "think like an attacker" to re-frame problems for security people. It's "think like a developer." If you, oh great security guru, cannot think like a developer, for heaven's sake, stop asking developers to think like attackers.

RSA: Time for some cryptographic dogfood

One of the most effective ways to improve your software is to use it early and often.  This used to be called eating your own dogfood, which is far more evocative than the alternatives. The key is that you use the software you’re building. If it doesn’t taste good to you, it’s probably not customer-ready.  And so this week at RSA, I think more people should be eating the security community’s cryptographic dogfood.

As I evangelize the use of crypto for meeting up at RSA, I've encountered many problems, such as choice of tool, availability of tools across a set of mobile platforms, cost of entry, etc.  Each of these is predictable, but with dogfooding — forcing myself to ask everyone why they want to use an easily wiretapped protocol — the issues stand out, and the companies that will be successful will start thinking about ways to overcome them.

So this week, as you prep for RSA, spend a few minutes to get some encrypted communications tool. The worst that can happen is you’re no more secure than you were before you read this post.

What to do for randomness today?

In light of recent news, such as “FreeBSD washing Intel-chip randomness” and “alleged NSA-RSA scheming,” what advice should we give engineers who want to use randomness in their designs?

My advice for software engineers building things used to be to rely on the OS to get it right. That defers the problem to a small number of smart people. Is that still the right advice, despite recent news? The right advice is pretty clearly not that a normal software engineer building in Ruby on Rails should go and roll their own. It also cannot be that they spend days wading through debates. Experts ought to be providing guidance on what to do.
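For concreteness, here's a minimal sketch of what that default "rely on the OS" advice looks like in code. It assumes Python's standard library (my example language, not a prescription for the Rails developer above):

# Minimal sketch of "rely on the OS": the secrets module (Python 3.6+) is a
# thin wrapper over os.urandom, i.e., the operating system's CSPRNG.
import secrets

session_token = secrets.token_urlsafe(32)  # e.g., a web session identifier
symmetric_key = secrets.token_bytes(32)    # e.g., a 256-bit key for a cipher
print(session_token)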

Is the right thing to hash together the OS and something else? If so, precisely what something else?

The Psychology of Password Managers

As I think more about the way people are likely to use a password manager, I think there are real problems with the way master passwords are set up. As I write this, I'm deeply aware that I'm risking going into a space of "it's logical that" without proper evidence.

Let’s start from the way most people will likely come to a password manager. They’ll be in an exploratory mood, and while they may select a good password, they may also select a simple one that’s easy to remember. That password, initially, will not be protecting very much, and so people may be tempted to pick one that’s ‘appropriate’ for what’s being protected.

Over time, the danger is that they will not think to update that password and improve it, but their trust in the password manager will increase. As their trust increases, the number of passwords that they’re protecting with a weak master password may also increase.

Now we get to changing the master password. Assuming that people can find it, how often will someone wake up and say “hey, I should change my master password?” Changing a master password is also scary. Now that I’ve accumulated hundreds of passwords, what happens if I forget my new password? (As it turns out, 1Password makes weekly backups of my password file, but I wasn’t aware of that. Also, what happens to the old files if I change my master password? Am I now exposed for both? That’s ok in the case that I’m changing out of caution, less ok if I’m changing because I think my master was exposed.)

Perhaps there’s room for two features here: first, that on password change, people could choose to have either master password unlock things. (Encrypt the master key with keys derived from both the old & new masters. This is no less secure than having backups available, and may address a key element of psychological acceptability.) You’d have to communicate that this will work, and let people choose. User testing that text would be fascinating.
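To make the first idea concrete, here's a minimal sketch of wrapping a single random vault key under keys derived from both the old and the new master password, so either one can unlock it. This is my own illustration, not 1Password's actual design; it assumes Python with the pyca/cryptography package for AES-GCM:

# Sketch only, not 1Password's design: one random vault key, wrapped twice,
# once under a key derived from the old master password and once under a key
# derived from the new one, so either password can unlock the vault.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count here is illustrative only.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def wrap(vault_key: bytes, password: str) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    blob = AESGCM(derive_key(password, salt)).encrypt(nonce, vault_key, None)
    return {"salt": salt, "nonce": nonce, "blob": blob}

def unwrap(record: dict, password: str) -> bytes:
    key = derive_key(password, record["salt"])
    return AESGCM(key).decrypt(record["nonce"], record["blob"], None)

vault_key = os.urandom(32)  # the key that actually protects stored passwords
wrapped = [wrap(vault_key, "old master"), wrap(vault_key, "new master")]
assert unwrap(wrapped[0], "old master") == vault_key
assert unwrap(wrapped[1], "new master") == vault_key

Whether keeping the old password live is acceptable depends on why it was changed, which is exactly the caution-versus-exposure distinction above.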

A second feature might be to let people know how long they’ve been using the same master password, and gently encourage them to change it. This one is tricky mostly because I have no idea if it’s a good idea. Should you pick one super-strong master and use it for decades? Is there value to changing it now and again? Where could we seek evidence with which to test our instincts? What happens to long term memory as people age? Does muscle memory cause people to revert their passwords? (I know I’ve done it.) We could use a pattern like the gold bar to unobtrusively prompt.

A last element that might improve the way people use master passwords would be better browser integration. Having just gone to check, I was surprised how many sites my browser is tracking. Almost all of them were low value, and all of them now are. But why do we have two places that can store this, especially when one is less secure than the other? A browser API that allows a password manager to say "I've got this one" would be a welcome improvement.

Studying these ideas and seeing which ones are invalidated by data gathering would be cool. Talking to people about how they use their password managers would also be interesting work. As Bonneau has shown, the quest to replace passwords is going to be arduous. Learning how to better live with what we have seems useful.

1Password & Hashcat

The folks at Hashcat have some interesting observations about 1Password. The folks at 1Password have a response, and I think there’s all sorts of fascinating lessons here.

The crypto conversations are interesting, but at the end of the day, a lot of the security is unavoidably contributed by the strength of the master password. I'd like to offer up a simple contribution. AgileBits should make two non-cryptographic changes in addition to any crypto changes.

These relate to the human end of the issue, and how real humans make decisions. That is, picking a master password is a one-time event, and even if there's a strength meter, factors of memorability, typability, etc., all come into play when the user selects a password while first installing 1Password.

Those human factors are not good for security, but I think they’re addressable.

First, the master password entry screens should display the same password strength meter that's displayed everywhere else. It's all well and good to discuss in a blog post that people need strong master passwords, but the software should give regular feedback about the strength of that master password. Displaying a strength meter each time it's entered creates some small risk of information disclosure via shoulder-surfing, but it adds pressure to make the password stronger.
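As a rough illustration of the kind of feedback I mean, here's a sketch using the Python port of Dropbox's zxcvbn estimator. The choice of zxcvbn is my assumption for the example; I have no idea what 1Password uses internally:

# Sketch: give the user feedback on master password strength at entry time.
# Assumes the zxcvbn package (pip install zxcvbn); the 0-4 score could drive
# a meter in the UI.
from zxcvbn import zxcvbn

def strength_feedback(candidate: str) -> str:
    result = zxcvbn(candidate)
    labels = ["very weak", "weak", "fair", "strong", "very strong"]
    advice = result["feedback"]["warning"] or "looks reasonable"
    return f"{labels[result['score']]} ({advice})"

print(strength_feedback("correct horse battery staple"))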

Second, they should make it easier to change the master password. I looked around and couldn't figure out how to do so in a few minutes. [Update: It's in Preferences, Security. I thought I'd looked there; I may have missed it.]


If master passwords are so important, then it’s important for the software to help its customers get them right.

There’s an interesting link here to “Why Johnny Can’t Encrypt.” In that 1999 paper, Whitten and Tygar made the point that all the great crypto in PGP couldn’t protect its users if they didn’t make the right decisions, and making those decisions is hard.

In this case, the security of password vaults depends not only on the crypto, but also on the user interface. Figuring out the mental models that people have around password storage tools, and how the interface choices those tools make shape those mental models, is an important area, and deserves lots of careful attention.