“Think Like an Attacker” is an opt-in mistake

I’ve repeatedly spoken out against “think like an attacker.”

Now I’m going to argue from authority. In this long article, “The Obama Doctrine,” the President of the United States says “The degree of tribal division in Libya was greater than our analysts had expected.”

So let’s think about that statement and what it means. First, it means that the multi-billion dollar analytic apparatus of the United States made a mistake, a serious one about which the President cares, because it impacted his foreign policy. Second, that mistake was about how people think. Third, that group of people was a society, and one that has interacted with the United States since, oh, I don’t know, someone wrote words like “From the halls of Montezuma to the shores of Tripoli.” (And dig the Marines, kickin’ it old skool with that video.) Fourth, it was not a group that attempted to practice operational security in any way.

So if we consider that the analytical capability of the US can get that wrong, do you really want to try to think like Anonymous, think like Unit 61398, like Unit 8200? Are you going to do this perfectly, or are there chances to make mistakes? Alternatively, do you want to require everyone who threat models to know how attackers think? Understanding how other people think and prioritize requires a great deal of work. There are entire fields, like anthropology and sociology, dedicated to doing it well. Should we start our defense by reading books on the motivational structures of the PLA or the IDF?

The simple fact is, you don’t need to. You can start from what people are building or deploying. (I wrote a book on how.) The second simple fact is that repeating that phrase upsets people. When I first joined Microsoft, I used that phrase. One day, a developer grabbed me after a meeting, and politely told me that he didn’t understand it. Oh, wait, this was Microsoft in 2006. He told me I was a fucking idiot and I should give useful advice. After a bit more conversation, he also told me that he had no idea how the fuck an attacker thought, and if I thought he had time to read a book to learn about it, I could write the goddamned features customers pay for while he read.

Every time someone tells me to think like an attacker, I think about that conversation. I appreciate the honesty that the fellow showed, if not his manner. But (as Dave Weinstein pointed out), “A generalized form of this would be ‘Stop giving developers completely un-actionable “guidance.”’” Now, Dave and I worked together at Microsoft, so maybe there’s a similar experience in his past.

Now, this does not mean that we don’t need to pay attention to what real attackers do. It means that we don’t need to walk a mile in their shoes to defend effectively against them.

Previously: “Think Like An Attacker?,” “The Discipline of ‘Think Like an Attacker’,” and “Think Like An Attacker? Flip that advice!”

Secure Code is Hard, Let’s Make it Harder!

I was confused about why Dan Kaminsky would say CVE-2015-7547 (a bug in glibc’s DNS handling) creates network attack surface for sudo. Chris Rohlf kindly sorted me out by mentioning that there’s now a -h (host) option to sudo, of which I was unaware.

I had not looked at sudo in depth for probably 20 years, and I’m shocked to discover that it has a -e option to invoke an editor, a -p option whose prompt is run through format-string-style expansion, and a -a option to allow the invoker to select the authentication type (?!?!)
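
To make the sprawl concrete, here’s a rough sketch of invocations using those flags. Treat it as illustrative, not authoritative: option availability and exact behavior vary by sudo version and platform, and the file name and prompt here are made up.

  • sudo -e /etc/hosts  (invoke an editor on a file, per the caller’s editor settings)
  • sudo -p "password for %u on %h: " id  (a caller-supplied prompt, expanded with escape sequences)
  • sudo -a passwd id  (select the BSD authentication type, on platforms that support it)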

It’s now been a full twenty years that I’ve been professionally involved in analyzing source code. (These Security Code Review Guidelines were obviously not started in August.) We know that all code has bugs, and that more code is strongly correlated with more bugs. I first saw this in the intro to the first edition of Cheswick and Bellovin. I feel a little bit like yelling “you kids get off my lawn,” but really, the Unix philosophy of “do one thing well” was successful for a reason. The goal of sudo is to let a user cross a privilege boundary. It should be insanely simple. [Updated to add: Justin Cormack mentions that OpenBSD went from sudo to doas on this basis.]
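
By way of contrast, here’s a sketch (illustrative only; see doas.conf(5) for the real syntax and options) of what “insanely simple” can look like: a complete /etc/doas.conf on OpenBSD that lets members of the wheel group run commands as root can be a single line.

  permit :wheel

One rule, one privilege boundary, and very little parsing to get wrong.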

It’s not. Not that ssh is simple either, but it isolates complexity, and helps us model attack surface more simply.

Some of the new options make sense, and support security feature sets not present previously. Some are just dumb.

As I wrote this, Dan popped up to say that it also parses /etc/hostname to help it log. Again, do one thing well. Syslog should know what host it’s on, what host it’s transmitting from, and what host it’s receiving from.

It’s very, very hard to make code secure. When we add in insane options to code, we make it even harder. Sometimes, other people ask us to make the code less secure, and while I’ve already said what I want to say about the FBI asking Apple to fix their mistake by writing new code, this is another example of shooting ourselves in our feet.

Please stop making it harder.

[Update: related: “Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation,” abstracted by “the morning paper,” which examines an approach to re-implementing TLS. Thanks to Steve Bellovin for the pointer.]

Open Letters to Security Vendors

John Masserini has a set of “open letters to security vendors” on Security Current.

Everyone involved in product or sales at a security startup should read them. John provides insight into what it’s like to be pitched by too many startups, and offers a level of transparency that’s sadly hard to find. Personally, while I was at a large company, I learned a great deal about what happens when you’re pitched, and I can vouch for the realities he puts forth. The sooner you understand those realities and incorporate them into your thinking, the more successful we’ll all be.

After meeting with dozens of startups at Black Hat a few weeks ago, I’ve realized that the vast majority of the leaders of these new companies struggle to articulate the value their solutions bring to the enterprise.

Why does John’s advice make us all more successful? Because each organization that follows it moves towards a more efficient state, for themselves and for the folks who they’re pitching.

Getting more efficient means you waste less time per prospect. When you focus on qualified leads who care about the problem you’re working on, you get more sales per unit of time. What’s more, by not wasting the time of those who won’t buy, you free up their time for talking to those who might have something to provide them. (One banker I know said “I could hire someone full-time to reject startup pitches.” Think about what that means for your sales cycle for a moment.)

Go read “An Open Letter to Security Vendors” along with part 2 (why sales takes longer) and part 3 (the technology challenges most startups ignore).

Towards a model of web browser security

One of the values of models is that they help us engage with areas where the detail would otherwise be overwhelming. For example, C is a model of how a CPU works that allows engineers to defer certain details to the compiler, rather than writing in assembler. It empowers software developers to write for many CPU architectures at once. Many security flaws happen in areas the models simplify. For example, what if the stack grew away from the stack pointer, rather than towards it? The layout of the stack is a detail that is modeled away.

Information security is a broad industry, requiring and rewarding specialization. We often see intense specialization, which naturally builds expertise in silos, and tribal separation of knowledge creates more gaps. At the same time, there is a stereotype that generalists end up focused on policy, or “risk management,” where a lack of technical depth can hide. (That’s not to say that all risk managers are generalists or that there isn’t real technical depth to some of them.)


If we want to enable more security generalists, and we want those generalists to remain grounded, we need to make it easier to learn about new areas. Part of that is good models, part of that is good exercises that appropriately balance challenge to skill level, part of that is the availability of mentoring, and I’m sure there are other parts I’m missing.

I enjoyed many things about Michal Zalewski’s book “The Tangled Web.” One thing I wanted was a better way to keep track of who attacks whom, to help me contextualize and remember the attacks. But such a model is not trivial to create. This morning, motivated by a conversation between Trey Ford and Chris Rohlf, I decided to take a stab at drafting a model for thinking about where the trust boundaries exist.

The words which open my threat modeling book are “all models are wrong, some models are useful.” I would appreciate feedback on this model. What’s missing, and why does it matter? What attacks require showing a new element in this software model?

Browser security
[Update 1 — please leave comments here, not on Twitter]

  1. Fabio Cerullo suggests the layout engine. It’s not clear to me what additional threats can be seen if you add this explicitly, perhaps because I’m not an expert.
  2. Fernando Montenegro asks about network services such as DNS, which I’m adding and also about shared trust (CA Certs), which overlap with a question about supply chain from Mayer Sharma.
  3. Chris Rohlf points out the “web browser protection profile.”

I could be convinced otherwise, but think that the supply chain is best addressed by a separate model. Having a secure installation and update mechanism is an important mitigation of many types of bugs, but this model is for thinking about the boundaries between the components.

In reviewing the protection profile, I see that it mentions the following threats:

Threat                        Comment
Malicious updates             Out of scope (supply chain)
Malicious/flawed add-on       Out of scope (supply chain)
Network eavesdropping/attack  Not showing all the data flows, for simplicity (is this the right call?)
Data access                   Local storage is shown

Also, the protection profile is 88 pages long, and hard to engage with. While it provides far more detail and allows me to cross-check the software model, it doesn’t help me think about interactions between components.

[End update 1]

Adam’s new startup

A conversation with an old friend reminded me that there may be folks who follow this blog, but not the New School blog.

Over there, I’ve posted “Improving Security Effectiveness” about leaving Microsoft to work on my new company:

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

and about a job opening, “Seeking a technical leader for my new company:”

We have a new way to measure security effectiveness, and want someone who’ll drive delivery of that technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in the building of the company. The right person will understand that such a “green field” represents both opportunity and the need to build infrastructure as we grow.

This person might be a CTO, or they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management, and reports.

Wassenaar Restrictions on Speech

[There are broader critiques by Katie Moussouris of HackerOne at “Legally Blind and Deaf – How Computer Crime Laws Silence Helpful Hackers” and Halvar Flake at “Why changes to Wassenaar make oppression and surveillance easier, not harder.” This post addresses the free speech issue.]

During the first crypto wars, cryptography was regulated under the US ITAR regulations as a dual-use item, and to export strong crypto (and thus, as an economic matter, to include it in a generally available commercial or open source product) was effectively impossible.

A principle of our successful work to overcome those restrictions was that code is speech. Thus restrictions on code are restrictions on speech. The legal incoherence of the regulations was brought to an unavoidable crisis by Phil Karn, who submitted both the book Applied Cryptography and a floppy disk with the source code from the book for an export license. The book received a license; the disk did not. This was obviously incoherent and Kafka-esque. At the time, American acceptance of incoherent, Kafka-esque rules was in much shorter supply.

Now, the new Wassenaar rules appear to contain restrictions on the export of a different type of code (page 209, category 4, see after the jump). (FX drew attention to this issue in this tweet. [Apparently, I wrote this in Jan, 2014, and forgot to hit post.])

A principle of our work was that code is speech. Thus restrictions on code are restrictions on speech. (Stop me if you’ve heard this one before.) I put forth several tweets that contain PoC I was able to type from memory, each of which, I believe, in principle, could violate the Wassenaar rules. For example:

  • rlogin -froot $target
  • echo wiz | nc $target 25

It would be nice if someone would file the paperwork to export them on paper.

In these tweets, I’m not speaking for my employer or yours. I am speaking for poor, tired, and hungry cryptographers, yearning to breathe free, and to not live on Groundhog Day.


Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now, I’ve been one-upped, and, depending on audience, I have a new line to use.

Last week, on Veracode’s blog, Pete Chestna provided the perfect flip of “think like an attacker” to re-frame problems for security people. It’s “think like a developer.” If you, oh great security guru, cannot think like a developer, for heaven’s sake, stop asking developers to think like attackers.

CERT, Tor, and Disclosure Coordination

There’s been a lot said in security circles about a talk on Tor being pulled from Black Hat. (Tor’s comments are also worth noting.) While that story is interesting, I think the bigger story is the lack of infrastructure for disclosure coordination.

Coordinating information about vulnerabilities is a socially important function. Coordination makes it possible for software creators to create patches and distribute them so that those with the software can most easily protect themselves.

In fact, the function is so important that it was part of why CERT was founded: to “coordinate response to internet security incidents.” Now, incidents have always been more than just vulnerabilities, but vulnerability coordination was a big part of what CERT did.

The trouble is, it’s not a big part anymore. [See below for a clarification added August 21.] Now “The CERT Division works closely with the Department of Homeland Security (DHS) to meet mutually set goals in areas such as data collection and mining, statistics and trend analysis, computer and network security, incident management, insider threat, software assurance, and more.” (Same “about” link as before.)

This isn’t the first time that I’ve heard about an issue where CERT wasn’t able to coordinate disclosure. I want to be clear, I’m not critiquing CERT or their funders here. They’ve set priorities and strategies in a way that makes sense to them, and as far as I know, there’s been precious little pressure to have a vuln coordination function.

It’s time we as a security community talked about that infrastructure, not as a flamewar over coordination, responsibility, or don’t-blow-your-0day, but as a practical question: for those who would like to coordinate, how should they do so?

Heartbleed is an example of what can happen with an interesting vulnerability and incomplete coordination. (Thanks to David Mortman for pointing that out in reviewing a draft.) Systems administrators woke up Monday morning to incomplete information, a marketing campaign, and a slew of packages that hadn’t been updated.


Disclosure coordination is hard to do. There’s a lot of project management and cross-organizational collaboration. Doing that work requires a special mix of patience and urgency, along with an unusual mix of technical skill with diplomatic communication. Those requirements mean that the people who do the work are rare and expensive. What’s more, it’s helpful to have these people seated at a relatively neutral party. (In the age of governments flooding money into cyberwar, it’s not clear if there are any truly neutral parties available. Some disclosure coordination is managed by big companies with a stake in the issue, which is helpful, but it’s hard for researchers to predict and depend upon.) These issues are magnified because those who are great at vulnerability research rarely spend time developing those skills, and so an intermediary is even more valuable.

Even setting that aside, is there anyone who’s stepping up to the plate to help researchers effectively coordinate and manage disclosure?

[Update: I had a good conversation with some people at CERT/CC, in which I learned that they still do coordination work, especially in cases where the vendor is dealing with an issue for the first time, the issue is multi-vendor, or where there’s a conflict between vendor and researcher. The best way to reach them is via their vulnerability reporting form, but if that doesn’t work, you can also email cert@cert.org or call their hotline (the number is on their contact us page).]

[Update 2, Dec 2014: Allen Householder has a really interesting model of “Vulnerability Coordination and Concurrency Modeling” on the CERT/CC blog, which shows some of the complexity involved, and touches on topics above. Oh, and some really neat Petri-dish state models.]

Seattle event: Ada’s Books

[Image: event poster, Shostack threat modeling at Ada’s Books]

For Star Wars Day, I’m happy to share this event poster for my talk at Ada’s Books in Seattle:
Technical Presentation: Adam Shostack shares Threat Modeling Lessons with Star Wars.

This will be a less technical talk with plenty of discussion and interactivity, drawing on some of the content from “Security Lessons from Star Wars,” adapted for a more general audience.