Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with “think like an attacker”: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now, I’ve been one-upped, and, depending on audience, I have a new line to use.

Last week, on Veracode’s blog, Pete Chestna provided the perfect flip of “think like an attacker” to re-frame problems for security people: “think like a developer.” If you, oh great security guru, cannot think like a developer, for heaven’s sake, stop asking developers to think like attackers.

CERT, Tor, and Disclosure Coordination

There’s been a lot said in security circles about a talk on Tor being pulled from Black Hat. (Tor’s comments are also worth noting.) While that story is interesting, I think the bigger story is the lack of infrastructure for disclosure coordination.

Coordinating information about vulnerabilities is a socially important function. Coordination makes it possible for software creators to create patches and distribute them so that those with the software can most easily protect themselves.

In fact, the function is so important that it was part of why CERT was founded: to “coordinate response to internet security incidents.” Now, incidents have always been more than just vulnerabilities, but vulnerability coordination was a big part of what CERT did.

The trouble is, it’s not a big part anymore. [See below for a clarification added August 21.] Now “The CERT Division works closely with the Department of Homeland Security (DHS) to meet mutually set goals in areas such as data collection and mining, statistics and trend analysis, computer and network security, incident management, insider threat, software assurance, and more.” (Same “about” link as before.)

This isn’t the first time that I’ve heard about an issue where CERT wasn’t able to coordinate disclosure. I want to be clear, I’m not critiquing CERT or their funders here. They’ve set priorities and strategies in a way that makes sense to them, and as far as I know, there’s been precious little pressure to have a vuln coordination function.

It’s time we as a security community talked about that infrastructure: not as a flamewar over coordination, responsibility, or “don’t blow your 0day,” but rather, for those who would like to coordinate, how should they do so?

Heartbleed is an example of what can happen with an interesting vulnerability and incomplete coordination. (Thanks to David Mortman for pointing that out in reviewing a draft.) Systems administrators woke up Monday morning to incomplete information, a marketing campaign, and a slew of packages that hadn’t been updated.


Disclosure coordination is hard to do. There’s a lot of project management and cross-organizational collaboration. Doing that work requires a special mix of patience and urgency, along with an unusual mix of technical skill and diplomatic communication. Those requirements mean that the people who do the work are rare and expensive. What’s more, it helps to have these people seated at a relatively neutral party. (In the age of governments flooding money into cyberwar, it’s not clear if there are any truly neutral parties available. Some disclosure coordination is managed by big companies with a stake in the issue, which is helpful, but it’s hard for researchers to predict and depend upon.) These issues are magnified because those who are great at vulnerability research rarely invest the time to develop coordination skills, and so an intermediary is even more valuable.

Even setting that aside, is there anyone who’s stepping up to the plate to help researchers effectively coordinate and manage disclosure?

[Update: I had a good conversation with some people at CERT/CC, in which I learned that they still do coordination work, especially in cases where the vendor is dealing with an issue for the first time, the issue is multi-vendor, or there’s a conflict between vendor and researcher. The best way to reach them is via their vulnerability reporting form, but if that doesn’t work, you can also email cert@cert.org or call their hotline (the number is on their contact us page).]

Seattle event: Ada’s Books

[Event poster: Shostack threat modeling at Ada’s Books]

For Star Wars day, I’m happy to share this event poster for my talk at Ada’s Books in Seattle: “Technical Presentation: Adam Shostack shares Threat Modeling Lessons with Star Wars.”

This will be a less technical talk with plenty of discussion and interactivity, drawing on some of the content from “Security Lessons from Star Wars,” adapted for a more general audience.

RSA: Time for some cryptographic dogfood

One of the most effective ways to improve your software is to use it early and often.  This used to be called eating your own dogfood, which is far more evocative than the alternatives. The key is that you use the software you’re building. If it doesn’t taste good to you, it’s probably not customer-ready.  And so this week at RSA, I think more people should be eating the security community’s cryptographic dogfood.

As I’ve evangelized the use of crypto for meeting up at RSA, I’ve encountered many problems, such as choice of tool, availability of tools across a set of mobile platforms, cost of entry, etc. Each of these is predictable, but with dogfooding (forcing myself to ask everyone why they want to use an easily wiretapped protocol) the issues stand out, and the companies that will be successful will start thinking about ways to overcome them.

So this week, as you prep for RSA, spend a few minutes setting up an encrypted communications tool. The worst that can happen is you’re no more secure than you were before you read this post.

What to do for randomness today?

In light of recent news, such as “FreeBSD washing Intel-chip randomness” and “alleged NSA-RSA scheming,” what advice should we give engineers who want to use randomness in their designs?


My advice for software engineers building things used to be to rely on the OS to get it right. That defers the problem to a small number of smart people. Is that still the right advice, despite recent news? The right answer is pretty clearly not that a normal software engineer building in Ruby on Rails or ASP.NET should go and roll their own. Nor can it be that they spend days wading through debates. Experts ought to be providing guidance on what to do.
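In concrete terms, “rely on the OS” is a one-liner in most environments. A minimal Python illustration, reading from the kernel’s CSPRNG:

```python
import os

# os.urandom asks the operating system's CSPRNG for bytes:
# /dev/urandom on most Unixes, CryptGenRandom on Windows.
key = os.urandom(32)  # 256 bits of key material
```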

Is the right thing to hash together the OS and something else? If so, precisely what something else?
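I don’t know the answer, but here is one minimal sketch of the “hash together” idea, assuming SHA-256 as the combiner; the second input below is purely illustrative, not a recommendation:

```python
import hashlib
import os
import time

def mixed_random_bytes(n: int = 32) -> bytes:
    """Fold a second, independent input into the OS CSPRNG output.

    If either input is unpredictable to the attacker, the hash output is.
    The extra input below is illustrative only; a real design would need
    a second source chosen with much more care.
    """
    h = hashlib.sha256()
    h.update(os.urandom(64))                        # the OS CSPRNG
    h.update(str(time.perf_counter_ns()).encode())  # illustrative second source
    return h.digest()[:n]
```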

TSA Approach to Threat Modeling, Part 3

It’s often said that the TSA’s approach to threat modeling is to just prevent yesterday’s threats. Well, on Friday it came out that:

So, here you see my flight information for my United flight from PHX to EWR. It is my understanding that this is similar to digital boarding passes issued by all U.S. airlines; so the same information is on a Delta, US Airways, American and all other boarding passes. I am just using United as an example. I have X’d out any information that you could use to change my reservation. But it’s all there, PNR, seat assignment, flight number, name, etc. But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number means the number of beeps. 1 beep no Pre-Check, 3 beeps yes Pre-Check. On this trip as you can see I am eligible for Pre-Check. Also this information is not encrypted in any way.

Security Flaws in the TSA Pre-Check System and the Boarding Pass Check System.
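To make the “not encrypted in any way” point concrete, here’s a sketch that slices the mandatory header of an IATA Bar Coded Boarding Pass (BCBP). The field widths follow the published BCBP layout, but the sample data is invented, and the exact position of the Pre-Check digit (which sits in the airline’s variable-length section) is deliberately left out:

```python
# The BCBP payload is fixed-width ASCII; no decryption is needed, only slicing.
# Sample data is invented; field widths follow the published IATA layout.
sample = (
    "M1"                    # format code 'M', one flight leg
    + "DOE/JOHN".ljust(20)  # passenger name (20 chars, space-padded)
    + "E"                   # electronic ticket indicator
    + "ABC123 "             # PNR / record locator (7 chars)
    + "PHX" + "EWR"         # origin and destination airports
    + "UA "                 # operating carrier (3 chars)
    + "0123 "               # flight number (5 chars)
    + "204"                 # date of flight (Julian day of year)
    + "Y"                   # compartment code
    + "012A"                # seat assignment (4 chars)
    + "0001 "               # check-in sequence number (5 chars)
    + "1"                   # passenger status
    + "00"                  # size of the variable-length field (hex)
)

fields = {
    "name":   sample[2:22],
    "pnr":    sample[23:30],   # enough to look up or change a booking
    "route":  sample[30:36],
    "flight": sample[36:44],   # carrier plus flight number
    "seat":   sample[48:52],
}
for name, value in fields.items():
    print(f"{name:>6}: {value!r}")  # readable with any barcode-scanner app
```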

So, apparently, they’re not even preventing yesterday’s threats, ones they knew about before the recent silliness or the older silliness. (See my 2005 post, “What Did TSA Know, and When Did They Know It?”)

What are they doing? Comments welcome.

Please Kickstart Elevation of Privilege

Jan-Tilo Kirchhoff asked on Twitter for a printer (ideally in Germany) to print up some Elevation of Privilege card sets. Deb Richardson then suggested Kickstarter.

I wanted to comment, but this doesn’t fit in a tweet, so I’ll do it here.

I would be totally excited for someone to Kickstart production of Elevation of Privilege. Letting other people make it, and make money on it, was an explicit goal of the Creative Commons license (CC-BY-3.0) that we selected when we released the game.

So why don’t I just set up a Kickstarter? In short, I think it’s a Caesar’s wife issue. I think there’s a risk that it looks bad for me to decide to release things that Microsoft paid me to do, and then make money off of them.

Now, that impacts me. It doesn’t impact anyone else. I would be totally excited for someone else to go make some cards and sell them. I would promote such a thing, and help people find whatever lovely capitalist is doing it. I would be happy to support a Kickstarter campaign, and would be willing to donate some of my time and energy with things like signing decks, doing a training session, or whatnot. I even have some joker cards that you could produce as a special bonus item.

So, if you think Elevation of Privilege is cool, please, go take advantage of the license we released it under, and go make money with it.

[Update: I don’t have exact numbers, but have seen quotes for quantities around 5,000 decks, production might be around $2-3 a deck. At smaller quantities, you might end up around $5-7 a deck. YMMV. So a Kickstarter in the range of $5-10K would probably be workable, although you’d certainly want to think about shipping and handling costs.]

Does 1Password Store Passwords Securely?

In “‘Secure Password Managers’ and ‘Military-Grade Encryption’ on Smartphones: Oh, Really?” Andrey Belenko and Dmitry Sklyarov examine a wide range of password management tools. This is admirable work, and I’m glad Black Hat provided a forum for it. However, as a user of 1Password, I was concerned to read the following about that program:

However, because PKCS7 padding is used when encrypting database encryption key, it is possible to verify password just by computing KEK (using MD5 hash function), decrypting last block of encrypted database key, and checking if it equals to 16 bytes with value 0x10 (this will be the PKCS7-compliant padding when encrypting data whose length is exactly N blocks of underlying cipher). Thus, very fast password recovery attack is possible, requiring one MD5 computation and one AES trial decryption per password.

As a result of this design issue, the rate of password guessing against passwords stored by 1Password for iPhone is estimated (by Belenko and Sklyarov) at 15 million guesses per second. This is the 3rd worst performance in a group of 11, and 3,000-fold worse than the best performer in the table (Strip Lite Password Manager, at 5,000 guesses per second).
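To make the quoted attack concrete, here is a minimal sketch of the per-guess check (using pycryptodome). The exact derivation inputs are my assumptions; the one-MD5-plus-one-AES-block shape of the test comes straight from the paper:

```python
import hashlib

from Crypto.Cipher import AES  # pycryptodome

def guess_is_correct(guess: bytes, salt: bytes,
                     prev_block: bytes, last_block: bytes) -> bool:
    # One MD5 yields a 16-byte candidate KEK (the cheap step the paper flags).
    # How the password and salt are combined is an assumption for illustration.
    kek = hashlib.md5(guess + salt).digest()
    # One AES-CBC decryption of the final block of the encrypted database key.
    plaintext = AES.new(kek, AES.MODE_CBC, iv=prev_block).decrypt(last_block)
    # A correct key exposes a full block of PKCS#7 padding: sixteen 0x10 bytes.
    return plaintext == b"\x10" * 16
```

At one MD5 and one AES block per candidate, millions of guesses per second on commodity hardware is unsurprising.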

The folks at Agile Bits, makers of 1Password, took the time to blog about the paper, and accept the implications of the work in “Strong Security Requires Strong Passwords.”

However, I think they misunderstand the paper and the issue when they write:

The main reason the password can be determined so quickly is because 6 characters provide relatively few possible password combinations.

I believe the main reason for the issue is the way in which 1Password has chosen to store passwords. They allude to this further down in the post when they write:

With that said, as Dmitry and Andrey point out, 1Password could do more to slow the password discovery process, thereby making it take even longer. For example, on the desktop (both Windows and Mac), 1Password uses PBKDF2 to significantly slow down attackers. Currently this is not available on iOS as we needed to support older devices. The next major release of 1Password will only support iOS 5 and at that time we will be incorporating these additional defences.

I still don’t think that’s an adequate response. Several of their competitors on iOS use their own implementation of PBKDF2. Now that’s a risky thing to do, and I’m aware that it might be expensive to implement and test, and the impact of a bug in such code might reasonably be pretty high. So it’s not a slam dunk to do so, in the general case. But in this case, it appears that Apple ships an open source version of PBKDF2: http://opensource.apple.com/source/CommonCrypto/CommonCrypto-55010/Source/API/CommonKeyDerivation.c. So the risk is far lower than creating a new implementation. Therefore, I think Agile Bits should change the way it validates passwords, and incorporate PBKDF2 into all versions of 1Password soon.
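For contrast, here is roughly what the better derivation looks like using Python’s standard library; the iteration count and key length are illustrative, not 1Password’s actual parameters:

```python
import hashlib

def derive_kek(password: bytes, salt: bytes, iterations: int = 10_000) -> bytes:
    # PBKDF2 makes each guess cost `iterations` HMAC computations
    # instead of a single MD5, slowing brute force proportionally.
    return hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen=16)
```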

They also state:

1Password for iPhone will no longer allow items to be protected by just the PIN code. The PIN code was meant for less sensitive items and we always expected the Master Password protection to be enabled on important items. To simplify things, all items will be protected with the Master Password, just like on iPad, Mac, and Windows.

I understand the choice to do this, and move to stronger protection for all items. At the same time, I like the PIN-only protection for my low-value passwords. Entering passwords on a phone is a pain. It’s not an easy trade-off, and a 4-digit PIN is always going to be easy to brute force with modern CPUs, however much salting and stretching is applied. I’m capable of making a risk management decision, but I also understand that many people may feel that Agile Bits wouldn’t offer the choice if it wasn’t secure. I respect the choice that Agile Bits is making to force stronger protection on all their customers.
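The PIN arithmetic is worth making explicit. Assuming a deliberately slow, stretched check (the per-guess cost below is an invented but generous figure):

```python
pin_space = 10 ** 4        # every 4-digit PIN
seconds_per_guess = 0.1    # assume 100 ms of key stretching per attempt
worst_case_minutes = pin_space * seconds_per_guess / 60
print(worst_case_minutes)  # ~16.7 minutes to try them all, offline
```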

In summary, 1Password is not storing passwords as securely as they could, and if your phone is stolen, or your phone backups are accessed, those choices leave your passwords at more risk than competing products. I don’t think the fixes to this require iOS5. I think the right thing for Agile Bits to do is to ship an update with better protection against brute force attacks for all their customers, and to do so soon.

[Update 3 (April 10) Agile Bits has released an update which implements 10K PBKDF2 iterations.]

[Update 2: 1Password has now stated that they will do this, adding PBKDF2 to all versions for iOS, which had been the only platform impacted by these issues. They have a hard balance of speed versus security to make, and I encourage them to think it through and test appropriately, rather than rushing a bad fix. ]

[Updated to clarify that this applies only to the iPhone version of 1Password.]

Browser Privacy & Fingerprinting

Ivan Szekely writes in email:

A team of young researchers – my colleagues – at the Budapest University of Technology and Economics developed a cross-browser fingerprinting system in order to demonstrate the weaknesses of the most popular browsers. Taking Panopticlick’s idea as a starting point, they developed a new, browser-independent fingerprinting algorithm and started to build a system-fingerprint database for further analysis. The description of the method and the analysis of the fingerprints can be read at http://pet-portal.eu/articles/view/37/2012-02-20-User-Tracking-on-the-Web-via-Cross-Browser-Fingerprinting.php (the site is trilingual; if articles in another language appear on your screen, click on the English flag).

By now the team has developed a new version of the fingerprinting system and is working on an effective method to prevent fingerprinting. In order to fine-tune the defense against fingerprinting, my colleagues need your feedback. Please click on http://fingerprint.pet-portal.eu, run a few tests, and share your comments and suggestions with the developers.

Please take a second to visit http://fingerprint.pet-portal.eu and help them and us understand browser fingerprinting.

The output of a threat modeling session, or the creature from the bug lagoon

Wendy Nather has continued the Twitter conversation, which is now a set of blog posts. (My comments are in “threat modeling and risk assessment,” and hers in “That’s not a bug, it’s a creature.”)

I think we agree on most things, but I sense a little semantic disconnect in some things that she says:

The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

I consider the word “bug” to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you’re doing whiteboard threat modeling, the output should be “things not to do going forward.”

As a result, you’re stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn’t have needed. I consider this a to-do list, not a bug list.
(“That’s not a bug, it’s a creature,” Wendy Nather)

I don’t disagree here, but want to take it one step further. I see a list of “things not to do going forward” and a to-do list as an excellent start for a set of tests to confirm that those things happen or don’t. So you file bugs, and those bugs get tracked and triaged and ideally closed as resolved or fixed when you have a test that confirms that they ain’t happening. If you want to call this something else, that’s fine; tracking and managing bugs can be too much work. The key to me is that the “things not to do” sink in, and the to-do list gets managed in some good way.
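As a purely hypothetical illustration of turning a “thing not to do” into a tracked, closable bug, consider a regression test like this, where the module, function, and exception names are all invented for the example:

```python
import pytest

from myservice.client import AuthenticationError, connect  # hypothetical API

def test_unauthenticated_connection_is_rejected():
    # The threat model said "don't accept unauthenticated connections."
    # Filing that as a bug plus this test means the mitigation stays fixed:
    # if it ever regresses, the test fails and the bug reopens.
    with pytest.raises(AuthenticationError):
        connect("localhost", port=8443, credentials=None)
```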

And again, I agree with her points about probability, and her point that probability lurks in people’s minds is an excellent one, worth repeating:

the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don’t come out and say, “But who would want to do that?” or “Come on, we’re not a bank or anything,” they’ll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.

I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.