TSA Approach to Threat Modeling, Part 3

It’s often said that the TSA’s approach to threat modeling is to just prevent yesterday’s threats. Well, on Friday it came out that:

So, here you see my flight information for my United flight from PHX to EWR. It is my understanding that this is similar to digital boarding passes issued by all U.S. airlines, so the same information is on a Delta, US Airways, American and all other boarding passes. I am just using United as an example. I have X’d out any information that you could use to change my reservation. But it’s all there: PNR, seat assignment, flight number, name, etc. But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number means the number of beeps. 1 beep, no Pre-Check; 3 beeps, yes Pre-Check. On this trip, as you can see, I am eligible for Pre-Check. Also, this information is not encrypted in any way.

Security Flaws in the TSA Pre-Check System and the Boarding Pass Check System.

So, apparently, they’re not even preventing yesterday’s threats, ones they knew about before the recent silliness or the older silliness. (See my 2005 post, “What Did TSA Know, and When Did They Know It?”)

What are they doing? Comments welcome.

Please Kickstart Elevation of Privilege

Jan-Tilo Kirchhoff asked on Twitter for a printer (ideally in Germany) to print up some Elevation of Privilege card sets. Deb Richardson then suggested Kickstarter.

I wanted to comment, but this doesn’t fit in a tweet, so I’ll do it here.

I would be totally excited for someone to Kickstart production of Elevation of Privilege. Letting other people make it, and make money on it, was an explicit goal of the Creative Commons license (CC-BY-3.0) that we selected when we released the game.

So why don’t I just set up a Kickstarter? In short, I think it’s a Caesar’s wife issue. I think there’s a risk that it looks bad for me to decide to release things that Microsoft paid me to do, and then make money off of them.

Now, that concern applies to me; it doesn’t apply to anyone else. I would be totally excited for someone else to go make some cards and sell them. I would promote such a thing, and help people find whatever lovely capitalist is doing it. I would be happy to support a Kickstarter campaign, and would be willing to donate some of my time and energy to things like signing decks, doing training sessions, or whatnot. I even have some joker cards that you could produce as a special bonus item.

So, if you think Elevation of Privilege is cool, please, go take advantage of the license we released it under, and go make money with it.

[Update: I don’t have exact numbers, but from quotes I’ve seen for quantities around 5,000 decks, production might run $2-3 a deck. At smaller quantities, you might end up around $5-7 a deck. YMMV. So a Kickstarter in the range of $5-10K would probably be workable, although you’d certainly want to think about shipping and handling costs.]

Does 1Password Store Passwords Securely?

In “‘Secure Password Managers’ and ‘Military-Grade Encryption’ on Smartphones: Oh, Really?” Andrey Belenko and Dmitry Sklyarov write quite a bit about a lot of password management tools. This is admirable work, and I’m glad BlackHat provided a forum for it. However, as a user of 1Password, I was concerned to read the following about that program:

However, because PKCS7 padding is used when encrypting database encryption key, it is possible to verify password just by computing KEK (using MD5 hash function), decrypting last block of encrypted database key, and checking if it equals to 16 bytes with value 0x10 (this will be the PKCS7-compliant padding when encrypting data whose length is exactly N blocks of underlying cipher). Thus, very fast password recovery attack is possible, requiring one MD5 computation and one AES trial decryption per password.

As a result of this design issue, the rate of password guessing against passwords [stored by 1Password for iPhone] is estimated (by Belenko and Sklyarov) at 15 million guesses per second. This is the 3rd-worst performance in a group of 11, and 3,000-fold worse than the best performer in the table (Strip Lite Password Manager, at 5,000 per second).
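To make the attack concrete, here is a minimal sketch of the per-guess check the paper describes, assuming the database key is encrypted with AES-CBC and ignoring details such as salting; the names and data layout are illustrative, not 1Password’s actual file format.

```python
# Sketch of the fast verification Belenko and Sklyarov describe: one MD5
# and one AES block decryption per password guess. Layout is hypothetical.
import hashlib
from Crypto.Cipher import AES  # pycryptodome

def guess_matches(guess: bytes, encrypted_db_key: bytes) -> bool:
    kek = hashlib.md5(guess).digest()  # 16 bytes: a valid AES-128 key
    # In CBC mode, decrypting the final block needs only the previous
    # ciphertext block (used as the IV) and the final block itself.
    prev_block, last_block = encrypted_db_key[-32:-16], encrypted_db_key[-16:]
    plaintext = AES.new(kek, AES.MODE_CBC, prev_block).decrypt(last_block)
    # A key whose length is an exact multiple of 16 gets a full PKCS#7
    # padding block: 16 bytes of 0x10. Seeing that block confirms the guess.
    return plaintext == b"\x10" * 16
```

Because each check costs one hash and one block decryption, it parallelizes trivially, which is how the millions-per-second figure arises.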

The folks at Agile Bits, makers of 1Password, took the time to blog about the paper, and accept the implications of the work in “Strong Security Requires Strong Passwords.”

However, I think they misunderstand the paper and the issue when they write:

The main reason the password can be determined so quickly is because 6 characters provide relatively few possible password combinations.

I believe the main reason for the issue is the way in which 1Password has chosen to store passwords. They allude to this further down in the post when they write:

With that said, as Dmitry and Andrey point out, 1Password could do more to slow the password discovery process, thereby making it take even longer. For example, on the desktop (both Windows and Mac), 1Password uses PBKDF2 to significantly slow down attackers. Currently this is not available on iOS as we needed to support older devices. The next major release of 1Password will only support iOS 5 and at that time we will be incorporating these additional defences.

I still don’t think that’s an adequate response. Several of their competitors on iOS use their own implementation of PBKDF2. Now, that’s a risky thing to do, and I’m aware that it might be expensive to implement and test, and the impact of a bug in such code might reasonably be pretty high. So it’s not a slam dunk to do, in the general case. But in this case, it appears that Apple ships an open source version of PBKDF2: http://opensource.apple.com/source/CommonCrypto/CommonCrypto-55010/Source/API/CommonKeyDerivation.c. So the risk is far lower than that of creating a new implementation. Therefore, I think Agile Bits should change the way it validates passwords, and incorporate PBKDF2 into all versions of 1Password soon.
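For illustration, here is a minimal sketch of that direction using Python’s standard library; the iteration count and salt handling are assumptions for the example, not Agile Bits’ actual parameters. (The Apple source file linked above implements the analogous CCKeyDerivationPBKDF routine.)

```python
# Derive the key-encryption key with salted, iterated PBKDF2 instead of a
# single MD5. Parameters below are illustrative.
import hashlib, os

def derive_kek(password: bytes, salt: bytes, iterations: int = 10000) -> bytes:
    # Each password guess now costs `iterations` HMAC computations, not one MD5.
    return hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen=16)

salt = os.urandom(16)  # stored in the clear alongside the encrypted database
kek = derive_kek(b"master password", salt)
```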

They also state:

1Password for iPhone will no longer allow items to be protected by just the PIN code. The PIN code was meant for less sensitive items and we always expected the Master Password protection to be enabled on important items. To simplify things, all items will be protected with the Master Password, just like on iPad, Mac, and Windows.

I understand the choice to do this, and move to stronger protection for all items. At the same time, I like the PIN-only protection for my low-value passwords. Entering passwords on a phone is a pain. It’s not an easy trade-off, and a 4-digit PIN is always going to be easy to brute force with modern CPUs, however much salting and stretching is applied. I’m capable of making a risk management decision, but I also understand that many people may feel that Agile Bits wouldn’t offer the choice if it wasn’t secure. I respect the choice that Agile Bits is making to force stronger protection on all their customers.
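Some back-of-the-envelope arithmetic shows why stretching can’t rescue a 4-digit PIN; the two guess rates below are simply reused from the paper’s table for scale.

```python
# Worst-case time to exhaust all 10,000 four-digit PINs at the paper's
# estimated rate for 1Password (15M/sec) and its best performer (5,000/sec).
pins = 10 ** 4
for rate in (15_000_000, 5_000):
    print(f"{rate:>12,} guesses/sec -> {pins / rate:.3f} seconds worst case")
```

Even the slowest rate in the table clears every possible PIN in two seconds.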

In summary, 1Password is not storing passwords as securely as it could, and if your phone is stolen, or your phone backups are accessed, those choices leave your passwords at more risk than competing products do. I don’t think the fixes to this require iOS 5. I think the right thing for Agile Bits to do is to ship an update with better protection against brute force attacks for all their customers, and to do so soon.

[Update 3 (April 10) Agile Bits has released an update which implements 10K PBKDF2 iterations.]

[Update 2: 1Password has now stated that they will do this, adding PBKDF2 to all versions for iOS, which had been the only platform impacted by these issues. They have a hard balance of speed versus security to make, and I encourage them to think it through and test appropriately, rather than rushing a bad fix. ]

[Updated to clarify that this applies only to the iPhone version of 1Password.]

Browser Privacy & Fingerprinting

Ivan Szekely writes in email:

A team of young researchers – my colleagues – at the Budapest University of Technology and Economics developed a cross-browser fingerprinting system in order to demonstrate the weaknesses of the most popular browsers. Taking Panopticlick’s idea as a starting point, they developed a new, browser-independent fingerprinting algorithm and started to build a system-fingerprint database for further analysis. The description of the method and the analysis of the fingerprints can be read at http://pet-portal.eu/articles/view/37/2012-02-20-User-Tracking-on-the-Web-via-Cross-Browser-Fingerprinting.php (the site is trilingual; if articles in another language appear on your screen, click on the English flag).

By now the team has developed a new version of the fingerprinting system and is working on an effective method to prevent fingerprinting. In order to fine-tune the defense against fingerprinting, my colleagues need your feedback. Please click on http://fingerprint.pet-portal.eu, run a few tests, and share your comments and suggestions with the developers.

Please take a second to visit http://fingerprint.pet-portal.eu and help them and us understand browser fingerprinting.
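For readers new to the idea, here is a toy sketch of what a browser-independent fingerprint might look like: hash a set of system-level attributes that stay stable across browsers. The attribute names here are my own illustration, not the Budapest team’s actual feature set.

```python
# Toy system fingerprint: canonicalize cross-browser attributes, then hash.
import hashlib, json

def system_fingerprint(attrs: dict) -> str:
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

print(system_fingerprint({
    "screen": "1920x1080x24",
    "timezone": "UTC+1",
    "fonts": ["Arial", "Consolas", "Times New Roman"],
    "os": "Windows 7",
}))
```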

The output of a threat modeling session, or the creature from the bug lagoon

Wendy Nather has continued the Twitter conversation, which is now a set of blog posts. (Mine is “Threat Modeling and Risk Assessment,” and hers is “That’s not a bug, it’s a creature.”)

I think we agree on most things, but I sense a little semantic disconnect in some things that she says:

The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

I consider the word “bug” to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you’re doing whiteboard threat modeling, the output should be “things not to do going forward.”

As a result, you’re stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn’t have needed. I consider this a to-do list, not a bug list.
(“That’s not a bug, it’s a creature,” Wendy Nather)

I don’t disagree here, but want to take it one step further. I see a list of “things not to do going forward” and a “to-do list” as an excellent start for a set of tests to confirm that those things happen or don’t. So you file bugs, and those bugs get tracked and triaged, and ideally closed as resolved or fixed when you have a test that confirms they ain’t happening. If you want to call this something else, that’s fine; tracking and managing bugs can be too much work. The key to me is that the “things not to do” sink in, and the to-do list gets managed in some good way.
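As a sketch of what that can look like in practice, here is a hypothetical test pinning down one “thing not to do” (shipping an endpoint without authentication); the service, URL, and framework choice are invented for illustration.

```python
# Hypothetical regression test for a threat-model finding: the records
# endpoint must reject unauthenticated callers. Run under pytest.
import requests

def test_records_endpoint_rejects_unauthenticated_requests():
    resp = requests.get("http://localhost:8080/api/records")  # no credentials sent
    assert resp.status_code in (401, 403), "endpoint served data without auth"
```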

And again, I agree with her points about probability, and her point that probability is lurking in people’s minds is an excellent one, worth repeating:

the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don’t come out and say, “But who would want to do that?” or “Come on, we’re not a bank or anything,” they’ll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.

I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.

Threat Modeling and Risk Assessment

Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.

So first, what was said:

(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. 🙂

I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. My issue is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.

I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.

First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.

And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)
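To illustrate what a bug bar buys you in triage, here is a simplified sketch, loosely inspired by the public SDL sample; the categories and severities are illustrative, not the actual bar.

```python
# A toy security bug bar: map (threat type, scope) to a severity that bug
# triage can weigh against other bugs, without a security expert having to
# re-derive the priority each time.
BUG_BAR = {
    ("elevation_of_privilege", "remote"): "Critical",
    ("information_disclosure", "targeted"): "Important",
    ("denial_of_service", "temporary"): "Moderate",
}

def severity(threat: str, scope: str) -> str:
    # Anything outside the bar still needs a human decision.
    return BUG_BAR.get((threat, scope), "needs expert triage")

print(severity("elevation_of_privilege", "remote"))  # -> Critical
```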

My concern, and the reason I got into a back and forth, is I suspect that putting risk assessment into threat modeling keeps organizations from ensuring that expertise is in bug triage, and that’s risky.

(As usual, these opinions are mine, and may differ from those of my employer.)

[Updated to correct editing issues.]

Elevation of Privilege (Web Edition) Question

Someone wrote to me to ask:

A few cards are not straightforward to apply to a webapp situation (some seem to assume a proprietary client) – do you recommend discarding them, or perhaps you have thought of a way to rephrase them somehow?

For example:

“An attacker can make a client unavailable or unusable but the problem goes away when the attacker stops”

I don’t have a great answer, but I’m thinking someone else might have taken it on.

For Denial of Service attacks in the Microsoft SDL bug bar, we roughly break things down into a matrix of (server, client) × (persistent, temporary). That doesn’t seem right for web apps. Is there a better approach, and perhaps even one that can translate into some good threat cards?
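Spelled out, that matrix is just the cross product below; the open question is what the right axes are for a web app, where client and server responsibilities blur.

```python
# The SDL-style DoS breakdown as a cross product of target and duration.
from itertools import product

for target, duration in product(("server", "client"), ("persistent", "temporary")):
    print(f"Denial of service / {target} / {duration}")
```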

The 1st Software And Usable Security Aligned for Good Engineering (SAUSAGE) Workshop

National Institute of Standards and Technology
Gaithersburg, MD USA
April 5-6, 2011

Call for Participation

The field of usable security has gained significant traction in recent years, evidenced by the annual presentation of usability papers at the top security conferences, and security papers at the top human-computer interaction (HCI) conferences. Evidence is growing that significant security vulnerabilities are often caused by security designers’ failure to account for human factors. Despite growing attention to the issue, these problems are likely to continue until the underlying development processes address usable security.

See http://www.thei3p.org/events/sausage2011.html for more details.

6502 Visual Simulator

In “6502 visual simulator,” Bunnie Huang writes:

It makes my head spin to think that the CPU from the first real computer I used, the Apple II, is now simulateable at the mask level as a browser plug-in. Nothing to install, and it’s Open-licensed. How far we have come…a little more than a decade ago, completing a project like this would have resulted in a couple PhDs being awarded, or regarded as trade secret by some big EDA vendor. This is just unreal…but very cool!

Visual6502.org, via Justin Mason

Use crypto. Not too confusing. Mostly asymmetric.

A little ways back, Gunnar Peterson said “passwords are like hamburgers, taste great but kill us in long run wean off password now or colonoscopy later.” I responded: “Use crypto. Not too confusing. Mostly asymmetric.” I’d like to expand on that a little. Not quite so much as Michael Pollan, but a little.

The first sentence, “use crypto,” is a simple one. It means that more security requires getting away from sending strings as a way to authenticate people at a distance. This applies (obviously) to passwords, but also to SSNs, mother’s “maiden” names, your first car, and it will apply to biometrics. Sending a string which represents an image of a fingerprint is no harder to fake than sending a password. Stronger authenticators will need to involve an algorithm and a key.

The second, “not too confusing,” is a little more subtle, because there are layers of confusion. There’s developer confusion as the system is implemented: adding pieces, like captchas, without a threat model. There’s user confusion as to what program popped that demand for credentials, what site they’re connecting to, or what password they’re supposed to use. There’s also confusion about what makes a good password when one site demands no fewer than 10 characters and another insists on no more. But regardless, it’s essential that a strong authentication system be understood by at least 99% of its users, and that the authentication is either mutual or resistant to replay, reflection and man-in-the-middle attacks. In this, “TOFU” (trust on first use) is better than PKI. I prefer to call TOFU “persistence” or “key persistence.” This is in keeping with Pollan’s belief that things with names are better than things with acronyms.
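As a toy sketch, key persistence is just: remember the key you saw on first contact, and alarm if it ever changes. The storage format here is invented for illustration.

```python
# Minimal TOFU / key persistence: trust the first key seen for a host, then
# require every later connection to present the same key fingerprint.
import json, pathlib

STORE = pathlib.Path("known_keys.json")

def key_is_trusted(host: str, fingerprint: str) -> bool:
    known = json.loads(STORE.read_text()) if STORE.exists() else {}
    if host not in known:
        known[host] = fingerprint            # first use: persist the key
        STORE.write_text(json.dumps(known))
        return True
    return known[host] == fingerprint        # later uses: must match exactly
```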

Finally, “mostly asymmetric.” There are three main building blocks in crypto: one-way functions, symmetric ciphers, and asymmetric ciphers. Asymmetric systems are those with two mathematically related keys, only one of which is kept secret. These are better because forgery attacks are harder: only one party holds a given key. (Systems that use one-way functions can also deliver this property.) There are a few reasons to avoid asymmetric ciphers, mostly having to do with the compute capabilities of really small devices like smartcards, or very power-limited devices like pacemakers.
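Here is a minimal sketch of what “an algorithm and a key” looks like in practice, using the pyca/cryptography library’s Ed25519 signatures; the verifier issues a fresh challenge, so no reusable secret string ever crosses the wire.

```python
# Challenge-response with an asymmetric key: the verifier stores only the
# public key, so a server breach discloses nothing an attacker can replay.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # stays on the user's device
public_key = private_key.public_key()        # persisted by the verifier

challenge = os.urandom(32)                   # fresh per login, defeating replay
signature = private_key.sign(challenge)
public_key.verify(signature, challenge)      # raises InvalidSignature on forgery
print("authenticated")
```

Persisting public_key on first contact is exactly the TOFU, or key persistence, idea sketched above.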

So there you have it: Use crypto. Not too confusing. Mostly asymmetric.

Hacker Hide and Seek

Core Security’s Ariel Waissbein has been building security games for a while now. They were kind enough to send a copy of their “Exploit” game after I released Elevation of Privilege. [Update: I had confused Ariel Futoransky and Ariel Waissbein, because Waissbein wrote the blog post. Sorry!] At Defcon, he and his colleagues will be running a more capture-the-flag sort of game, titled “Hide and seek the backdoor:”

For starters, a backdoor is said to be a piece of code intentionally added to a program to grant remote control of the program – or the host that runs it – to its author, that at the same time remains difficult to detect by anybody else.

But this last aspect of the definition actually limits its usefulness, as it implies that the validity of the backdoor’s existence is contingent upon the victim’s failure to detect it. It does not provide any clue at all into how to create or detect a backdoor successfully.

A few years ago, the CoreTex team did an internal experiment at Core and designed the Backdoor Hiding Game, which mimics the old game Dictionary. In this new game, the game master provides a description of the functionalities of a program, together with the setting where it runs, and the players must then develop programs that fulfill these functionalities and have a backdoor. The game master then mixes all these programs with one that he developed and has no backdoors, and gives these to the players. Then, the players must audit all the programs and pick the benign one.

First, I think this is great, and I look forward to seeing it. I do have some questions. What elements of the game can we evaluate and how? A general question we can ask is “Is the game for fun or to advance the state of the art?” (Both are ok and sometimes it’s unclear until knowledge emerges from the chaos of experimentation.) His blog states “We discovered many new hiding techniques,” which is awesome. Games that are fun and advance the state of the art are very hard to create. It’s a seriously cool achievement.

My next question is, how close is the game to the reality of secure software development? How can we transfer knowledge from one to the other? The rules seem to drive backdoors into almost all of the game’s code (assuming every player succeeds, (n-1)/n of the programs are backdoored). That’s a much higher incidence of backdoors than exists in the wild. I’m assuming that the code will all be custom, and thus short enough to create and audit in a game, which also leads to a higher concentration of backdoors per line of code. That different concentration will reward different techniques from those that could scale to a million lines of code.

More generally, do we know how to evaluate hiding techniques? Do hackers playing a game create the same sort of backdoors as disgruntled employees or industrial spies? Between this contest and the Underhanded C Contests, we have two corpuses of backdoored code. However, I’m not aware of any corpus of deployed backdoor code to which we could compare them.

So anyway, I look forward to seeing this game at Defcon, and in the future, more serious games for information security.

Previously, I’ve blogged about the Underhanded C contest here and here.

SOUPS Keynote & Slides

This week, the annual Symposium on Usable Privacy and Security (SOUPS) is being held on the Microsoft campus. I delivered a keynote, entitled “Engineers Are People Too:”

In “Engineers Are People, Too” Adam Shostack will address an often invisible link in the chain between research on usable security and privacy and delivering that usability: the engineer. All too often, engineers are assumed to have infinite time and skills for usability testing and iteration. They have time to read papers, adapt research ideas to the specifics of their product, and still ship cool new features. This talk will bring together lessons from enabling Microsoft’s thousands of engineers to threat model effectively, share some new approaches to engineering security usability, and propose new directions for research.

A fair number of people have asked for the slides, and they’re here: Engineers Are People Too.

Lady Ada books opening June 11

Ada’s Technical Books is Seattle’s only technical book store, located in the Capitol Hill neighborhood of Seattle, Washington. Ada’s specifically carries new, used, & rare books on Computers, Electronics, Physics, Math, and Science as well as hand-picked inspirational and leisure reading, puzzles, brain teasers, and gadgets geared toward the technically minded customer.

From the store’s blog, “Grand Opening: June 11th.”

I’ve been helping David and Danielle a little with book selection because they’re good folks and I love great bookstores. I encourage Seattle readers to stop by.

Facebook, Here’s Looking at You Kid

The last week and a bit has been bad for Facebook. It’s hard to recall exactly what triggered the avalanche of stories. Maybe it was the flower diagram we mentioned. Maybe it was the New York Times interactive graphic of just how complex it is to set privacy settings on Facebook:

[Image: facebook-privacy.jpg, the New York Times graphic of Facebook’s privacy settings]

Maybe it was Zuckerberg calling people who trust him “dumb fucks,” or the irony of him telling a journalist that “Having two identities for yourself is an example of a lack of integrity.” Or maybe it was the irony that telling people you believe in privacy while calling them dumb fucks is, really, a better example of a lack of integrity than having two identities.

Maybe it was the Facebook search (try “my dui”), or “Facebook: The Privatization of our Privates” and “Life in the Company Town.” Maybe it was getting on CNN that helped propel it.

It all generated some great discussion, like danah boyd’s “Facebook and ‘radical transparency’ (a rant).” It also generated some not-so-great ideas, like “Poisoning The Well – A Response To Privacy Concerns…” and “How to protect your privacy from Facebook.” These are differently wrong, and I’ll address them one at a time. First, poisoning the well. I’m a big fan of poisoning the wells of mandatory data collectors. But the goal of Facebook is to connect and share. If you have to poison the data you’re trying to share with your friends, the service is fundamentally broken. Similarly, if you’re so scared of their implicit data collection that you use a different web browser to visit their site, and you only post information you’re willing to see made public, you might as well use more appropriate and specialized sites like Flickr, LinkedIn, OkCupid, Twitter or Xbox Live. (I think that covers all the main ways people use Facebook.)

But Facebook’s problems aren’t unique. We’ve heard them before, with sites like Friendster, MySpace, Tribe and Orkut. All followed the same curve of rise, pollution and fall that Facebook is going to follow. It’s inevitable and inherent in the attempt to create a centralized technical implementation of all the myriad ways in which human beings communicate.

Play it Sam…once more, for old time’s sake

I think there are at least four key traps for every single-operator, all-purpose social network.

  1. Friend requests The first big problem is that as everyone you’ve ever had a beer with, along with that kid who beat you up in 3rd grade, sends you a friend request, the joy of “having lots of friends” is replaced with the burden of managing lots of “friends.” And as the network grows, so does the burden. Do you really know what that pyronut from college chemistry is up to? Do you want to have to judge the meaning of a conversation in light of today’s paranoia? This leads us to the next problem:
  2. Metaphors Facebook uses two metaphors for relationships: friend and network. Both are now disconnected from their normal English meanings. An f-friend is not the same as a real friend. You might invite a bunch of friends over for drinks. Would you send the same invite to your f-friends list? Similarly, if I were to join Facebook today, I could join a Microsoft network, because I work there (although I’m not speaking for them here). Now, in the time that Facebook has been open to the world, lots of people have gained and lost Microsoft email addresses. Some have been full-time employees. Some have been contractors of various types. Some have been fired. Is there a process for managing that? Maybe; we have a large HR department, but I have no idea. One key point is that membership in an f-network is not the same as membership in a real network. The meanings of the words evolve through practice and use. But there’s another issue with metaphors as made concrete through the technical decisions of Facebook programmers: there aren’t enough of them. I think that there’s also now “fans” available as an official metaphor, but what about salesguy-you-met-at-a-conference-who-won’t-stop-bugging-you? The technical options don’t match the nuance with which social beings handle these sorts of questions, and even if they did, telling a computer all that is too much of a bother. (See the chart above for an attempt to make it do something related.)
  3. Privacy means many things Privacy means different things to different people. Even the same person at different times wants very different things, and the cost of figuring out what they will want in some unforeseen future is too high. So privacy issues will keep acting as a thorn in the side of social network systems, and worse for centralized ones.
  4. Different goals Customers & the business have different desires for the system. Customers want fast, free, comprehensive, private, and easy to use. They don’t want to worry about losing their jobs or not getting one. They don’t want to worry about stalkers. They don’t want their sweetie to look over their shoulder and see an ad for diamond rings after talking to their friends about engagement. But hiring managers want to see that embarrassing thing you just said. (Hello, revenue model, although Facebook has not, as far as I know, tapped this one yet.) Stalkers are heavy users who you can show ads to. Advertisers want to show those diamond ring ads. Another example of this is the demand to use your real name. Facebook’s demand that you use your real name is in contrast to 4 of the 5 alternative sites above. Nicknames, pseudonyms, handles, and tags are common all over the web because, in fact, separating our identities is a normal activity. This is an idea that I talk about frequently. But it’s easier to monetize you if Facebook has your real name.

So I’m shocked, shocked to discover that Facebook is screwed up. A lot of other shocked people are donating to Diaspora ($172,000 of their $10,000 goal has been pledged. There’s interesting game theory about commitment, delivery on those pledges, and whether they should just raise a professional round of VC, but this post is already long.) There’s also Appleseed: A Privacy-Centric Facebook Slayer With Working Code.

Now, before I close, I do want to say that I see some of this as self-inflicted, but the underlying arc doesn’t rely on Zuckerberg. It’s not about the folks who work for Zuckerberg, who, for all I know, are the smartest, nicest, best-looking folks anywhere. It’s about the fundamental model of centralized, all-purpose social networks being broken.

To sum it all up, I’m gonna hand the microphone to Rick:

If you don’t get off that site, you’ll regret it. Maybe not today, maybe not tomorrow, but soon and for the rest of your life. Last night we said a great many things. You said I was to do the thinking for both of us. Well, I’ve done a lot of it since then, and it all adds up to one thing: you’re getting off that Facebook. Now, you’ve got to listen to me! You have any idea what you’d have to look forward to if you stayed here? Nine chances out of ten, we’d both wind up with our privacy in ruins. Isn’t that true, Louie?

Capt. Renault: I’m afraid that Major Zuckerberg will insist.