The Psychology of Password Managers

As I think more about the way people are likely to use a password manager, I think there are real problems with the way master passwords are set up. As I write this, I’m deeply aware that I’m risking going into a space of “it’s logical that” without proper evidence.

Let’s start from the way most people will likely come to a password manager. They’ll be in an exploratory mood, and while they may select a good password, they may also select a simple one that’s easy to remember. That password, initially, will not be protecting very much, and so people may be tempted to pick one that’s ‘appropriate’ for what’s being protected.

Over time, the danger is that they will not think to update that password and improve it, but their trust in the password manager will increase. As their trust increases, the number of passwords that they’re protecting with a weak master password may also increase.

Now we get to changing the master password. Assuming that people can even find the option, how often will someone wake up and say “hey, I should change my master password”? Changing a master password is also scary. Now that I’ve accumulated hundreds of passwords, what happens if I forget my new password? (As it turns out, 1Password makes weekly backups of my password file, but I wasn’t aware of that. Also, what happens to the old files if I change my master password? Am I now exposed through both? That’s ok if I’m changing out of caution, less ok if I’m changing because I think my master password was exposed.)

Perhaps there’s room for two features here: first, that on password change, people could choose to have either master password unlock things. (Encrypt the master key with keys derived from both the old & new masters. This is no less secure than having backups available, and may address a key element of psychological acceptability.) You’d have to communicate that this will work, and let people choose. User testing that text would be fascinating.
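To make that dual-unlock idea concrete, here is a minimal sketch in Python using the third-party cryptography package. It is not how 1Password actually stores keys; the vault layout, salts, iteration count, and Fernet wrapping are assumptions chosen for brevity.

# A minimal sketch of "either master password unlocks the vault" during a
# password change. Hypothetical layout: the vault data is encrypted under a
# random master key; only the wrapping of that key is shown here.
import os, base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def wrap_master_key(master_key: bytes, password: str, salt: bytes) -> bytes:
    # Derive a key-encryption key from the password, then encrypt the master key.
    kek = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000).derive(password.encode())
    return Fernet(base64.urlsafe_b64encode(kek)).encrypt(master_key)

master_key = os.urandom(32)                      # protects the actual vault data
salt_old, salt_new = os.urandom(16), os.urandom(16)
wrapped_copies = [
    (salt_old, wrap_master_key(master_key, "old master password", salt_old)),
    (salt_new, wrap_master_key(master_key, "new master password", salt_new)),
]
# Keeping both wrapped copies means either password can recover the master key,
# which is no weaker than keeping old backups around.

Unlocking would simply try each wrapped copy in turn until one decrypts cleanly.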

A second feature might be to let people know how long they’ve been using the same master password, and gently encourage them to change it. This one is tricky mostly because I have no idea if it’s a good idea. Should you pick one super-strong master and use it for decades? Is there value to changing it now and again? Where could we seek evidence with which to test our instincts? What happens to long term memory as people age? Does muscle memory cause people to revert their passwords? (I know I’ve done it.) We could use a pattern like the gold bar to unobtrusively prompt.

A last element that might improve the way people use master passwords would be better browser integration. Having just gone to check, I was surprised how many sites my browser is tracking. Almost all of them were low value, and all of them now are. But why do we have two places that can store passwords, especially when one is less secure than the other? A browser API that allows a password manager to say “I’ve got this one” would be a welcome improvement.

Studying these ideas and seeing which ones are invalidated by data gathering would be cool. Talking to people about how they use their password managers would also be interesting work. As Bonneau has shown, the quest to replace passwords is going to be arduous. Learning how to better live with what we have seems useful.

1Password & Hashcat

The folks at Hashcat have some interesting observations about 1Password. The folks at 1Password have a response, and I think there’s all sorts of fascinating lessons here.

The crypto conversations are interesting, but at the end of the day, a lot of the security unavoidably comes down to master password strength. I’d like to offer up a simple contribution: AgileBits should make two non-cryptographic changes in addition to any crypto changes.

These relate to the human end of the issue, and how real humans make decisions. That is, picking a master password is a one-time event, and even if there’s a strength meter, factors of memorability, typability, and so on all come into play when the user selects a password while first installing 1Password.

Those human factors are not good for security, but I think they’re addressable.

First, the master password entry screens should display the same password strength meter that’s displayed everywhere else. It’s all well and good to discuss in a blog post that people need strong master passwords, but the software should give regular feedback about the strength of that master password. Displaying a strength meter each time it’s entered creates some small risk of information disclosure via shoulder-surfing, and adds pressure to make it stronger.
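As a rough illustration of what such a meter computes, here is a deliberately naive character-class estimate in Python. Real meters (including whatever 1Password ships) use much smarter models, so treat this only as a sketch.

import math, string

def naive_strength_bits(password: str) -> float:
    # Estimate the character pool from the classes actually used, then treat
    # every character as an independent random draw from that pool.
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

for pw in ("monkey", "kT9#x!", "correct horse battery staple"):
    print(pw, "->", round(naive_strength_bits(pw), 1), "bits (naive upper bound)")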

Second, they should make it easier to change the master password. I looked around and couldn’t figure out how to do so in a few minutes. [Update: It’s in Preferences > Security. I thought I’d looked there; I may have missed it.]


If master passwords are so important, then it’s important for the software to help its customers get them right.

There’s an interesting link here to “Why Johnny Can’t Encrypt.” In that 1999 paper, Whitten and Tygar made the point that all the great crypto in PGP couldn’t protect its users if they didn’t make the right decisions, and making those decisions is hard.

In this case, the security of password vaults depends not only on the crypto, but also on the user interface. Figuring out the mental models that people have around password storage tools, and how the interface choices those tools make shape those mental models, is an important area and deserves lots of careful attention.

Does 1Password Store Passwords Securely?

In “‘Secure Password Managers’ and ‘Military-Grade Encryption’ on Smartphones: Oh, Really?,” Andrey Belenko and Dmitry Sklyarov write quite a bit about a lot of password management tools. This is admirable work, and I’m glad BlackHat provided a forum for it. However, as a user of 1Password, I was concerned to read the following about that program:

However, because PKCS7 padding is used when encrypting database encryption key, it is possible to verify password just by computing KEK (using MD5 hash function), decrypting last block of encrypted database key, and checking if it equals to 16 bytes with value 0x10 (this will be the PKCS7-compliant padding when encrypting data whose length is exactly N blocks of underlying cipher). Thus, very fast password recovery attack is possible, requiring one MD5 computation and one AES trial decryption per password.

As a result of this design issue, password guessing against passwords [stored by 1Password for iPhone] is estimated (by Belenko and Sklyarov) at 15 million per second. This is the 3rd-worst performance out of a group of 11, and 3,000-fold worse than the best performer in the table (Strip Lite Password Manager, at 5,000 per second).
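To see why the guess rate can be so high, here is a rough Python sketch of the check the paper describes: one MD5 and one AES block decryption per candidate password. The key-file layout details (how the salt and the preceding ciphertext block are located) are assumptions for illustration, not 1Password’s actual format.

import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def guess_looks_right(guess: str, salt: bytes,
                      prev_block: bytes, last_block: bytes) -> bool:
    # One MD5 to derive the key-encryption key from the guessed password.
    kek = hashlib.md5(salt + guess.encode()).digest()
    # One AES-CBC block decryption of the final block of the encrypted key.
    decryptor = Cipher(algorithms.AES(kek), modes.CBC(prev_block)).decryptor()
    plain = decryptor.update(last_block) + decryptor.finalize()
    # A correct guess yields a full block of PKCS7 padding: sixteen 0x10 bytes.
    return plain == b"\x10" * 16

Because both operations are cheap and trivially parallel, millions of guesses per second on commodity hardware is unsurprising.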

The folks at Agile Bits, makers of 1Password, took the time to blog about the paper, and accept the implications of the work in “Strong Security Requires Strong Passwords.”

However, I think they misunderstand the paper and the issue when they write:

The main reason the password can be determined so quickly is because 6 characters provide relatively few possible password combinations.

I believe the main reason for the issue is the way in which 1Password has chosen to store passwords. They allude to this further down in the post when they write:

With that said, as Dmitry and Andrey point out, 1Password could do more to slow the password discovery process, thereby making it take even longer. For example, on the desktop (both Windows and Mac), 1Password uses PBKDF2 to significantly slow down attackers. Currently this is not available on iOS as we needed to support older devices. The next major release of 1Password will only support iOS 5 and at that time we will be incorporating these additional defences.

I still don’t think that’s an adequate response. Several of their competitors on iOS use their own implementation of PBKDF2. Now that’s a risky thing to do, and I’m aware that it might be expensive to implement and test, and the impact of a bug in such code might reasonably be pretty high. So it’s not a slam dunk to do so, in the general case. But in this case, it appears that Apple ships an open source version of PBKDF2: http://opensource.apple.com/source/CommonCrypto/CommonCrypto-55010/Source/API/CommonKeyDerivation.c. So the risk is far lower than creating a new implementation. Therefore, I think Agile Bits should change the way it validates passwords, and incorporate PBKDF2 into all versions of 1Password soon.
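For readers who want to see what stretching buys, here is a small sketch using Python’s standard-library PBKDF2 rather than the CommonCrypto C routine linked above; the parameters are illustrative, not 1Password’s actual settings.

import hashlib, os, time

salt = os.urandom(16)

def derive_key(password: str, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256: each iteration multiplies the attacker's per-guess cost.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               iterations, dklen=32)

for iterations in (1, 10_000, 100_000):
    start = time.perf_counter()
    derive_key("example master password", iterations)
    print(f"{iterations:>7} iterations: {time.perf_counter() - start:.4f} s per guess")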

They also state:

1Password for iPhone will no longer allow items to be protected by just the PIN code. The PIN code was meant for less sensitive items and we always expected the Master Password protection to be enabled on important items. To simplify things, all items will be protected with the Master Password, just like on iPad, Mac, and Windows.

I understand the choice to do this, and move to stronger protection for all items. At the same time, I like the PIN-only protection for my low-value passwords. Entering passwords on a phone is a pain. It’s not an easy trade-off, and a 4-digit PIN is always going to be easy to brute force with modern CPUs, however much salting and stretching is applied. I’m capable of making risk management decisions, but I also understand that many people may feel that Agile Bits wouldn’t offer the choice if it weren’t secure. I respect the choice that Agile Bits is making to force stronger protection on all their customers.
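Some back-of-the-envelope arithmetic shows why: a four-digit PIN has only 10,000 possibilities, so even aggressive stretching barely slows an offline attacker. The guess rates below are illustrative assumptions, not measurements of any product.

PIN_SPACE = 10 ** 4  # 0000 through 9999

scenarios = [
    ("raw MD5-style check, ~15M guesses/s", 15_000_000),
    ("10,000 PBKDF2 iterations, ~1,000 guesses/s", 1_000),
    ("very heavy stretching, ~10 guesses/s", 10),
]

for label, rate in scenarios:
    print(f"{label}: all PINs tried in {PIN_SPACE / rate:,.1f} seconds")

Even the heaviest of those assumed settings exhausts the whole PIN space in well under half an hour.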

In summary, 1Password is not storing passwords as securely as it could, and if your phone is stolen, or your phone backups are accessed, those choices leave your passwords at more risk than competing products. I don’t think the fixes to this require iOS 5. I think the right thing for Agile Bits to do is to ship an update with better protection against brute force attacks for all their customers, and to do so soon.

[Update 3 (April 10) Agile Bits has released an update which implements 10K PBKDF2 iterations.]

[Update 2: 1Password has now stated that they will do this, adding PBKDF2 to all versions for iOS, which had been the only platform impacted by these issues. They have a hard balance of speed versus security to make, and I encourage them to think it through and test appropriately, rather than rushing a bad fix. ]

[Updated to clarify that this applies only to the iPhone version of 1Password.]

Shocking News of the Day: Social Security Numbers Suck

The firm’s annual Banking Identity Safety Scorecard looked at the consumer-security practices of 25 large banks and credit unions. It found that far too many still rely on customers’ Social Security numbers for authentication purposes — for instance, to verify a customer’s identity when he or she wants to speak to a bank representative over the telephone or re-set a password.

All banks in the report used some version of the Social Security number as a means of authenticating the customer, Javelin found. The pervasive use of Social Security numbers was surprising, given the importance of Social Security numbers as a tool for identity theft, said Phil Blank, managing director of security, risk and fraud at Javelin. (“Banks Rely Too Heavily On Social Security Numbers, Report Finds,” Ann Carrns, New York Times)

Previously here: “Social Security Numbers are Worthless as Authenticators” (2009), or “Bad advice on SSNs” (2005).

Niels Bohr was right about predictions

There’s been much talk of predictions lately, for some reason. Since I don’t sell anything, I almost never make them, but I did offer two predictions early in 2010, during the germination phase of a project a colleague was working on. Since these sort of meet Adam’s criteria by having both numbers and dates, I figured I’d share.

With minor formatting changes, the following is from my email of April, 2010.

Prediction 1

Regulation E style accountholder liability limitation will be extended
to commercial accountholders with assets below some reasonably large
value by 12/31/2010.

Why:  ACH and wire fraud are an increasingly large, and increasingly
public, problem.  Financial institutions will accept regulation in order
to preserve confidence in on-line channel.

WRONG!

Prediction 2

An episode of "state-sponsored SSL certificate fraud/forgery" will make
the public press.

Why: There is insufficient audit of the root certs that browser vendors
innately trust, making it sufficiently easy for a motivated attacker to
"build insecurity in" by getting his untrustworthy root cert trusted by
default.  The recent Mozilla kerfuffle over CNNIC is an harbinger of
this[1].  Similarly, Chris Soghoian's recent work[2] will increase
awareness of this issue enough to result in a governmental actor who has
done it being exposed.

Right!

But only because for this one I forgot to put in a date. (I meant to also say “by 12/31/2010,” which makes this one WRONG! too.)

I was motivated to make this post because I once again came across Soghoian’s paper just the other day (I think he cited it in a blog post I was reading). He really nailed it. I predict he’ll do so again in 2012.

The output of a threat modeling session, or the creature from the bug lagoon

Wendy Nather has continued the twitter conversation, which is now a set of blog posts. (My comments are in “Threat Modeling and Risk Assessment,” and hers are in “That’s not a bug, it’s a creature.”)

I think we agree on most things, but I sense a little semantic disconnect in some things that she says:

The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

I consider the word “bug” to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you’re doing whiteboard threat modeling, the output should be “things not to do going forward.”

As a result, you’re stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn’t have needed. I consider this a to-do list, not a bug list.
(“That’s not a bug, it’s a creature,” Wendy Nather)

I don’t disagree here, but want to take it one step further. I see a list of “things not to do going forward” and a “todo list” as an excellent start for a set of tests to confirm that those things happen or don’t. So you file bugs, and those bugs get tracked and triaged and ideally closed as resolved or fixed when you have a test that confirms that they ain’t happening. If you want to call this something else, that’s fine–tracking and managing bugs can be too much work. The key to me is that the “things not to do” sink in, and the to-do list gets managed in some good way.

And again, I agree with her points about probability, and her point that probability is lurking in people’s minds is an excellent one, worth repeating:

the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don’t come out and say, “But who would want to do that?” or “Come on, we’re not a bank or anything,” they’ll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.

I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.

Threat Modeling and Risk Assessment

Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.

So first, what was said:

(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. 🙂

I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. My issue is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.

I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.

First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.

And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)

My concern, and the reason I got into a back and forth, is I suspect that putting risk assessment into threat modeling keeps organizations from ensuring that expertise is in bug triage, and that’s risky.

(As usual, these opinions are mine, and may differ from those of my employer.)

[Updated to correct editing issues.]

Telephones and privacy

Three stories, related by the telephone, and their impact on privacy:

  • CNN reports that your cell phone is being tracked in malls:

    Starting on Black Friday and running through New Year’s Day, two U.S. malls — Promenade Temecula in southern California and Short Pump Town Center in Richmond, Va. — will track guests’ movements by monitoring the signals from their cell phones.


    Still, the company is preemptively notifying customers by hanging small signs around the shopping centers. Consumers can opt out by turning off their phones.


    The tracking system, called FootPath Technology, works through a series of antennas positioned throughout the shopping center that capture the unique identification number assigned to each phone (similar to a computer’s IP address), and tracks its movement throughout the stores.

    The company in question is Path Intelligence, and they claim that since they’re only capturing IMSI numbers, it’s anonymous. However, the IMSI is the name by which the phone company calls you. It’s a label which identifies a unique phone (or the SIM card inside of it) which is pretty darned closely tied to a person. The IMSI identifies a person more accurately and effectively than an IP address. The EU regulates IP addresses as personally identifiable information. Just because the IMSI is not easily human-readable does not make it anonymous, and does not make it not-a-name.

    It’s really not clear to me how Path Intelligence’s technology is legal anywhere that has privacy or wiretap laws.

  • Kashmir Hill at Forbes reports on “How Israeli Spies Were Betrayed By Their Cell Phones“:

    Using the latest commercial software, Nasrallah’s spy-hunters unit began methodically searching for traitors in Hezbollah’s midst. To find them, U.S. officials said, Hezbollah examined cellphone data looking for anomalies. The analysis identified cellphones that, for instance, were used rarely or always from specific locations and only for a short period of time. Then it came down to old-fashioned, shoe-leather detective work: Who in that area had information that might be worth selling to the enemy?

    This reminds me of the bin Laden story: he was found in part because he had no phone or internet service. What used to be good tradecraft now stands out. Of course, maybe some innocent folks were just opting out of Path Intelligence. Hmmm. I wonder who makes that “latest commercial software” Nasrallah’s team is using?

  • “Who’s on the Line? Increasingly, Caller ID Is Duped,” Matt Richtel, The New York Times

    Caller ID has been celebrated as a defense against unwelcome phone pitches. But it is backfiring.

    Telemarketers increasingly are disguising their real identities and phone numbers to provoke people to pick up the phone. “Humane Soc.” may not be the Humane Society. And think the I.R.S. is on the line? Think again.

    Caller ID, in other words, is becoming fake ID.

    “You don’t know who is on the other end of the line, no matter what your caller ID might say,” said Sandy Chalmers, a division manager at the Department of Agriculture, Trade and Consumer Protection in Wisconsin.

    Starting this summer, she said, the state has been warning consumers: “Do not trust your caller ID. And if you pick up the phone and someone asks for your personal information, hang up.”

    I’m shocked that a badly designed invasion of privacy doesn’t offer the security people think it does.

    When I say badly designed, I’m referring to in-band signaling late in the call setup, not to mention that the Bells already had ANI. But they didn’t want to risk the privacy concerns with caller-ID impacting ANI, so they designed an alternative.

What’s Wrong and What To Do About It?

Let me start with an extended quote from “Why I Feel Bad for the Pepper-Spraying Policeman, Lt. John Pike“:

They are described in one July 2011 paper by sociologist Patrick Gillham called, “Securitizing America.” During the 1960s, police used what was called “escalated force” to stop protesters.

“Police sought to maintain law and order often trampling on protesters’ First Amendment rights, and frequently resorted to mass and unprovoked arrests and the overwhelming and indiscriminate use of force,” Gillham writes and TV footage from the time attests. This was the water cannon stage of police response to protest.

But by the 1970s, that version of crowd control had given rise to all sorts of problems and various departments went in “search for an alternative approach.” What they landed on was a paradigm called “negotiated management.” Police forces, by and large, cooperated with protesters who were willing to give major concessions on when and where they’d march or demonstrate. “Police used as little force as necessary to protect people and property and used arrests only symbolically at the request of activists or as a last resort and only against those breaking the law,” Gillham writes.

That relatively cozy relationship between police and protesters was an uneasy compromise that was often tested by small groups of “transgressive” protesters who refused to cooperate with authorities. They often used decentralized leadership structures that were difficult to infiltrate, co-opt, or even talk with. Still, they seemed like small potatoes.

Then came the massive and much-disputed 1999 WTO protests. Negotiated management was seen to have totally failed and it cost the police chief his job and helped knock the mayor from office. “It can be reasonably argued that these protests, and the experiences of the Seattle Police Department in trying to manage them, have had a more profound effect on modern policing than any other single event prior to 9/11,” former Chicago police officer and Western Illinois professor Todd Lough argued.

Former Seattle police chief Norm Stamper gives his perspective in “Paramilitary Policing From Seattle to Occupy Wall Street“:

“We have to clear the intersection,” said the field commander. “We have to clear the intersection,” the operations commander agreed, from his bunker in the Public Safety Building. Standing alone on the edge of the crowd, I, the chief of police, said to myself, “We have to clear the intersection.”

Why?

Because of all the what-ifs. What if a fire breaks out in the Sheraton across the street? What if a woman goes into labor on the seventeenth floor of the hotel? What if a heart patient goes into cardiac arrest in the high-rise on the corner? What if there’s a stabbing, a shooting, a serious-injury traffic accident? How would an aid car, fire engine or police cruiser get through that sea of people? The cop in me supported the decision to clear the intersection. But the chief in me should have vetoed it. And he certainly should have forbidden the indiscriminate use of tear gas to accomplish it, no matter how many warnings we barked through the bullhorn.

My support for a militaristic solution caused all hell to break loose. Rocks, bottles and newspaper racks went flying. Windows were smashed, stores were looted, fires lighted; and more gas filled the streets, with some cops clearly overreacting, escalating and prolonging the conflict. The “Battle in Seattle,” as the WTO protests and their aftermath came to be known, was a huge setback—for the protesters, my cops, the community.

Product reviews on Amazon for the Defense Technology 56895 MK-9 Stream pepper spray are funny, as is the Pepper Spraying Cop Tumblr feed.

But we have a real problem here. It’s not the pepper spray that makes me want to cry, it’s how mutually reinforcing a set of interlocking systems has become. It’s the police thinking they can arrest peaceful people for protesting, or for taking video of them. It’s a court system that’s turned “deference” into a spineless art, even when it’s Supreme Court justices getting shoved aside in their role as legal observers. It’s a political system where we can’t even agree to ban the TSA, or work out a non-arbitrary deal on cutting spending. It’s a set of corporatist best practices that allow the system to keep on churning along despite widespread revulsion.

So what do we do about it? Civil comments welcome. Venting welcome. Just keep it civil with respect to other commenters.

Image: Pike Floyd, by Kosso K

Emergent Effects of Restrictions on Teenage Drivers

For more than a decade, California and other states have kept their newest teen drivers on a tight leash, restricting the hours when they can get behind the wheel and whom they can bring along as passengers. Public officials were confident that their get-tough policies were saving lives.

Now, though, a nationwide analysis of crash data suggests that the restrictions may have backfired: While the number of fatal crashes among 16- and 17-year-old drivers has fallen, deadly accidents among 18-to-19-year-olds have risen by an almost equal amount. In effect, experts say, the programs that dole out driving privileges in stages, however well-intentioned, have merely shifted the ranks of inexperienced drivers from younger to older teens.

“The unintended consequences of these laws have not been well-examined,” said Mike Males, a senior researcher at the Center on Juvenile and Criminal Justice in San Francisco, who was not involved in the study, published in Wednesday’s edition of the Journal of the American Medical Assn. “It’s a pretty compelling study.” (“Teen driver restrictions a mixed bag“)

As Princess Leia once said, “The more you tighten your grip, the more teenagers will slip through your fingers.”

Heaven Forbid the New York Times include Atheists

In “Is Your Religion Your Financial Destiny?,” the New York Times presents the following chart of income versus religion:
Image: New York Times chart of income by religion

Note that it doesn’t include the non-religious, which one might think an interesting group as a control. Now, you might think that’s because the non-religious aren’t in the data set. But you’d be wrong. In the data set are atheists, agnostics and “nothing in particular.” That last includes 6.3% of the population as “secular unaffiliated” and another 5.8% as “religious unaffiliated.” Now, 6.3% is more than all non-Christian religions combined. Many of those non-Christian religions are shown in the graphic. Atheists, at 1.6%, are almost as large a group as Jews, a major focus of the article, and 4 times larger than Hindus.

Now, you might also argue that atheists were left out because there were too few in the sample (as opposed to demographic data). But there were 439 atheists, and 251 Reform Jews.

Chris Wysopal pointed out that atheists land after Hindus and Jews for $75k+ incomes.

All the news that’s fit to print, indeed.

Egypt and Information Security

Yesterday, I said on Twitter that “If you work in information security, what’s happening in Egypt is a trove of metaphors and lessons for your work. Please pay attention.” My goal is not to say that what’s happening in Egypt is about information security, but rather to say that we can be both professional and engaged with the historic events going on there. Further, I think it’s important to be engaged.

A number of folks challenged me, for example, “Care to enumerate some of those lessons? The big ones I see are risks of centralized bandwidth control, lack of redundant connections.”

There’s a number of ways that information security professionals can engage with what’s happening.

A first is to use what’s happening to engage with co-workers and management on security issues like employee safety, disaster recovery, and communications redundancy and security. This level of engagement is easy and not political, and it uses a story in the news to open important discussions.

A second way is to use Egypt as a source of what-if scenarios to test those sorts of plans and issues. This gives strong work justification to tracking and understanding what’s happening in Egypt in detail.

A third way is to use Egypt as a way to open discussions of how our technologies can be used in ways which we don’t intend. Oftentimes, security technologies overlap with the ability to impose control on communications. Sometimes, for example with Tor, they can be used to protect people. Other times, they can be used to cut off communications. These are difficult conversations, fraught with emotion and exposing our deep values. But they are difficult because they are important and meaningful. Oftentimes, we as technologists want to focus in on the technology, and leave the societal impact to others. I think Egypt offers us an opportunity to which we can rise, and a lens for us to engage with these questions in the technologies we build or operate.

There’s probably other ways as well, and I’d love to hear how others are engaging.

TSA News roundup

Finally some humor from Lucas Cantor:

Image: abitmuch.jpg (humor from Lucas Cantor)


and another:

Image: tsa-touch-their-balls.jpg

The TSA’s Approach to Threat Modeling

“I understand people’s frustrations, and what I’ve said to the TSA is that you have to constantly refine and measure whether what we’re doing is the only way to assure the American people’s safety. And you also have to think through are there other ways of doing it that are less intrusive,” Obama said.

“But at this point, TSA in consultation with counterterrorism experts have indicated to me that the procedures that they have been putting in place are the only ones right now that they consider to be effective against the kind of threat that we saw in the Christmas Day bombing.” (“Obama: TSA pat-downs frustrating but necessary“)

I’ve spent the last several years developing tools, techniques, methodologies and processes for software threat modeling. I’ve taught thousands of people more effective ways to threat model. I’ve released tools for threat modeling, and even a game to help people learn to threat model. (I should note here that I am not speaking for my employer, and I’m now focused on other problems at work.) However, while I worked on software threat modeling, not terror threat modeling, the President’s statement concerns me. Normally, he’s a precise speaker, and so when he says “effective against the kind of threat that we saw in the Christmas Day bombing,” I worry.

In particular, the statement betrays a horrific backwards bias. The right question to ask is “will this mitigation protect the system against the attack and predictable improvements?” The answer is obviously “no.” TSA has smart people working there; why are they letting that be the headline question?

The problems are obvious. For example, in a Flyertalk thread, Connie asks: “If drug mules swallow drugs and fly, can’t terrorists swallow explosive devices?” and see also “New threat to travellers from al-Qaeda ‘keister bomb’.”

Half of getting the right answer is asking the right questions. If the question the President is hearing is “what can we do to protect against the threat that we saw in the Christmas Day bombing (attempt)?”, then there are three possible interpretations. First is that the right question is being asked at a technical level, and the wrong question is being asked at the top. Second, the wrong questions are being asked up and down the line. Third is that the wrong question is being asked at the top, but it’s the right question for a TSA Administrator who wants to be able to testify before Congress that “everything possible was done.”

I’ve said before and I’ll say again, there are lots of possible approaches to threat modeling, and they all involve tradeoffs. I’ve commented that much of the problem is the unmeetable demands TSA labors under, and suggested fixes. If TSA is trading planned responses to Congress for effective security, I think Congress ought to be asking better questions. I’ll suggest “how do you model future threats?” as an excellent place to start.

Continuing on from there, an effective systematic approach would involve diagramming the air transport system, and ensuring that everyone and everything that gets to the plane without being authorized to be on the flight deck goes through reasonable and minimal searches under the Constitution, which are used solely for flight security. Right now, there are discrepancies in catering and other servicing of the planes, there are issues with cargo screening, etc.

These issues are getting exposed by the red teaming which happens, but that doesn’t lead to a systematic set of balanced defenses.

As long as the President is asking “Is this effective against the kind of threat that we saw in the Christmas Day bombing?” we’ll know that the right threat models aren’t making it to the top.

The 1st Software And Usable Security Aligned for Good Engineering (SAUSAGE) Workshop

National Institute of Standards and Technology
Gaithersburg, MD USA
April 5-6, 2011

Call for Participation

The field of usable security has gained significant traction in recent years, evidenced by the annual presentation of usability papers at the top security conferences, and security papers at the top human-computer interaction (HCI) conferences. Evidence is growing that significant security vulnerabilities are often caused by security designers’ failure to account for human factors. Despite growing attention to the issue, these problems are likely to continue until the underlying development processes address usable security.

See http://www.thei3p.org/events/sausage2011.html for more details.