The Psychology of Password Managers

The more I think about the way people are likely to use a password manager, the more I see real problems with the way master passwords are set up. As I write this, I’m deeply aware that I’m risking going into a space of “it’s logical that” without proper evidence.

Let’s start from the way most people will likely come to a password manager. They’ll be in an exploratory mood, and while they may select a good password, they may also select a simple one that’s easy to remember. That password, initially, will not be protecting very much, and so people may be tempted to pick one that’s ‘appropriate’ for what’s being protected.

Over time, the danger is that they will not think to update that password and improve it, but their trust in the password manager will increase. As their trust increases, the number of passwords that they’re protecting with a weak master password may also increase.

Now we get to changing the master password. Assuming that people can even find the setting, how often will someone wake up and say “hey, I should change my master password?” Changing a master password is also scary. Now that I’ve accumulated hundreds of passwords, what happens if I forget my new password? (As it turns out, 1Password makes weekly backups of my password file, but I wasn’t aware of that. Also, what happens to the old backup files if I change my master password? Am I now exposed via both passwords? That’s OK if I’m changing out of caution, less OK if I’m changing because I think my master password was exposed.)

Perhaps there’s room for two features here. First, on password change, people could choose to have either master password unlock things. (Encrypt the master key with keys derived from both the old and new masters. This is no less secure than having backups available, and may address a key element of psychological acceptability.) You’d have to communicate that this will work, and let people choose. User testing that text would be fascinating.
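
A minimal sketch of that first idea, assuming a vault design in which the data is encrypted under a random master key that is itself wrapped by password-derived keys; the KDF, its parameters, and the cipher here are illustrative choices of mine, not 1Password’s:

```python
# Wrap one random master key under keys derived from both the old and the new
# master password, so that either password can still unlock the vault.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def derive_kek(password: str, salt: bytes) -> bytes:
    # Key-encryption key from a master password; the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)


def wrap(master_key: bytes, password: str) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    blob = AESGCM(derive_kek(password, salt)).encrypt(nonce, master_key, None)
    return {"salt": salt, "nonce": nonce, "blob": blob}


def unwrap(wrapped: dict, password: str) -> bytes:
    kek = derive_kek(password, wrapped["salt"])
    return AESGCM(kek).decrypt(wrapped["nonce"], wrapped["blob"], None)


# On master password change, keep both wrappings of the same master key around.
master_key = os.urandom(32)
wrappings = [wrap(master_key, "old master"), wrap(master_key, "new master")]
assert all(unwrap(w, p) == master_key
           for w, p in zip(wrappings, ["old master", "new master"]))
```

Dropping the old wrapping once the new password has had time to stick would return the vault to a single-password state.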

A second feature might be to let people know how long they’ve been using the same master password, and gently encourage them to change it. This one is tricky, mostly because I have no idea if it’s a good idea. Should you pick one super-strong master and use it for decades? Is there value in changing it now and again? Where could we seek evidence with which to test our instincts? What happens to long-term memory as people age? Does muscle memory cause people to revert their passwords? (I know I’ve done it.) We could use a pattern like the browser gold bar to prompt unobtrusively.
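
The nudge itself is tiny; the hard questions are the ones above. A minimal sketch, assuming the vault records when the master password was last set (the two-year threshold is an arbitrary placeholder, not a recommendation):

```python
from datetime import datetime, timedelta

# Placeholder threshold; the right interval is exactly the open question above.
NUDGE_AFTER = timedelta(days=2 * 365)


def should_nudge(master_set_on: datetime, now: datetime) -> bool:
    return now - master_set_on > NUDGE_AFTER


if should_nudge(datetime(2010, 1, 15), datetime.utcnow()):
    # Show an unobtrusive, dismissable gold-bar style banner, not a modal.
    print("You've had this master password for a while. Consider changing it.")
```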

A last element that might improve the way people use master passwords would be better browser integration. Having just gone to check, I was surprised how many site passwords my browser is storing. Almost all of them were low value, and now all of them are. But why do we have two places that can store passwords, especially when one is less secure than the other? A browser API that allows a password manager to say “I’ve got this one” would be a welcome improvement.

Studying these ideas and seeing which ones are invalidated by data gathering would be cool. Talking to people about how they use their password managers would also be interesting work. As Bonneau has shown, the quest to replace passwords is going to be arduous. Learning how to better live with what we have seems useful.

Gamifying Driving

…the new points system rates the driver’s ability to pilot the MINI with a sporty yet steady hand. Praise is given to particularly sprightly sprints, precise gear changes, controlled braking, smooth cornering and U-turns executed at well-judged speeds. For example, the system awards maximum Experience Points for upshifts carried out within the ideal rev range and in less than 1.2 seconds. Super-slick gear changes prompt a “Perfect change up” message on the on-board monitor, while a “Breathtaking U-turn” and a masterful touch with the anchors (“Well-balanced braking”) are similarly recognised with top marks and positive, MINI-style feedback.

For more, see “MINI Connected Adds Driving Excitement Analyser.”

Now, driving is the most dangerous thing most of us do on a regular basis. Most Americans don’t get any supplemental driving instruction after they turn 17. So maybe there’s actually something to be said for a system that incents people to drive better.

I can’t see any possible issues with a game pushing people towards things that are undesirable in the real world. I mean, I’m sure that before suggesting a U-turn, the game will use the car’s adaptive cruise control radar to see what’s around, even if the car doesn’t have one.

Guns, Homicides and Data

I came across a fascinating post at Jon Udell’s blog, “Homicide rates in context,” which starts out with this graph of 2007 data:

[Map: gun ownership rates and homicide rates, which look very different]

Jon’s post says more than I care to on this subject right now, and points out questions worth asking.

As I said in my post on “Thoughts on the Tragedies of December 14th,” “those who say that easy availability of guns drives murder rates must do better than simply cherry picking data.”

I’m not sure I believe that the “more guns, less crime” claim made by A.W.R. Hawkins is as causal as it sounds, but the map presents a real challenge to simplistic responses to tragic gun violence.

Privacy and Health Care

In my post on gun control and schools, I asserted that “I worry that reducing privacy around mental health care is going to deter people who need health care from getting it.”

However, I didn’t offer up any evidence for that claim. So I’d like to follow up with some numbers from a report that covers this in great detail, “The Case for Informed Consent” by Patient Privacy Rights.

So let me quote two related numbers from that report.

First, between 13 and 17% of Americans admit in surveys to hiding health information in the current system. That’s probably a lower bound, as we can expect some of the privacy-sensitive population will decline to be surveyed, and some fraction of those who are surveyed may hide their information hiding. (It’s information-hiding all the way down.)

Second, 1 in 8 Americans (12.5%) put their health at risk because of privacy concerns, including avoiding their regular doctor, asking their doctor to record a different diagnosis, or avoiding tests.

I’ll also note that these numbers relate to general health care, and the numbers may be higher for often-stigmatized mental health issues.

Proof of Age in UK Pilot

There’s a really interesting article by Toby Stevens at Computer Weekly, “Proof of age comes of age:”

It’s therefore been fascinating to be part of a new initiative that seeks to address proof of age using a Privacy by Design approach to biometric technologies. Touch2id is an anonymous proof of age system that uses fingerprint biometrics and NFC to allow young people to prove that they are 18 years or over at licensed premises (e.g. bars, clubs).

The principle is simple: a young person brings their proof of age document (Home Office rules stipulate this must be a passport or driving licence) to a participating Post Office branch. The Post Office staff member checks the document using a scanner, and confirms that the young person is the bearer. They then capture a fingerprint from the customer, which is converted into a hash and used to encrypt the customer’s date of birth on a small NFC sticker, which can be affixed to the back of a phone or wallet. No personal record of the customer’s details, document or fingerprint is retained either on the touch2id enrolment system or in the NFC sticker – the service is completely anonymous.
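
As a rough sketch of the logic the article describes (the function names and crypto choices are mine, not touch2id’s): hash the fingerprint template into a key, encrypt the date of birth under that key, and put only the resulting blob on the sticker.

```python
import hashlib
import os
from datetime import date

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def enrol(fingerprint_template: bytes, date_of_birth: date) -> bytes:
    # Run at the Post Office: the returned blob is all the sticker ever stores.
    key = hashlib.sha256(fingerprint_template).digest()
    nonce = os.urandom(12)
    blob = AESGCM(key).encrypt(nonce, date_of_birth.isoformat().encode(), None)
    return nonce + blob  # no name, document scan, or raw fingerprint retained


def is_over_18(fingerprint_template: bytes, sticker_blob: bytes, today: date) -> bool:
    # Run at the bar: decryption fails unless the enrolled finger is presented.
    key = hashlib.sha256(fingerprint_template).digest()
    nonce, ciphertext = sticker_blob[:12], sticker_blob[12:]
    dob = date.fromisoformat(AESGCM(key).decrypt(nonce, ciphertext, None).decode())
    return (today.year - dob.year,
            today.month - dob.month,
            today.day - dob.day) >= (18, 0, 0)
```

One practical wrinkle the article doesn’t address: fingerprint captures are noisy, so a scheme like this needs a stable template (or a fuzzy extractor) to derive the same key at the bar as at the Post Office.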

So first, I’m excited to see this. I think single-purpose credentials are important.

Second, I have a couple of technical questions.

  • Why a fingerprint versus a photo? People are good at recognizing photos, and a photo is a less intrusive mechanism than a fingerprint. Is the security gain sufficient to justify that? What’s the quantified improvement in accuracy?
  • Is NFC actually anonymous? It seems to me that NFC likely has a chip ID or something similar, meaning that the system is pseudonymous rather than anonymous.

I don’t mean to let the best be the enemy of the good. Not requiring ID for drinking is an excellent way to secure the ID system; see, for example, my BlackHat 2003 talk. But I think that support can be both rah-rah and a careful critique of what we’re building.

Shocking News of the Day: Social Security Numbers Suck

The firm’s annual Banking Identity Safety Scorecard looked at the consumer-security practices of 25 large banks and credit unions. It found that far too many still rely on customers’ Social Security numbers for authentication purposes — for instance, to verify a customer’s identity when he or she wants to speak to a bank representative over the telephone or re-set a password.

All banks in the report used some version of the Social Security number as a means of authenticating the customer, Javelin found. The pervasive use of Social Security numbers was surprising, given the importance of Social Security numbers as a tool for identity theft, said Phil Blank, managing director of security, risk and fraud at Javelin. (“Banks Rely Too Heavily On Social Security Numbers, Report Finds“, Ann Carrns, New York Times)

Previously here: “Social Security Numbers are Worthless as Authenticators” (2009), or “Bad advice on SSNs” (2005).

The output of a threat modeling session, or the creature from the bug lagoon

Wendy Nather has continued the Twitter conversation, which is now a set of blog posts. (My comments are in “threat modeling and risk assessment,” and hers are in “That’s not a bug, it’s a creature.”)

I think we agree on most things, but I sense a little semantic disconnect in some things that she says:

The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

I consider the word “bug” to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you’re doing whiteboard threat modeling, the output should be “things not to do going forward.”

As a result, you’re stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn’t have needed. I consider this a to-do list, not a bug list.
(“That’s not a bug, it’s a creature,” Wendy Nather)

I don’t disagree here, but want to take it one step further. I see a list of “things not to do going forward” and a “to-do list” as an excellent start for a set of tests to confirm that those things happen or don’t. So you file bugs, and those bugs get tracked and triaged and ideally closed as resolved or fixed when you have a test that confirms that they ain’t happening. If you want to call this something else, that’s fine; tracking and managing bugs can be too much work. The key to me is that the “things not to do” sink in, and the to-do list gets managed in some good way.
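
To make that concrete with a hypothetical example (the service, URL, and endpoint below are placeholders of mine): a finding like “the admin interface must not accept unauthenticated requests” can become a small test that runs in CI, so the bug can be closed as fixed and stay fixed.

```python
# Hypothetical regression test for a threat-model finding such as
# "the admin API must reject unauthenticated requests".
import requests

ADMIN_URL = "https://internal.example.test/admin/users"  # placeholder


def test_admin_api_rejects_unauthenticated_requests():
    response = requests.get(ADMIN_URL, timeout=5)
    # The "thing not to do" stays not-done only while this keeps passing.
    assert response.status_code in (401, 403)
```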

And again, I agree with her points about probability, and her point that probability is lurking in people’s minds is an excellent one, worth repeating:

the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don’t come out and say, “But who would want to do that?” or “Come on, we’re not a bank or anything,” they’ll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.

I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.

Gävle Goat Gambit Goes Astray

[Photo: the Gävle Goat, 2011]
It’s a bit of a Christmas tradition here at Emergent Chaos to keep you informed about the Gävle Goat. Ok, technically, our traditions seem hit and miss, but whaddaya want from a site with Chaos in the name? You want precision, read a project management blog. Project management blogs probably set calendar reminders to kick off a plan with defined stakeholders, success metrics and milestones to ensure high quality blog posts. Us, we sometimes randomly remember.

But, but! This year, we actually have a plan with 8×10 color Gantt charts with circles and arrows explaining how to set up a market to predict when the goat would burn.

We even have prizes.

Unfortunately, chaos (and flames) emerged, and the goat was burned before we set up the market.

You can read the full story of “Sweden’s Christmas goat succumbs to flames.”

Emergent Effects of Restrictions on Teenage Drivers

For more than a decade, California and other states have kept their newest teen drivers on a tight leash, restricting the hours when they can get behind the wheel and whom they can bring along as passengers. Public officials were confident that their get-tough policies were saving lives.

Now, though, a nationwide analysis of crash data suggests that the restrictions may have backfired: While the number of fatal crashes among 16- and 17-year-old drivers has fallen, deadly accidents among 18-to-19-year-olds have risen by an almost equal amount. In effect, experts say, the programs that dole out driving privileges in stages, however well-intentioned, have merely shifted the ranks of inexperienced drivers from younger to older teens.

“The unintended consequences of these laws have not been well-examined,” said Mike Males, a senior researcher at the Center on Juvenile and Criminal Justice in San Francisco, who was not involved in the study, published in Wednesday’s edition of the Journal of the American Medical Assn. “It’s a pretty compelling study.” (“Teen driver restrictions a mixed bag“)

As Princess Leia once said, “The more you tighten your grip, the more teenagers will slip through your fingers.”