Phishing and Clearances

Apparently, the CISO of the US Department of Homeland Security, one Paul Beckman, said:

“Someone who fails every single phishing campaign in the world should not be holding a TS SCI [top secret, sensitive compartmented information—the highest level of security clearance] with the federal government” (Paul Beckman, quoted in Ars Technica)

Now, I’m sure that defending against phishing attacks inside the government is a hard problem, and I don’t want to ignore that real frustration. At the same time, the GAO found that the government is having trouble hiring cybersecurity experts, and that was before the SF-86 leak.

Removing people’s clearances is one response. It’s not clear from the article whether these are phishing attacks (strictly defined, attempts to get usernames and passwords) or malware attached to the emails.

In either case, there are other fixes. The first would be multi-factor authentication for government logins. This was the subject of a push, and if agencies aren’t at 100%, maybe getting there is better than punitive action. Another fix could be an email client that makes phishing emails easier to spot. For example, the client could display the raw RFC-822 sender address, rather than the friendly display name, for any address it hasn’t previously sent email to. Agencies could provide password management software with built-in anti-phishing (checking the domain before submitting the password). They could, I presume, do other things which minimize the demands on the human being.
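To make that anti-phishing check concrete, here’s a minimal sketch in Python of the test a password manager might run before filling a password. The exact-origin match and the password_for helper are my illustrative choices, not any particular product’s behavior:

from urllib.parse import urlsplit

# A saved credential records the exact origin where it was created.
SAVED = {"https://login.example.gov": "hunter2"}

def password_for(url: str) -> str | None:
    """Return a password only when scheme and host match the saved origin."""
    parts = urlsplit(url)
    origin = f"{parts.scheme}://{parts.hostname}"
    return SAVED.get(origin)

# The real site matches; the lookalike gets nothing, so the human never
# has to notice that the domain is subtly wrong.
assert password_for("https://login.example.gov/sso") == "hunter2"
assert password_for("https://login.example.gov.attacker.net/sso") is None

The point of the design is that the software, not the person, carries the burden of reading the domain carefully.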

When Rob Reeder, Ellen Cram Kowalczyk and I created the “NEAT” guidance for usable security, we didn’t make “Necessary” first just because the acronym is neater that way; we put it first because the poor person is usually overwhelmed, and they deserve to have software make the decisions that software can make. Angela Sasse called this the ‘compliance budget,’ and it’s not a departmental budget, it’s a human one. My understanding is that those who work for the government already have enough things drawing on that budget. Making people anxious that they’ll lose their clearance and have to take a higher-paying private-sector job should not be one of them.

Towards a model of web browser security

One of the values of models is they can help us engage in areas where otherwise the detail is overwhelming. For example, C is a model of how a CPU works that allows engineers to defer certain details to the compiler, rather than writing in assembler. It empowers software developers to write for many CPU architectures at once. Many security flaws happen in areas the models simplify. For example, what if the stack grew away from the stack pointer, rather than towards it? The layout of the stack is a detail that is modeled away.

Information security is a broad industry, requiring and rewarding specialization. That intense specialization naturally results in expertise being built in silos, and tribal separation of knowledge creates more gaps. At the same time, there is a stereotype that generalists end up focused on policy, or “risk management,” where a lack of technical depth can hide. (That’s not to say that all risk managers are generalists, or that none of them has real technical depth.)

If we want to enable more security generalists, and we want those generalists to remain grounded, we need to make it easier to learn about new areas. Part of that is good models, part of that is good exercises that appropriately balance challenge to skill level, part of that is the availability of mentoring, and I’m sure there are other parts I’m missing.

I enjoyed many things about Michal Zalewski’s book “The Tangled Web.” One thing I wanted was a better way to keep track of who attacks whom, to help me contextualize and remember the attacks. But such a model is not trivial to create. This morning, motivated by a conversation between Trey Ford and Chris Rohlf, I decided to take a stab at drafting a model for thinking about where the trust boundaries exist.

The words which open my threat modeling book are “all models are wrong, some models are useful.” I would appreciate feedback on this model. What’s missing, and why does it matter? What attacks require showing a new element in this software model?

Browser security
[Update 1 — please leave comments here, not on Twitter]

  1. Fabio Cerullo suggests the layout engine. It’s not clear to me what additional threats can be seen if you add this explicitly, perhaps because I’m not an expert.
  2. Fernando Montenegro asks about network services such as DNS, which I’m adding, and also about shared trust (CA Certs), which overlaps with a question about supply chain from Mayer Sharma.
  3. Chris Rohlf points out the “web browser protection profile.”

I could be convinced otherwise, but I think that the supply chain is best addressed by a separate model. Having a secure installation and update mechanism is an important mitigation for many types of bugs, but this model is for thinking about the boundaries between the components.

In reviewing the protection profile, I see that it mentions the following threats:

  • Malicious updates: out of scope (supply chain)
  • Malicious/flawed add-on: out of scope (supply chain)
  • Network eavesdropping/attack: not showing all the data flows, for simplicity (is this the right call?)
  • Data access: local storage is shown

Also, the protection profile is 88 pages long, and hard to engage with. While it provides far more detail and allows me to cross-check the software model, it doesn’t help me think about interactions between components.

[End update 1]

On Language

I was irked to see a tweet “Learned a new word! Pseudoarboricity: the number of pseudoforests needed to cover a graph. Yes, it is actually a word and so is pseudoforest.” The idea that some letter combinations are “actual words” implies that others are “not actual words,” and thus, that there is some authority who may tell me what letter combinations I am allowed to use or understand.

Balderdash. Adorkable balderdash, but balderdash nonetheless.

As any student of Orwell shall recall, the test of language is its comprehensibility, not its adhesion to some standard. As an author, I sometimes hear from people who believe themselves to be authorities, or who believe that they may select for me authorities as to the meanings of words, and who wish to tell me that my use of the word “threat” threatens their understanding, that the preface’s explicit discussion of the many plain meanings of the word is insufficient, or that my sentences are too long, comma-filled, dash deficient or otherwise Oxfordless in a way which seems to cause them to feel superior to me in a way they wish to, at some length, convey.

In fact, on occasion, they are irked. I recommend to them, and to you, “You Are What You Speak.”

I wish them the best, and fall back, if you’ll so allow, to a comment from another master of language, speaking through one of his characters:

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

Conference Etiquette: What’s New?

So Bill Brenner has a great article on “How to survive security conferences: 4 tips for the socially anxious.” I’d like to stand by my 2010 guide to “Black Hat Best Practices,” and augment it with something new: a word on etiquette.

Etiquette is not about what fork you use (start from the outside, work in), or an excuse to make you uncomfortable because you forgot to call the Duke “Your Grace.” It’s a system of tools to help otherwise awkward social interactions go more smoothly.

We all meet a lot of people at these conferences, and there’s some truth behind the stereotype that people in technology are bad at “the people skills.” Sometimes, when we see someone, there will be recognition, but the name and full context don’t come rushing back. That’s an awkward moment, and it’s worth thinking about the etiquette involved.

When you know you’ve met someone and can’t recall the details, it’s rude to say “remind me who you are,” and so people will do a bunch of things to politely encourage reminders. For example, they’ll say “what’s new” or “what have you been working on lately?” Answers like “nothing new” or “same old stuff” are not helpful to the person who asked. This is an invitation to talk about your work. Even if you haven’t done anything new that’s ready to talk about, you can say something like “I’m still exploring the implications of the work I did on X” or “I’ve wrapped up my project on Y, and I’m looking for a new thing to go frozzle.” If all your work is secret, you can say “Oh, still at DoD, doing stuff for Uncle Sam.”

Whatever your answer will be, it should include something to help people remember who you are.

Why not give it a try this RSA?

BTW, you can get the best list of RSA parties where you can yell your answers to such questions at “RSA Parties Calendar.”

iOS Subject Key Identifier?

I’m having a problem where the “key identifier” displayed on my iOS device does not match the key fingerprint on my server. In particular, I run:

% openssl x509 -in keyfile.pem -fingerprint -sha1

and I get a 20-byte hash. I also have a 20-byte hash on my phone, but it is not that hash value. I am left wondering if this is a crypto usability fail, or an attack.

Should I expect the output of that openssl invocation to match the certificate details on iOS, or is that a different hash? What options to openssl should produce the result I see on my phone?

[Update: it also does not match the output, or a trivial subset of the output, of

% openssl x509 -in keyfile.pem -fingerprint -sha256

% openssl x509 -in keyfile.pem -fingerprint -sha512
]


[Update 2: iOS displays the “X509v3 Subject Key Identifier”, and you can ask openssl for that via -text, e.g., openssl x509 -in pubkey.pem -text. Thanks to Ryan Sleevi for pointing me down that path.]
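That explains the mismatch: the fingerprint is a hash of the entire DER-encoded certificate, while the Subject Key Identifier is conventionally (per RFC 5280’s common method) the SHA-1 hash of just the subject public key bits. A sketch using Python’s cryptography package, assuming the certificate is in keyfile.pem, shows both values:

# pip install cryptography
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.x509.oid import ExtensionOID

with open("keyfile.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# What openssl -fingerprint prints: a hash of the whole certificate.
fingerprint = cert.fingerprint(hashes.SHA1())

# What iOS displays: the Subject Key Identifier extension, as the CA wrote it.
ski = cert.extensions.get_extension_for_oid(
    ExtensionOID.SUBJECT_KEY_IDENTIFIER
).value.digest

# RFC 5280's common method: SHA-1 over the public key alone.
recomputed = x509.SubjectKeyIdentifier.from_public_key(cert.public_key()).digest

print("fingerprint:", fingerprint.hex())
print("SKI:        ", ski.hex())
# Equal only if the CA used the SHA-1 method; CAs may choose other methods.
print("SKI matches recomputed value:", ski == recomputed)

Both values are 20 bytes long, which is exactly the confusion I hit: two same-length hashes of two different things.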

Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with “think like an attacker”: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now I’ve been one-upped, and, depending on the audience, I have a new line to use.

Last week, on Veracode’s blog, Pete Chestna provided the perfect flip of “think like an attacker” to re-frame problems for security people. It’s “think like a developer.” If you, oh great security guru, cannot think like a developer, for heaven’s sake, stop asking developers to think like attackers.

What to do for randomness today?

In light of recent news, such as “FreeBSD washing Intel-chip randomness” and “alleged NSA-RSA scheming,” what advice should we give engineers who want to use randomness in their designs?

My advice for software engineers building things used to be to rely on the OS to get it right. That defers the problem to a small number of smart people. Is that still the right advice, despite recent news? The right advice is pretty clearly not that a normal software engineer building in Ruby on Rails or the like should go and roll their own. It also cannot be that they spend days wading through debates. Experts ought to be providing guidance on what to do.

Is the right thing to hash together the OS’s randomness and something else? If so, precisely what something else?
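To make the question concrete, here’s what “hash them together” might look like. The SHA-256 choice and the hardware RNG device file as the second source are purely illustrative assumptions, not a recommendation:

import hashlib
import os

def mixed_random_bytes(n: int = 32) -> bytes:
    """Mix the OS pool with a second, independent source."""
    assert 0 < n <= 32  # a single SHA-256 digest is 32 bytes
    os_part = os.urandom(n)
    try:
        # Illustrative second source: a hardware RNG exposed as a device file.
        with open("/dev/hwrng", "rb") as f:
            other_part = f.read(n)
    except OSError:
        other_part = b""  # fall back to the OS pool alone
    # An attacker must predict *both* inputs to predict the output, so
    # mixing can't make things worse (assuming the hash itself is sound).
    return hashlib.sha256(os_part + other_part).digest()[:n]

key = mixed_random_bytes(32)

Even this toy raises the questions an expert would need to answer: which second source, what to do when it fails, and how to avoid fooling yourself about independence.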

A Quintet of Facebook Privacy Stories

It’s common to hear that Facebook use means that privacy is over, or no longer matters. I think that perception is deeply wrong. It’s based on the superficial notion that people making different or perhaps surprising privacy tradeoffs are never aware of what they’re doing, or that they have no regrets.

Some recent stories that I think come together to tell a meta-story of privacy:

  • Steven Levy tweeted: “What surprised me most in my Zuck interview: he says the thing most on rise is ‘sharing with smaller groups.'” (Tweet edited from 140-speak). I think that sharing with smaller groups is a pretty clear expression that privacy matters to Facebook users, and that as Facebook becomes more a part of people’s lives, the way they use it will continue to mature. For example, it turns out:
  • “71% of Facebook Users Engage in ‘Self-Censorship’” describes a study of people typing into the Facebook status box and then not hitting post. In part this may be because people are ‘internalizing the policeman’ that Facebook imposes:
  • “Facebook’s Online Speech Rules Keep Users On A Tight Leash.” This isn’t directly a privacy story, but one important facet of privacy is our ability to explore unpopular ideas. If our ability to do so in the forum in which people talk to each other is inhibited by private contract and opaque rules, then our ability to explore and grow in the privacy which Facebook affords to conversations is inhibited.
  • Om Malik: “Why Facebook Home bothers me: It destroys any notion of privacy.” An interesting perspective; Facebook users still care about privacy, but they will have trouble articulating how, or taking action to preserve the privacy values they care about.

The Psychology of Password Managers

As I think more about the way people are likely to use a password manager, I think there are real problems with the way master passwords are set up. As I write this, I’m deeply aware that I’m risking going into a space of “it’s logical that” without proper evidence.

Let’s start from the way most people will likely come to a password manager. They’ll be in an exploratory mood, and while they may select a good password, they may also select a simple one that’s easy to remember. That password, initially, will not be protecting very much, and so people may be tempted to pick one that’s ‘appropriate’ for what’s being protected.

Over time, the danger is that they will not think to update that password and improve it, but their trust in the password manager will increase. As their trust increases, the number of passwords that they’re protecting with a weak master password may also increase.

Now we get to changing the master password. Assuming that people can find it, how often will someone wake up and say “hey, I should change my master password?” Changing a master password is also scary. Now that I’ve accumulated hundreds of passwords, what happens if I forget my new password? (As it turns out, 1Password makes weekly backups of my password file, but I wasn’t aware of that. Also, what happens to the old files if I change my master password? Am I now exposed for both? That’s ok in the case that I’m changing out of caution, less ok if I’m changing because I think my master was exposed.)

Perhaps there’s room for two features here: first, that on password change, people could choose to have either master password unlock things. (Encrypt the master key with keys derived from both the old & new masters. This is no less secure than having backups available, and may address a key element of psychological acceptability.) You’d have to communicate that this will work, and let people choose. User testing that text would be fascinating.
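Here’s a minimal sketch of that dual-unlock idea in Python, wrapping one long-lived vault key under both the old and new master passwords; the KDF parameters and helper names are mine, for illustration only:

# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def wrap_master_key(master_key: bytes, password: str) -> dict:
    """Encrypt (wrap) the vault's master key under one password."""
    salt, nonce = os.urandom(16), os.urandom(12)
    kdf = PBKDF2HMAC(hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    wrapping_key = kdf.derive(password.encode())
    return {"salt": salt, "nonce": nonce,
            "blob": AESGCM(wrapping_key).encrypt(nonce, master_key, None)}

# The vault's data is encrypted under master_key, which never changes.
master_key = os.urandom(32)

# On a password change, keep two wrapped copies for a grace period;
# either password now unlocks the same vault.
wrapped = [wrap_master_key(master_key, "old master password"),
           wrap_master_key(master_key, "new, stronger master password")]

As noted above, this is no weaker than keeping old backups around, since those backups already open with the old master password.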

A second feature might be to let people know how long they’ve been using the same master password, and gently encourage them to change it. This one is tricky mostly because I have no idea if it’s a good idea. Should you pick one super-strong master and use it for decades? Is there value to changing it now and again? Where could we seek evidence with which to test our instincts? What happens to long term memory as people age? Does muscle memory cause people to revert their passwords? (I know I’ve done it.) We could use a pattern like the gold bar to unobtrusively prompt.

A last element that might improve the way people use master passwords would be better browser integration. Having just gone to check, I was surprised how many sites my browser had stored passwords for. Almost all of them were low value, and all of them now are. But why do we have two places that can store this, especially when one is less secure than the other? A browser API that allows a password manager to say “I’ve got this one” would be a welcome improvement.

Studying these ideas and seeing which ones are invalidated by data gathering would be cool. Talking to people about how they use their password managers would also be interesting work. As Bonneau has shown, the quest to replace passwords is going to be arduous. Learning how to better live with what we have seems useful.

1Password & Hashcat

The folks at Hashcat have some interesting observations about 1Password. The folks at 1Password have a response, and I think there’s all sorts of fascinating lessons here.

The crypto conversations are interesting, but at the end of the day, a lot of security is unavoidably contributed by the strength of the master password. I’d like to offer up a simple contribution: AgileBits should make two non-cryptographic changes, in addition to any crypto changes.

These relate to the human end of the issue, and how real humans make decisions. That is, picking a master password is a one-time event; even if there’s a strength meter, factors of memorability, typability, etc. all come into play when the user selects a password while first installing 1Password.

Those human factors are not good for security, but I think they’re addressable.

First, the master password entry screens should display the same password strength meter that’s displayed everywhere else. It’s all well and good to discuss in a blog post that people need strong master passwords, but the software should give regular feedback about the strength of that master password. Displaying a strength meter each time it’s entered creates some small risk of information disclosure via shoulder-surfing, and adds pressure to make it stronger.
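As a sketch of the kind of feedback I mean, here’s a toy strength bucket based on character-class pool size and length. The thresholds are invented for illustration; a real product should use something better validated, in the spirit of zxcvbn:

import math
import string

def toy_strength(password: str) -> str:
    """Crude estimate: log2(pool size) times length, then bucketed."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += 32
    bits = math.log2(pool) * len(password) if pool else 0.0
    if bits < 40: return "weak"
    if bits < 64: return "fair"
    if bits < 96: return "good"
    return "strong"

print(toy_strength("hunter2"))                       # weak
print(toy_strength("correct horse battery staple"))  # strong, by this crude measure

Even a crude meter, shown at every master-password entry, delivers the regular feedback I’m asking for.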

Second, they should make it easier to change the master password. I looked around and couldn’t figure out how to do so in a few minutes. [Update: It’s in Preferences, Security. I thought I’d looked there; I may have missed it.]


If master passwords are so important, then it’s important for the software to help its customers get them right.

There’s an interesting link here to “Why Johnny Can’t Encrypt.” In that 1999 paper, Whitten and Tygar made the point that all the great crypto in PGP couldn’t protect its users if they didn’t make the right decisions, and making those decisions is hard.

In this case, the security of password vaults depends not only on the crypto, but also on the user interface. Figuring out the mental models that people have around password storage tools, and how those tools’ interface choices shape those mental models, is an important area and deserves lots of careful attention.