The Unexpected Meanings of Facebook Privacy Disclaimers

Paul Gowder has an interesting post over at Prawfblog, “In Defense of Facebook Copyright Disclaimer Status Updates (!!!).” He presents the facts:

…People then decide that, hey, goose, gander, if Facebook can unilaterally change the terms of our agreement by presenting new ones where, theoretically, a user might see them, then a user can unilaterally change the terms of our agreement by presenting new ones where, theoretically, some responsible party in Facebook might see them. Accordingly, they post Facebook statuses declaring that they reserve all kinds of rights in the content they post to Facebook, and expressly denying that Facebook acquires any rights to that content by virtue of that posting.

Before commenting on his analysis, which is worth reading in full, there’s an important takeaway, which is that even on Facebook, and even with Facebook’s investment in making their privacy controls more usable, people want more privacy while they’re using Facebook. Is that everyone? No, but it’s enough for the phenomenon of people posting these notices to get noticed.

His analysis instead goes to what we can learn about how people see the law:

To the contrary, I think the Facebook status-updaters reflect both cause for hope and cause for worry about our legal system. The cause for worry is that the system does seem to present itself as magic words. The Facebook status updates, like the protests of the sovereign citizens (but much more mainstream), seem to me to reflect a serious alienation of the public from the law, in which the law isn’t rational, or a reflection of our collective values and ideas about how we ought to treat one another and organize our civic life. Instead, it’s weaponized ritual, a set of pieces of magic paper or bits on a computer screen, administered by a captured priesthood, which the powerful can use to exercise that power over others. With mere words, unhinged from any semblance of autonomy or agreement, Facebook can (the status-updaters perceive) whisk away your property and your private information. This is of a kind with the sort of alienation that I worried about over the last few posts, but in the civil rather than the criminal context: the perception that the law is something done to one, rather than something one does with others as an autonomous agent as well as a democratic citizen. Whether this appears in the form of one-sided boilerplate contracts or petty police harassment, it’s still potentially alienating, and, for that reason, troubling.

This is spot-on. Let me extend it. These “weaponized rituals” are not just at the level of the law. Our institutions are developing antibodies to unscripted or difficult-to-categorize human participation, because engaging with it is expensive to deliver and inconvenient to the organization. We see this in the increasingly ritualized engagement with the courts. Despite regular attempts to make courts operate in plain English, it becomes a headline when “Prisoner wins Supreme Court case after submitting handwritten petition.” (Yes, the guy’s apparently otherwise a jerk, serving a life sentence.) Comments to government agencies are now expected to follow a form (and regular commenters learn to follow it, lest their comments engage the organizational antibodies on procedural grounds). When John Oliver suggested writing to the FCC, its systems crashed and the deadline had to be extended. Submitting Freedom of Information requests to governments, originally meant to increase transparency and engagement, has become so scripted that there are websites to track your requests and departmental failures to comply with the statutory timelines. We have come to accept that our legislators and regulators are looking out for themselves, and no longer ask them to focus on societal good. We are pleasantly surprised when they pay more than lip service to anything beyond their agency’s remit. In such a world, is it any surprise that most people don’t bother to vote?

Such problems are not limited to the law. We no longer talk to the man in the gray flannel suit; we talk to someone reading from a script he wrote. Our interactions with organizations are fenceposted by vague references to “policy.” Telephone script-readers are so irksome to deal with that we all put off making calls, because we know that even asking for a supervisor barely helps. (This underlies why rage-tweeting can actually help cut red tape; it summons a different department to work you through a problem created by intra-organizational shuffling of costs.) Sometimes the references to policy are not vague, but precise, and the precision itself is a cost-shifting ritual. By demanding a form that’s convenient to itself, an organization can simultaneously call for engagement while making that engagement expensive and frustrating. When engaging requires understanding the system as well as those who are immersed in it, engagement is discouraged. We can see this at Wikipedia, for example, as discussed in a blog post like “The Closed, Unfriendly World of Wikipedia.” Wikipedia has evolved a system for managing disputes, and that system is ritualized. Danny Sullivan doesn’t understand why they want him to jump through hoops and express himself in the way that makes it easy for them to process.

Such ritualized forms of engagement display commitment to the organization. This can inform our understanding of how social engineers work. Much of their success at impersonating employees comes from being fluent in the victim’s jargon, and in the 90s, much of what was published in 2600 was lists of Ma Bell’s acronyms or descriptions of operating procedures. People believe that only an employee would bother to learn such things, and so learning such things acts as an authenticator, in ways that infuriate technical system designers.

What Gowder calls rituals can also be viewed as protocols (or protocol messages). They are the formalized, algorithm-friendly, state-machine-altering messages, and thus we’ll see more of them.

Such growth makes systems brittle, as they focus on processing those messages and not others. Brittle systems break in chaotic and often ugly ways.
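To make the protocol framing concrete, here is a toy sketch (the states and message types are mine, purely illustrative): a handler that advances its state machine only for messages in the expected forms, and has no way to absorb anything unscripted.

```python
# Toy illustration (not from the post): an intake system that only advances
# its state machine for messages in the expected form. Everything else
# "engages the organizational antibodies" -- rejected on procedural grounds
# rather than understood.

RITUAL_FORMS = {
    ("new", "FILE_COMPLAINT"): "pending",
    ("pending", "SUBMIT_FORM_27B"): "under_review",
    ("under_review", "APPROVE"): "closed",
}

def handle(state: str, message: str) -> str:
    try:
        return RITUAL_FORMS[(state, message)]
    except KeyError:
        # No slot for unscripted participation: the system bounces the human
        # instead of adapting to them.
        raise ValueError(f"message {message!r} not recognized in state {state!r}")

print(handle("new", "FILE_COMPLAINT"))  # -> pending
try:
    handle("pending", "please just help me")
except ValueError as err:
    print(err)  # the unscripted request bounces
```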

So let me leave this with a question: how can we design systems which scale without becoming brittle, and also allow for empathy?

The Psychology of Password Managers

As I think more about the way people are likely to use a password manager, I think there are real problems with the way master passwords are set up. As I write this, I’m deeply aware that I’m risking going into a space of “it’s logical that” without proper evidence.

Let’s start from the way most people will likely come to a password manager. They’ll be in an exploratory mood, and while they may select a good password, they may also select a simple one that’s easy to remember. That password, initially, will not be protecting very much, and so people may be tempted to pick one that’s ‘appropriate’ for what’s being protected.

Over time, the danger is that they will not think to update that password and improve it, but their trust in the password manager will increase. As their trust increases, the number of passwords that they’re protecting with a weak master password may also increase.

Now we get to changing the master password. Assuming that people can find it, how often will someone wake up and say “hey, I should change my master password?” Changing a master password is also scary. Now that I’ve accumulated hundreds of passwords, what happens if I forget my new password? (As it turns out, 1Password makes weekly backups of my password file, but I wasn’t aware of that. Also, what happens to the old files if I change my master password? Am I now exposed for both? That’s ok in the case that I’m changing out of caution, less ok if I’m changing because I think my master was exposed.)

Perhaps there’s room for two features here: first, that on password change, people could choose to have either master password unlock things. (Encrypt the master key with keys derived from both the old & new masters. This is no less secure than having backups available, and may address a key element of psychological acceptability.) You’d have to communicate that this will work, and let people choose. User testing that text would be fascinating.
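Here is a minimal sketch of that first feature, assuming a vault encrypted under a random vault key that is wrapped separately under keys derived from each master password. The KDF, cipher, and function names are my own illustrative choices, not how 1Password or any other product actually stores its data.

```python
# Sketch of "either master password unlocks the vault" after a change.
# Assumptions (mine, not any product's documented design): the vault is
# encrypted under a random vault key; that key is wrapped separately under
# keys derived from each master password the user still wants to work.

import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for whatever KDF a real product uses.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)

def wrap_vault_key(vault_key: bytes, password: str) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    wrapped = AESGCM(derive_key(password, salt)).encrypt(nonce, vault_key, None)
    return {"salt": salt, "nonce": nonce, "wrapped": wrapped}

def unlock(wrappings: list, password: str) -> bytes:
    # Try each stored wrapping; either the old or the new master recovers
    # the same vault key.
    for entry in wrappings:
        try:
            key = derive_key(password, entry["salt"])
            return AESGCM(key).decrypt(entry["nonce"], entry["wrapped"], None)
        except Exception:
            continue
    raise ValueError("no stored wrapping matches this password")

vault_key = os.urandom(32)
wrappings = [wrap_vault_key(vault_key, "old master"),
             wrap_vault_key(vault_key, "new, stronger master")]

assert unlock(wrappings, "old master") == vault_key
assert unlock(wrappings, "new, stronger master") == vault_key
```

Once the user confirms they remember the new master, the old wrapping can simply be deleted.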

A second feature might be to let people know how long they’ve been using the same master password, and gently encourage them to change it. This one is tricky mostly because I have no idea if it’s a good idea. Should you pick one super-strong master and use it for decades? Is there value to changing it now and again? Where could we seek evidence with which to test our instincts? What happens to long term memory as people age? Does muscle memory cause people to revert their passwords? (I know I’ve done it.) We could use a pattern like the gold bar to unobtrusively prompt.

A last element that might improve the way people use master passwords would be better browser integration. Having just gone to check, I was surprised how many sites’ passwords my browser is storing. Almost all of them were low value, and all of them now are. But why do we have two places that can store this, especially when one is less secure than the other? A browser API that allows a password manager to say “I’ve got this one” would be a welcome improvement.

Studying these ideas and seeing which ones are invalidated by data gathering would be cool. Talking to people about how they use their password managers would also be interesting work. As Bonneau has shown, the quest to replace passwords is going to be arduous. Learning how to better live with what we have seems useful.

Gamifying Driving


…the new points system rates the driver’s ability to pilot the MINI with a sporty yet steady hand. Praise is given to particularly sprightly sprints, precise gear changes, controlled braking, smooth cornering and U-turns executed at well-judged speeds. For example, the system awards maximum Experience Points for upshifts carried out within the ideal rev range and in less than 1.2 seconds. Super-slick gear changes prompt a “Perfect change up” message on the on-board monitor, while a “Breathtaking U-turn” and a masterful touch with the anchors (“Well-balanced braking”) are similarly recognised with top marks and positive, MINI-style feedback.

For more, see “MINI Connected Adds Driving Excitement Analyser.”

Now, driving is the most dangerous thing most of us do on a regular basis. Most Americans don’t get any supplemental driving instruction after they turn 17. So maybe there’s actually something to be said for a system that incents people to drive better.

I can’t see any possible issues with a game pushing people towards things that are undesirable in the real world. I mean, I’m sure that before suggesting a U-turn, the game will use the car’s adaptive cruise control radar to see what’s around, even if the car doesn’t have one.

Guns, Homicides and Data

I came across a fascinating post at Jon Udell’s blog, “Homicide rates in context,” which starts out with this graph of 2007 data:

A map showing gun ownership and homicide rates, which look very different

Jon’s post says more than I care to on this subject right now, and points out questions worth asking.

As I said in my post on “Thoughts on the Tragedies of December 14th,” “those who say that easy availability of guns drives murder rates must do better than simply cherry picking data.”

I’m not sure I believe that the “more guns, less crime” claim made by A.W.R. Hawkins is as causal as it sounds, but the map presents a real challenge to simplistic responses to tragic gun violence.

Privacy and Health Care

In my post on gun control and schools, I asserted that “I worry that reducing privacy around mental health care is going to deter people who need health care from getting it.”

However, I didn’t offer up any evidence for that claim. So I’d like to follow up with some details from a report that talks about this in great detail, “The Case for Informed Consent” by Patient Privacy Rights.

So let me quote two related numbers from that report.

First, between 13 and 17% of Americans admit in surveys to hiding health information in the current system. That’s probably a lower bound, as we can expect some of the privacy-sensitive population will decline to be surveyed, and some fraction of those who are surveyed may hide their information hiding. (It’s information-hiding all the way down.)

Secondly, 1 in 8 Americans (12.5%) put their health at risk because of privacy concerns, including avoiding their regular doctor, asking their doctor to record a different diagnosis, or avoiding tests.

I’ll also note that these numbers relate to general health care, and the numbers may be higher for often-stigmatized mental health issues.

Proof of Age in UK Pilot

There’s a really interesting article by Toby Stevens at Computer Weekly, “Proof of age comes of age:”

It’s therefore been fascinating to be part of a new initiative that seeks to address proof of age using a Privacy by Design approach to biometric technologies. Touch2id is an anonymous proof of age system that uses fingerprint biometrics and NFC to allow young people to prove that they are 18 years or over at licensed premises (e.g. bars, clubs).

The principle is simple: a young person brings their proof of age document (Home Office rules stipulate this must be a passport or driving licence) to a participating Post Office branch. The Post Office staff member checks document using a scanner, and confirms that the young person is the bearer. They then capture a fingerprint from the customer, which is converted into a hash and used to encrypt the customer’s date of birth on a small NFC sticker, which can be affixed to the back of a phone or wallet. No personal record of the customer’s details, document or fingerprint is retained either on the touch2id enrolment system or in the NFC sticker – the service is completely anonymous.
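To make that flow concrete, here is a rough sketch of enrollment and the point-of-sale check as I read the description. The article doesn’t name the primitives, so SHA-256 and AES-GCM are my assumptions, and treating a fingerprint as a stable byte string is a simplification; a real system needs a fuzzy extractor or template matcher, since scans don’t reproduce bit-for-bit.

```python
# Sketch of the Touch2id flow as described above. SHA-256 and AES-GCM are my
# stand-ins (the article doesn't say what's used), and a fingerprint is
# treated here as a stable byte string for simplicity.

import hashlib
from datetime import date
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

NONCE = b"\x00" * 12  # tolerable only because each derived key encrypts one value

def key_from_fingerprint(template: bytes) -> bytes:
    # The hash of the fingerprint template becomes the encryption key.
    return hashlib.sha256(template).digest()

def enroll(template: bytes, date_of_birth: date) -> bytes:
    # The only thing written to the NFC sticker: the date of birth, encrypted
    # under the fingerprint-derived key. No central record is kept.
    key = key_from_fingerprint(template)
    return AESGCM(key).encrypt(NONCE, date_of_birth.isoformat().encode(), None)

def is_over_18(template: bytes, sticker_blob: bytes, today: date) -> bool:
    # At the bar: re-scan the finger, decrypt the sticker, check the age.
    key = key_from_fingerprint(template)
    dob = date.fromisoformat(AESGCM(key).decrypt(NONCE, sticker_blob, None).decode())
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

sticker = enroll(b"example-template", date(1992, 5, 1))
print(is_over_18(b"example-template", sticker, date(2012, 12, 1)))  # True
```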

So first, I’m excited to see this. I think single-purpose credentials are important.

Second, I have a couple of technical questions.

  • Why a fingerprint versus a photo? People are good at recognizing photos, and a photo is a less intrusive mechanism than a fingerprint. Is the security gain sufficient to justify that? What’s the quantified improvement in accuracy?
  • Is NFC actually anonymous? It seems to me that NFC likely has a chip ID or something similar, meaning that the system is pseudonymous rather than anonymous.

I don’t mean to let the best be the enemy of the good. Not requiring ID for drinking is an excellent way to secure the ID system. See, for example, my BlackHat 2003 talk. But I think that support can be both rah-rah and a careful critique of what we’re building.

Shocking News of the Day: Social Security Numbers Suck

The firm’s annual Banking Identity Safety Scorecard looked at the consumer-security practices of 25 large banks and credit unions. It found that far too many still rely on customers’ Social Security numbers for authentication purposes — for instance, to verify a customer’s identity when he or she wants to speak to a bank representative over the telephone or re-set a password.

All banks in the report used some version of the Social Security number as a means of authenticating the customer, Javelin found. The pervasive use of Social Security numbers was surprising, given the importance of Social Security numbers as a tool for identity theft, said Phil Blank, managing director of security, risk and fraud at Javelin. (“Banks Rely Too Heavily On Social Security Numbers, Report Finds“, Ann Carrns, New York Times)

Previously here: “Social Security Numbers are Worthless as Authenticators” (2009), or “Bad advice on SSNs” (2005).

The output of a threat modeling session, or the creature from the bug lagoon

Wendy Nather has continued the Twitter conversation, which is now a set of blog posts. (My comments are “threat modeling and risk assessment,” and hers are “That’s not a bug, it’s a creature.”)

I think we agree on most things, but I sense a little semantic disconnect in some things that she says:

The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

I consider the word “bug” to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you’re doing whiteboard threat modeling, the output should be “things not to do going forward.”

As a result, you’re stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn’t have needed. I consider this a to-do list, not a bug list.
(“That’s not a bug, it’s a creature.”, Wendy Nather)

I don’t disagree here, but want to take it one step further. I see a list of “things not to do going forward” and a “to-do list” as an excellent start for a set of tests to confirm that those things happen or don’t. So you file bugs, and those bugs get tracked and triaged and ideally closed as resolved or fixed when you have a test that confirms that they ain’t happening. If you want to call this something else, that’s fine; tracking and managing bugs can be too much work. The key to me is that the “things not to do” sink in, and the to-do list gets managed in some good way.
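For instance (the finding and all the names here are invented purely for illustration), a whiteboard item like “the admin API must reject unauthenticated requests” can be filed as a bug and closed against a test that keeps the mitigation from quietly regressing:

```python
# Illustrative only: a made-up threat-model finding ("the admin API must
# reject unauthenticated requests") turned into tests. handle_admin_request
# is a stand-in for whatever code actually serves the API.

def handle_admin_request(headers: dict) -> int:
    """Hypothetical handler; returns an HTTP status code."""
    if "Authorization" not in headers:
        return 401  # the mitigation the threat model asked for
    return 200

def test_admin_api_rejects_unauthenticated_requests():
    # The closing condition for the bug: if someone later removes the auth
    # check, this test fails and reopens the conversation automatically.
    assert handle_admin_request({}) == 401

def test_admin_api_accepts_authenticated_requests():
    assert handle_admin_request({"Authorization": "Bearer example-token"}) == 200
```

Whether the tracker calls that a bug or a to-do item matters less than the fact that the test exists and runs.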

And again, I agree with her points about probability, and her point that it’s lurking in people’s minds is an excellent one, worth repeating:

the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don’t come out and say, “But who would want to do that?” or “Come on, we’re not a bank or anything,” they’ll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.

I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.

Gävle Goat Gambit Goes Astray

Gävle Goat, 2011
It’s a bit of a Christmas tradition here at Emergent Chaos to keep you informed about the Gävle Goat. Ok, technically, our traditions seem hit and miss, but whaddaya want from a site with Chaos in the name? You want precision, read a project management blog. Project management blogs probably set calendar reminders to kick off a plan with defined stakeholders, success metrics and milestones to ensure high quality blog posts. Us, we sometimes randomly remember.

But, but! This year, we actually have a plan with 8×10 color Gantt charts with circles and arrows explaining how to set up a market to predict when the goat would burn.

We even have prizes.

Unfortunately, chaos (and flames) emerged, and the goat was burned before we set up the market.

You can read the full story in “Sweden’s Christmas goat succumbs to flames.”