PETS is one of my favorite conferences of the year.
It’s common to hear that Facebook use means that privacy is over, or no longer matters. I think that perception is deeply wrong. It’s based on the superficial notion that people making different or perhaps surprising privacy tradeoffs are never aware of what they’re doing, or that they have no regrets.
Some recent stories that I think come together to tell a meta-story of privacy:
- Steven Levy tweeted: “What surprised me most in my Zuck interview: he says the thing most on rise is ‘sharing with smaller groups.’” (Tweet edited from 140-speak). I think that sharing with smaller groups is a pretty clear expression that privacy matters to Facebook users, and that as Facebook becomes more a part of people’s lives, the way they use it will continue to mature. For example, it turns out:
- “71% of Facebook Users Engage in ‘Self-Censorship’” reports on a study of people typing into the Facebook status box and then not hitting post. In part this may be because people are ‘internalizing the policeman’ that Facebook imposes:
- “Facebook’s Online Speech Rules Keep Users On A Tight Leash.” This isn’t directly a privacy story, but one important facet of privacy is our ability to explore unpopular ideas. If our ability to do so in the forum in which people talk to each other is inhibited by private contract and opaque rules, then our ability to explore and grow in the privacy which Facebook affords to conversations is inhibited.
- Om Malik: “Why Facebook Home bothers me: It destroys any notion of privacy.” An interesting perspective. My take is that Facebook users still care about privacy, but will have trouble articulating how, or taking action to preserve the values of privacy they care about.
It would not be surprising if an article like “Firefox Cookie-Block Is The First Step Toward A Better Tomorrow” was written by a privacy advocate. And it may well have been. But this privacy advocate is also a former chairman of the Internet Advertising Bureau. (For their current position, see “Randall Rothenberg’s Statement Opposing Mozilla’s Intention to Block Third-Party Cookies.”)
But quoting from “the first step:”
First, the current promise of ultra-targeted audiences delivered in massively efficient ways is proving to be one of the more empty statements in recent memory. Every day more data shows that what is supposed to be happening is far from reality. Ad impressions are not actually in view, targeting data is, on average, 50% inaccurate by some measures (even for characteristics like gender) and, all too often, the use of inferred targeting while solving for low-cost clicks produces cancerous placements for the marketer. At the end of the day, the three most important players in the ecosystem – the visitor, the content creator and the marketer – are all severely impaired, or even negatively impacted, by these practices.
It’s a quick read, and fascinating when you consider the source.
Facebook’s new Graph search is a fascinating product, and I want to use it. (In fact, I wanted to use it way back when I wrote about “Single Serving Friend” in 2005.)
Facebook’s Graph Search will incent Facebook users to “dress” themselves in better meta-data, so as to be properly represented in all those new structured results. People will start to update their profiles with more dates, photo tags, relationship statuses, and, and, and…you get the picture. No one wants to be left out of a consideration set, after all. (“Facebook is no longer flat”, John Battelle)
But privacy rears its predictable head, not just in the advocacy world:
Independent studies suggest that Facebook users are becoming more careful about how much they reveal online, especially since educators and employers typically scour Facebook profiles.
A Northwestern University survey of 500 young adults in the summer of 2012 found that the majority avoided posting status updates because they were concerned about who would see them. The study also found that many had deleted or blocked contacts from seeing their profiles and nearly two-thirds had untagged themselves from a photo, post or check-in. (“Search Option From Facebook Is a Privacy Test”, NYTimes)
Perhaps a small set of people will, as Battelle suggests, slow down their use of ironic, silly, or outraged likes, but the fundamental problem is that such uses are situated in a context, and when those contexts overlap, their meanings are harder to tease out with algorithms. People engage with systems like Yelp or LinkedIn in a much more constrained way, and in that constraint, make a much simpler set of meanings. But even in those simple meanings, ‘the street finds its own uses for things.’ For example, I get the idea that this 5-star review may be about something more than the design on a shirt.
There’s another study on “Facebook Fatigue:”
Bored or annoyed by Facebook? You’re not alone. A majority of people surveyed by the Pew Internet and American Life Project said they had taken sabbaticals from the social network at some point, to escape the drama, or the tedium. (“Study: Facebook fatigue — it’s real”, Jennifer Van Grove, CNet)
When our nuanced and evolved social systems are overlaid with technology, it’s intensely challenging to get the balance of technology and social right. I think the Pew research shows that Facebook has its work cut out for it.
It’s Data Privacy Day, and there may be a profusion of platitudes. But I think what we need on data privacy day are more tools to let people take control of their privacy. One way to do that is to check your privacy settings. Of course, the way settings are arranged changes over time, and checking your settings regularly is a drain.
PrivacyFix is a Firefox & Chrome plugin that you might want to check out. It looks at your Facebook and G+ settings, and helps you fix things. It also helps you send opt-out email to web site privacy addresses, which is awesome.
Not having a Facebook or G+ account, I can’t really test it. I do find the model of a plugin that works when you’re on their site (versus local UI) to be confusing. But maybe I’m not their target audience. Anyway, I did want to refer back to my Lessons from Facebook’s Stock Slide, in which I talked about intent versus identity.
I don’t know if PrivacyFix’s estimates of revenue are accurate. But unless they’re off by 2 orders of magnitude for each of Facebook (under-estimating) and Google (over-estimating), then wow.
In my post on gun control and schools, I asserted that “I worry that reducing privacy around mental health care is going to deter people who need health care from getting it.”
However, I didn’t offer up any evidence for that claim. So I’d like to follow up with some details from a report that talks about this in great detail, “The Case for Informed Consent” by Patient Privacy Rights.
So let me quote two related numbers from that report.
First, between 13 and 17% of Americans admit in surveys to hiding health information in the current system. That’s probably a lower-bound, as we can expect some of the privacy sensitive population will decline to be surveyed, and some fraction of those who are surveyed may hide their information hiding. (It’s information-hiding all the way down.)
Second, 1 in 8 Americans (12.5%) put their health at risk because of privacy concerns, including avoiding their regular doctor, asking their doctor to record a different diagnosis, or avoiding tests.
I’ll also note that these numbers relate to general health care, and the numbers may be higher for often-stigmatized mental health issues.
There’s a really interesting article by Toby Stevens at Computer Weekly, “Proof of age comes of age:”
It’s therefore been fascinating to be part of a new initiative that seeks to address proof of age using a Privacy by Design approach to biometric technologies. Touch2id is an anonymous proof of age system that uses fingerprint biometrics and NFC to allow young people to prove that they are 18 years or over at licensed premises (e.g. bars, clubs).
The principle is simple: a young person brings their proof of age document (Home Office rules stipulate this must be a passport or driving licence) to a participating Post Office branch. The Post Office staff member checks the document using a scanner, and confirms that the young person is the bearer. They then capture a fingerprint from the customer, which is converted into a hash and used to encrypt the customer’s date of birth on a small NFC sticker, which can be affixed to the back of a phone or wallet. No personal record of the customer’s details, document or fingerprint is retained either on the touch2id enrolment system or in the NFC sticker – the service is completely anonymous.
So first, I’m excited to see this. I think single-purpose credentials are important.
Second, I have a couple of technical questions.
- Why a fingerprint versus a photo? People are good at recognizing photos, and a photo is a less intrusive mechanism than a fingerprint. Is the security gain sufficient to justify that? What’s the quantified improvement in accuracy?
- Is NFC actually anonymous? It seems to me that NFC chips likely have a unique chip ID or something similar, meaning that the system is pseudonymous rather than anonymous.
I don’t mean to let the best be the enemy of the good. Not requiring ID for drinking is an excellent way to secure the ID system. See for example, my BlackHat 2003 talk. But I think that support can be both rah-rah and a careful critique of what we’re building.
So as Facebook continues to trade at a little over half of their market capitalization of 3 months ago, I think we can learn a few very interesting things. My goal here is not to pick on Facebook, but rather to see what we can take away and perhaps apply elsewhere. I think there are three key lessons that we can take away:
- The Privacy Invasion Gnomes are Wrong
- Intent Beats Identity
- Maximizing your IPO returns may be a short term strategy
Let me start with the “Privacy Invasion Gnomes.” The short form of their strategy is:
- Gather lots of data on people
- ???
- Profit!
This is, of course, a refinement of the original Gnome Strategy. But what Facebook shows us is:
The Privacy Invasion Gnomes are Wrong
Gathering lots of data on people is a popular business strategy. It underlies a lot of the advertising that powers breathless reporting on the latest philosophical treatise by Kim Kardashian or Paris Hilton.
But what Facebook shows us is that just gathering data on people is actually insufficient as a business strategy, because knowing that someone is a Democrat or Republican just isn’t that valuable. It’s hard to capitalize on knowing that a user is Catholic or Mormon or Sikh. There’s a limit to how much money you make being able to identify gays who are still in the closet.
All of which means that the security industry’s love affair with “identity” is overblown. In fact, I’m going to argue that intent beats identity every time you can get it, and you can get it if you…keep your eye on the ball.
Intent beats Identity
The idea that if you know someone, you can sell them what they need is a powerful and intuitive one. We all love the place where everyone knows your name. The hope that you can translate it into an algorithm to make it scale is an easy hope to develop.
But many of the businesses that are raking in money hand over fist on the internet aren’t doing that. Rather, they’re focused on what you want right now. Google is all about that search box. And they turn your intent, as revealed by your search, into ads that are relevant.
Sure, there’s some history now, but fundamentally, there’s a set of searches (like “asbestos” and “car insurance”) that are like kittens thrown to rabid wolves. And each of those wolves will pay to get you an ad. Similarly, Amazon may or may not care who you are when they get you to buy things. Your search is about as direct a statement of intent as it gets.
The graph is from Seeking Alpha’s post, “Facebook: This Is The Bet You Are Making.”
So let me point out that two of these companies, Facebook and LinkedIn, have great, self-reinforcing identity models. Both use social pressure to drive self-representation on the site to match self-representation in various social situations. That’s pretty close to the definition of identity. (In any event, it’s a lot closer than anyone who talks about “identity issuance” can get.) And both make about 1/9th of what Google does on intent.
Generally in security, we use identification because it’s easier than intent, but what counts is intent. If a fraudster is logging into Alice’s account, and not moving money, security doesn’t notice or care (leaving privacy aside). If Alice’s husband Bob logs in as Alice, that’s a failure of identity. Security may or may not care. If things are all lovey-dovey, it may be fine, but if Bob is planning a divorce, or paying off his mistress, then it’s a problem. Intent beats identity.
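The Alice-and-Bob point above can be made concrete with a toy check. Everything here (the session shape, the payee list, the rule itself) is my own illustration, not any real bank's fraud logic: the check keys off what the session *does*, not whose credentials it presented.

```python
from dataclasses import dataclass


@dataclass
class Session:
    authenticated_as: str  # identity: whose credentials were presented
    actions: list          # intent: what the session actually does


def risky(session: Session) -> bool:
    """Toy intent-based check: flag any session that tries to move money
    to a payee this account has never paid before, regardless of how
    cleanly the identity check (login) succeeded."""
    known_payees = {"electric-co", "landlord"}  # hypothetical history
    for action in session.actions:
        if action.get("type") == "transfer" and action.get("payee") not in known_payees:
            return True
    return False


# A fraudster who logs in as Alice but only reads mail is invisible to this
# check, while Bob logging in as Alice to pay a new account gets flagged --
# exactly the "intent beats identity" distinction in the text.
benign = Session("alice", [{"type": "read", "item": "inbox"}])
suspect = Session("alice", [{"type": "transfer", "payee": "new-account", "amount": 5000}])
```

Note the deliberate gap: `authenticated_as` never appears in `risky`. Identity gets you into the account; intent is what the monitoring actually has to judge.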
Maximizing your IPO returns may be a short term strategy
The final lesson is from Don Dodge, “How Facebook maximized the IPO proceeds, but botched the process.” His argument is a lot stronger than the finger-pointing in “The Man Behind Facebook’s I.P.O. Debacle”. I don’t have a lot to add to Don’s point, which he makes in detail, so you should go read his piece. The very short form is that by pricing as high as they did, they made money (oodles of it) on the IPO, and that was a pretty short-term strategy.
Now, if Facebook found a good way to get intent-centered, and started making money on that, botching the IPO process would matter a lot less. But that’s not what they’re doing. The latest silliness is using your mobile number and email to help merchants stalk, er, find you on the site. That program represents a triumph of identity thinking over intent thinking. People give their mobile numbers to Facebook to help secure their accounts. Facebook then violates that intent by using the data for marketing.
So, I think that’s what we can learn from the Facebook stock slide. There may well be other lessons in an event this big, and I’d love to hear your thoughts on what they might be.
Every now and then, a headline helps us see the answer to the question “Will people ever pay for Privacy?”
Quoth the Paper of record:
The seclusion may be the biggest selling point of the estate belonging to Robert Hurst, a former executive at Goldman Sachs, which was just listed by Debbie Loeffler of the Corcoran Group for $65 million.
There’s more in the article.
I’ve observed a phenomenon in computer security: when you want something to be easy, it’s hard, and when you want the same thing to be hard, it’s easy. For example, hard drives fail seemingly at random, and it’s hard to recover their data. Yet when you want to destroy that data, it’s surprisingly hard.
I call this my law of perversity in computer security.
Today, Kashmir Hill brings a great example in “So which is it?”
Contradiction much? When it comes to the state of online privacy, the media tend to send mixed messages, but this is one of the more extreme examples I’ve seen.
It’s just perverse: identifying people online is hard when someone wants to rely on the data to protect kids, but easy (for marketing firms) when we’d prefer to remain private.