Democracy, Gunpowder, Literacy and Privacy

In an important sense, privacy is a modern invention. Medieval people had no concept of privacy. They also had no actual privacy. Nobody was ever alone. No ordinary person had private space. Houses were tiny and crowded. Everyone was embedded in a face-to-face community. Privacy, as idea and reality, is the creation of a modern bourgeois society. Above all, it is a creation of the nineteenth century. In the twentieth century it became even more of a reality. [p. 258]

In a time when amorphous “rights” to privacy seem to be multiplying like wildflowers, this is an important insight from Friedman. In my opinion, many of the creative privacy theories being concocted today are often based on false nostalgia about some forgotten time in the past when we supposedly all had our own little quiet spaces that were completely free from privacy intrusions. But as Friedman makes clear, this is largely a myth. It’s not to say that there aren’t legitimate issues out there today. But it’s important that we place modern privacy issues in a larger historical context and understand how many of today’s concerns pale in comparison to the problems of the past.

So writes Adam Thierer in “Privacy as ‘a modern invention’,” quoting Stanford law prof Lawrence Friedman.

Medieval people also didn’t have democracy, gunpowder or widespread literacy. That doesn’t make any of them the creation of “a modern bourgeois society.”

It’s a tad embarrassing, really.

Maybe, with more time, I could find more context.

How to Present

As I get ready to go to South Africa, I’m thinking a lot about presentations. I’ll be delivering a keynote and a technical/managerial talk at the ITWeb Security Summit. The keynote will be on ‘The Crisis in Information Security’ and the technical talk on Microsoft’s Security Development Lifecycle.


As I think about how to deliver each of these talks, I think about what people will want from each. From a keynote, there should be a broad perspective, aiming to influence the agenda and conversation for the day, the conference and beyond. For a technical talk, I’m starting from “why should we care” and sharing experiences in enough depth that the audience gets practical lessons they can apply to their own work.

Part of being a great presenter is watching others present, and seeing what works for them and what doesn’t. And part of it is watching yourself (painful as that is). Another part is listening to the masters. And in that vein, Garr Reynolds has a great post “Making presentations in the TED style:”

TED has earned a lot of attention over the years for many reasons, including the nature and quality of its short-form conference presentations. All presenters lucky enough to be asked to speak at TED are given 18-minute slots maximum (some are for even less time such as 3- and 6-minute slots). Some who present at TED are not used to speaking on a large stage, or are at least not used to speaking on their topic with strict time restraints. TED does not make a big deal publicly out of the TED Commandments, but many TED presenters have referenced the speaking guidelines in their talks and in their blogs over the years (e.g., Ben Saunders).

Ironically, he closes with:

Bill Gates vs. Bill Gates
Again, you do not have to use slides at TED (or TEDx, etc.), but if you do use slides, think of using them more in the style of Bill Gates the TEDster rather than Bill Gates the bullet point guy from the past. As Bill has shown, everyone can get better at presenting on stage.

[Image: Bill Gates’ old bullet-point slides next to his TED-style slides]

I’ll be doing some of both. As both Reynolds and Bill understand, there are better and worse styles. Different styles work well for different people. There’s also a time and a place for each good style of presentation. Understanding yourself, your audience and your goals is essential to doing any presentation well.

Of course, style only matters if you’re a professional entertainer, or have something interesting to say. I try hard to be in the latter category.

If you’re in Johannesburg, come see both talks. I’m looking forward to meeting new people, and would love to hear your feedback on either talk, either on the content or the style.

TSA Kills Bad Program!

The government is scrapping a post-Sept. 11, 2001, airport screening program because the machines did not operate as intended and cost too much to maintain.

The so-called puffer machines were deployed to airports in 2004 to screen randomly selected passengers for bombs after they cleared the standard metal detectors. The machines take 17 seconds to check a passenger and can analyze particles as small as one-billionth of a gram. (“An Airport Screening Program Is Killed,” New York Times)

Via Froomkin. I hear they’re investing the saved money in a porcine catapult.

[Update: It turns out that TSA will not be allowing pigs to fly. Their implanted ID chips are not government issued, and when challenged, they do not demonstrate a willingness to cooperate with the TSA officials. Sorry.]

Web 2.0 and the Federal Government

This looks interesting, especially in light of the launch of data.gov:

The Obama campaign—and now the Obama administration—blazed new trail in the use of Web 2.0 technology, featuring videos, social networking tools, and new forms of participatory and interactive technology. This event will feature government, technology, and new media leaders in addressing the special challenges and opportunities of doing Web 2.0 in the federal government. Please join this exciting discussion moderated by American Progress Senior Fellow Peter Swire, who also served as counsel to the New Media team for change.gov and the revision of whitehouse.gov.

Panelists:

  • Tim O’Reilly, Founder and CEO of O’Reilly Media, Inc.
  • Alec Ross, Senior Advisor for Innovation to Secretary of State Hillary Clinton, charged with blending new information technologies with diplomacy.
  • Faiz Shakir, Research Director for The Progress Report and ThinkProgress.org at Center for American Progress Action Fund.

It will be webcast; details at Web 2.0 and the Federal Government.

Giving Circles and de Tocqueville

There was an interesting story on NPR the other day about “giving circles.” It’s about groups of people getting together, pooling their money, investigating charities together, and then giving money.

The story mentions how the increasing bureaucratization* of fund-raising leads to groups whose involvement is “I write them a cheque each year.”

It also mentions that the folks doing the investigation end up volunteering their time and getting involved:

“Even if we don’t feel like we’re giving away a lot of money, I think it’s just building in commitment that’ll expand to other things that we do,” she says. “So beyond our involvement in this giving circle, I think we’re all probably going to be more engaged with our communities overall.” (“Donors Turn To Giving Circles As Economy Drops“)

de Tocqueville would be shocked, shocked to discover that actually speaking to other people would increase civil involvement. But as we bureaucratize, background check and formalize every bit of volunteering, more and more people choose to stay away.

*The spell checker knows that word. How sad!

Can’t Win? Re-define losing the TSA Way!

We were surprised last week to see that the GAO has issued a report certifying that, “As of April 2009, TSA had generally achieved 9 of the 10 statutory conditions related to the development of the Secure Flight program and had conditionally achieved 1 condition (TSA had defined plans, but had not completed all activities for this condition).”

Surprised, that is, until we saw how the GAO had defined (re-defined?) those statutory conditions in ways very different from what we thought they meant, or what we think Congress thought they meant.

Read the details at “GAO moves the goalposts to ‘approve’ Secure Flight.”

Just Landed in…

Just Landed: Processing, Twitter, MetaCarta & Hidden Data:

This got me thinking about the data that is hidden in various social network information streams – Facebook & Twitter updates in particular. People share a lot of information in their tweets – some of it shared intentionally, and some of it which could be uncovered with some rudimentary searching. I wondered if it would be possible to extract travel information from people’s public Twitter streams by searching for the term ‘Just landed in…’.

[Image: the “Just Landed” visualization, mapping tweeted arrivals on a globe]

This is a cool emergent effect: people chaotically announcing themselves on Twitter and a MetaCarta service that gives you longitude/latitude (and a bunch of other bits) all coming together to make something really cool looking.
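Out of curiosity, here’s a rough sketch in C of how simple the text-matching half could be. This is entirely hypothetical and mine, not the artist’s code; the genuinely hard part, turning a place name into longitude/latitude, is what MetaCarta does and isn’t shown:

#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: pull whatever follows "Just landed in " out of a
 * tweet, stopping at the first bit of punctuation. Geocoding the result
 * is left to a service like MetaCarta. */
static int extract_destination(const char *tweet, char *out, size_t outlen)
{
    static const char marker[] = "Just landed in ";
    const char *hit = strstr(tweet, marker);
    size_t i = 0;

    if (hit == NULL)
        return 0;                           /* not a "just landed" tweet */

    hit += strlen(marker);
    while (hit[i] != '\0' && hit[i] != '.' && hit[i] != ',' &&
           hit[i] != '!' && i + 1 < outlen) {
        out[i] = hit[i];
        i++;
    }
    out[i] = '\0';                          /* crude cut at punctuation or end */
    return 1;
}

int main(void)
{
    char place[128];

    if (extract_destination("Just landed in Johannesburg. Long flight!",
                            place, sizeof(place)))
        printf("destination: %s\n", place); /* prints "Johannesburg" */
    return 0;
}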

Via Information Aesthetics

Need ID to see Joke ID card

A bunch of folks sent me links to this Photography License, which also found its way to BoingBoing:

[Image: Matt’s novelty “Photography License” card]

Now, bizarrely, if you visit that page, Yahoo wants you to show your (Yahoo-issued) ID to see (Matt’s self-issued) ID.

It’s probably a bad idea to present a novelty version of a DHS document to law enforcement.

It’s a worse idea to live in a country where someone sees enough harassment of photographers to design such a thing so well.

The very worst idea, however, is to discover pressure to send the whole thing down the memory hole.

Twitter Bankruptcy and Twitterfail

If you’re not familiar with the term email bankruptcy, it’s admitting publicly that you can’t handle your email, and people should just send it to you again.

A few weeks ago, I had to declare twitter bankruptcy. It just became too, too much. I’ve been meaning to blog about it since, but things have just been too, too much. Shortly after I did, The Guardian published their hilarious April Fools article about shifting to an all-twitter format. I found it especially funny because they made several digs at Stephen Fry, the very person who drove me to twitter bankruptcy.

In Mr. Fry’s case, he’s literate, funny, worth listening to, and prolific. These traits are horrible in a twitter user, as his content dominates the page over all the other tweets. The problem was twofold: I couldn’t keep up with Mr. Fry alone, and yet having removed him, a graph of the interestingness quotient of my twitter page resembled an economic report.

I discussed this with some other friends, one of whom is my favorite twitterer, because he has some magic scraper that puts his tweets into an RSS feed on his blog and I can read them at my leisure.

I opined that what I really need from twitter is streams separated into separate pages with metadata about how many unread tweets there are from each person I follow, and a way to look at them in a block. That way, I can look at Mr. Fry’s tweets, note that there’s a Mersenne prime number of them unread, and catch up.

In short, I want twitter to be either an RSS feed or an email box. Either is fine.

One of my friends said that perhaps what Mr. Fry should do is put his tweets together into paragraphs, the paragraphs into essays, and then collect the essays in a book.

She also pointed out that twitter is perhaps the first Internet medium which does not level social hierarchies, but creates and reinforces them. Who follows whom, who is attentively watching whose tweets, and so on recreate a high-school-like social structure.

This brings us to #twitterfail, the current brou-ha-ha about a change in twitter rules in which direct messages only go to people who are following people who are following those who are following — someone.

The #twitterfail channel is a bunch of people retweeting that they think this is a bad idea. There is apparently no channel for retweeting if you think it’s a good idea.

Valleywag thinks it is a good idea in their article, “Finally, Twitter Learns When to Shut Up,” pointing out a Nielsen report that 60% of new twitter users drop out after signing up. This might be a way to cut down the noise level for people who are newbies, according to Valleywag.

Others see it as a way to further reinforce the status hierarchies. The brash and ever entertaining Prokovy Neva says:

What [various twitterati, none of whom is Stephen Fry] all have in common is an overwhelming desire to have lots of “friends” who follow them, but they want them to be loyal, positive, and not talk back, except to warble about how they’ve read their books or gush about how wonderful they are.

What they definitely, definitely DO NOT like is when people they aren’t following talk back to them using @. They hate it. It gets them into a frenzy.

I think they’re both right. I think that the sheer noise level of twitter combined with a wretched UI makes it unusable for people who have a long multitasking quantum. My twitter page goes back a mere seven hours, and Beaker has only said one thing (I hope he’s not sick). If I go to a long meeting or get on an airplane, I’ve lost context.

There are two behavioral feedback loops I see. Sometimes one twitters because one is twittering, which drives more twittering. The other is that one is not twittering because one is not twittering, which drives not wanting to look at twitter.

Cutting down on the noise level would help people get into twittering, but not as much as Valleywag thinks. Twitter’s systems and subsystems are power-law driven (which is the same thing as saying they’re human status hierarchies). If you’re a newbie, noise isn’t really the problem; the problem is figuring out who you want to follow and wondering why you should bother tweeting into an empty room.

Prokovy Neva is right, too. The social circles that twitter creates are lopsided, and power-law in scale (which is why the whale is up so much). An even playing field for replies means that people who have lots of followers but follow few others not only don’t see messages from people they don’t know, but can have a nice civil public conversation with the few people they follow without having to know about the riff-raff. Right now, the downside of having lots of followers is that you can be on the receiving end of that power law. Over the long haul, that will lead to self-monitoring on the tweets, having tweets handled by assistants (which already goes on), or just giving up on it all.

I suspect that twitter will reverse this change (if they haven’t already) at least in part because there’s no channel of retweeting for people who like the change. Perhaps most of all, I think they realize that reinforcing the hierarchies to that degree would indeed make the twitter fad fade even faster than it would otherwise.

That fade seems inevitable in itself, since it’s now been reported, to the surprise of no one, that spammers are gleaning email addresses from tweets in real time, as well as using twitter trending to drive uptake. That tweeting opens one up to spam will tend to put the brakes on it.

Camera thanks!

An enormous thank you to everyone who offered advice on what camera to get.

I ended up with a Canon Rebel after heading to a local camera store and having a chance to play with the stabilization features. It may end up on ebay, but I’m confident I’ll get high-quality pictures. Whether they’re great, of course, depends on my skills.

I hesitate to even ask, but what one book have you seen most help someone learn how to take great pictures? I want something that’s focused on how to orient & frame shots, not something on the technical side. The camera knows more about that than I ever plan to. So what one book would you suggest?

I’m thinking about the Rebel for Dummies book, since it covers both technical and artistic aspects. What book have you seen help others more?

I wrote code for a botnet today

There’s a piece of software out there trying to cut down on blog spam, and it behaves annoyingly badly. It’s bad in a particular way that drives me up the wall. It prevents reasonable behavior, and barely blocks bad behavior of spammers.

In particular, it stops all requests that lack an HTTP Referer: header. All requests. Not just POST to the comment CGI, which might appear to make sense. Not just POST. All requests.

There are two problems with this. First, it assumes a static attacker, which is a poor description of spammers. Second, it has high auxiliary costs.

So I wrote 28 characters of code for a spamming botnet. This assumes that there’s a variable “site” holding the URL being spammed, and that the line gets inserted in the header-printing block:

printf("Referer: %s\n", site);

That’s it. I just broke the “Bad Behavior” plugin, because that’s what the comment link referer will look like. (If I were to put in site, path, that would be about 4 lines of code. Mostly because it’s been long enough since I’ve dealt with C string handling I’d have to look up how to split the string and drop the last component.) I’d link to it, but you know, I can’t see the site.
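For the curious, here’s roughly what the split-the-string-and-drop-the-last-component version might look like. Like the one-liner, this is a thought-experiment sketch, not contributed code, and the helper name and buffer size are mine:

#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: "post_url" is the URL of the entry being spammed.
 * Printing it directly is the one-liner above; truncating at the last '/'
 * yields a front-page-looking referer instead. Edge cases (URLs with no
 * path component, very long URLs) are ignored. */
static void print_referer(const char *post_url)
{
    char buf[1024];

    strncpy(buf, post_url, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    char *slash = strrchr(buf, '/');        /* last '/' in the URL */
    if (slash != NULL)
        *slash = '\0';                      /* drop the last component */

    printf("Referer: %s/\n", buf);
}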

Incidentally, I didn’t contribute that code anywhere. It’s a thought experiment, which Bad Behavior’s author should have done years ago.

Good security design takes into account obvious next steps by attackers. It considers impacts on privacy and liberty. Missing those, security designs are at best acceptable, and at worst oppressive.

[Update: I realized I’m violating my own advice here, by saying “that’s wrong.” So let me be prescriptive: Don’t use the referer header for security. Just don’t. Don’t even try. You might try to redesign blog posting to take into account a particular blog post, but that would require breaking commenting directly from the front page of a blog.] [Update 2: added link to WMV video around ‘my own advice’.]

My Wolfram Alpha Demo

I got the opportunity a couple days ago to get a demo of Wolfram Alpha from Stephen Wolfram himself. It’s an impressive thing, and I can sympathize a bit with them on the overblown publicity. Wolfram said that they didn’t expect the press reaction, which I both empathize with and cast a raised eyebrow at.

There’s no difference, as you know, between an arbitrarily advanced technology and a rigged demo. And of course anyone who’s spent a lot of time trying to create something grand is going to give you the good demo. It’s hard to know what the difference is between a rigged demo and a good one.

The major problem right now with Alpha is the overblown publicity. The last time I remember such gaga effusiveness it was over the Segway before we knew it was a scooter.

Alpha has had to suffer through not only its creator’s overblown assessments, but reviews from neophiles whose minds are so open that their occipital lobes face forward.

My short assessment is that it is the anti-Wikipedia and makes a huge splat on the fine line between clever and stupid, extending equally far in both directions. What they’ve done is create something very much like the computerized idiot savant. As much as that might sound like criticism, it isn’t. Alpha is very, very, very cool. Jaw-droppingly cool. And it is also incredibly cringe-worthily dumb. Let me give some examples.

Stephen gave us a lot of things that it can compute and the way it can infer answers. You can type “gdp france / germany” and it will give you plots of that. A query like “who was the president of brazil in 1930” will get you the right answer and a smear of the surrounding Presidents of Brazil as well.

It also has lovely deductions it makes. It geolocates your IP address and so if you ask it something involving “cups” it will infer from your location whether that should be American cups or English cups and give you a quick little link to change the preference on that. Very, very, clever.

It will also use your location to make other nice deductions. Stephen asked it a question about the population of Springfield, and since he is in Massachusetts, it inferred that he meant Springfield, Massachusetts, and there’s a little pop-up with a long list of other Springfields as well. It’s very, very clever.

That list, however, got me the first glimpse of the stupid. I scanned the list of Springfields and realized something. Nowhere in that list appeared the Springfield of The Simpsons. Yeah, it’s fictional, and yeah that’s in many ways a relief, but dammit, it’s supposed to be a computational engine that can compute any fact that can be computed. While that Springfield is fictional, its population is a fact.

The group of us getting the demo got tired of Stephen’s enthusiastic typing in this query and that query. Many of them are very cool but boring. Comparing stock prices, market caps, changes in portfolio whatevers is something that a zillion financial web sites can do. We wanted more. We wanted our queries.

My query, which I didn’t ask because I thought it would be disruptive, is this: Which weighs more, a pound of gold or a pound of feathers? When I get to drive, that will be the first thing I ask.

The answer, in case you don’t know this famous question, is a pound of feathers: gold is weighed in troy pounds of roughly 373 grams, while feathers are weighed in ordinary avoirdupois pounds of about 454 grams. Amusingly, Google gets it on the first link. Wolfram emphasizes that Alpha computes and is smart as opposed to Google just dumbly searching and collating.

I also didn’t really need to ask, because one of the other people asked Alpha to plot swine flu in the US, and it came up with — nil. It knows nothing about swine flu. Stephen helpfully suggested, “I can show you colon cancer instead” and did.

And there it is, the line between clever and stupid, and being on both sides of it. Alpha can’t tell you about swine flu because the data it works on is “curated,” meaning they have experts vet it. I approve. I’m a Wikipedia-sneerer, and I like an anti-mob system. However, having experts curate the data means that there’s nothing about the Springfield that pops to most people’s minds (because it’s pop culture) nor anything about swine flu. We asked Stephen about sources, and specifically about Wikipedia. He said that they use Wikipedia for some sorts of folk knowledge, like knowing that The Big Apple is a synonym for New York City but not for many things other than that.

Alpha is not a Google-killer. It is never going to compute everything that can be computed. It’s a humorless idiot savant that has an impressive database (presently some ten terabytes, according to the Wolfram folks), and its Mathematica-on-steroids engine gives a lot of wows.

On the other hand, as one of the people in my demo pointed out, there’s nothing beyond a spew of facts. Another of our queries was “17/hr” and Alpha told us what that is in terms of weekly, monthly, and yearly salary. It did not tell us the sorts of jobs that pay $17 per hour, which would be useful not only to people who need a job, but to socioeconomic researchers. It could tell us that, and very well might rather soon. But it doesn’t.

Alpha is an impressive tool that I can hardly wait to use (supposedly it goes online perhaps this week). It’s something that will be a useful tool for many people and fills a real need. We need an anti-Wikipedia that has only curated facts. We need a computational engine that uses deductions and heuristics.

But we also need web resources that know about a fictional Springfield, and resources that can show you maps of the swine flu.

We also need tech reviewers who have critical faculties. Alpha is not a Google-killer. It’s also not likely as useful as Google. The gushing, open-brained reviews do us and Alpha a disservice by uncritically watching the rigged demo and refusing to ask about its limits. Alpha may straddle the line between clever and stupid, but the present reviewers all stand proudly on stupid.

Camera advice bleg

I’m thinking about maybe getting a new camera.

Before I say anything else let me say that I understand that sensor size and lens rule all else, and that size does matter, except when it’s megapixel count, which is a glamour for the foolish.

That said, I’m off to South Africa in a few weeks, and while my Canon S410 was a fine camera 5 years ago, I’m thinking that for a trip like this with a safari in the middle, I should get something that sucks less. I don’t really care about GPS or interchangeable lenses. (Yes, I should. You’re so right. But I don’t want to be bothered. I’m not a great photographer.)

I don’t want to have a full-bore SLR, as nice as they are. They’re too big, and I won’t carry one enough to really justify it. So if I want to spend less than a thousand bucks (ideally under $500) and have something that doesn’t require its own carrying case or manual, what’s the current hotness?

Are any of these “micro four-thirds” cameras available? Worth risking? Worth overcoming my “don’t want to bother with lenses” reluctance? Should I look at something like a Nikon Coolpix P6000? Or is it worth just getting a new PhD mini camera?

Ban Whole Body Imaging

[Image: backscatter scan of a woman]

Congressman Jason Chaffetz has introduced legislation seeking a ban on Whole-Body Imaging machines installed by the Transportation Security Administration in various airports across America. Describing the method as unnecessary to securing an airplane, Congressman Chaffetz stated that the new law was to “balance the dual virtues of safety and privacy.” The TSA recently announced plans to make the scanners, which capture a detailed picture of travelers stripped naked, the default screening device at all airport security checkpoints. Whole Body imaging (Backscatter X-Ray) technology was introduced as a tool for screening some air travelers.

Read “Congressman Seeks End of Whole Body Imaging at Airports” for the links.

These scanners won’t make us more secure. Our wallets and our dignity can’t afford these scanners. Kudos to Congressman Chaffetz.

As an aside, searching for this image (which we’ve used before) required turning off Google’s “SafeSearch.” If Google won’t show that image, why should you be forced to pose for it?

Previously: “TSA to Look Through Your Clothes” and “TSA Violates Your Privacy, Ties themselves in Little Knot of Lies”