In “Is Your Religion Your Financial Destiny?,” the New York Times presents the following chart of income versus religion:
Note that it doesn’t include the non-religious, whom one might think an interesting group as a control. Now, you might think that’s because the non-religious aren’t in the data set. But you’d be wrong. The data set includes atheists, agnostics and “nothing in particular.” That last category counts 6.3% of the population as “secular unaffiliated” and another 5.8% as “religious unaffiliated.” Now, 6.3% is more than all non-Christian religions combined, and many of those non-Christian religions are shown in the graphic. Atheists, at 1.6%, are almost as numerous as Jews, a major focus of the article, and four times as numerous as Hindus.
Now, you might also argue that atheists were left out because there were too few in the sample (as opposed to too few in the demographic data). But there were 439 atheists in the sample, and 251 Reform Jews.
Chris Wysopal pointed out that atheists land after Hindus and Jews for 75k+ incomes.
All the news that’s fit to print, indeed.
My talk at Black Hat this year was “Elevation of Privilege, the Easy Way to Get Started Threat Modeling.” I covered the game, why it works and where games work. The link will take you to the PPTX deck.
As I get ready to go to South Africa, I’m thinking a lot about presentations. I’ll be delivering a keynote and a technical/managerial talk at the ITWeb Security Summit. The keynote will be on ‘The Crisis in Information Security’ and the technical talk on Microsoft’s Security Development Lifecycle.
As I think about how to deliver each of these talks, I think about what people will want from each. From a keynote, there should be a broad perspective, aiming to influence the agenda and conversation for the day, the conference and beyond. For a technical talk, I’m starting from “why should we care” and sharing experiences in enough depth that the audience gets practical lessons they can apply to their own work.
Part of being a great presenter is watching others present, and seeing what works for them and what doesn’t. And part of it is watching yourself (painful as that is). Another part is listening to the masters. And in that vein, Garr Reynolds has a great post “Making presentations in the TED style:”
TED has earned a lot of attention over the years for many reasons, including the nature and quality of its short-form conference presentations. All presenters lucky enough to be asked to speak at TED are given 18-minute slots maximum (some are for even less time such as 3- and 6-minute slots). Some who present at TED are not used to speaking on a large stage, or are at least not used to speaking on their topic with strict time restraints. TED does not make a big deal publicly out of the TED Commandments, but many TED presenters have referenced the speaking guidelines in their talks and in their blogs over the years (e.g., Ben Saunders).
Ironically, he closes with:
Bill Gates vs. Bill Gates
Again, you do not have to use slides at TED (or TEDx, etc.), but if you do use slides, think of using them more in the style of Bill Gates the TEDster rather than Bill Gates the bullet point guy from the past. As Bill has shown, everyone can get better at presenting on stage.
I’ll be doing some of both. As both Reynolds and Bill understand, there are better and worse styles, and different styles work well for different people. There’s also a time and a place for each good style of presentation. Understanding yourself, your audience and your goals is essential to doing any presentation well.
Of course, style only matters if you’re a professional entertainer, or have something interesting to say. I try hard to be in the latter category.
If you’re in Johannesburg, come see both talks. I’m looking forward to meeting new people, and would love to hear your feedback on either talk, either on the content or the style.
I got the opportunity a couple days ago to get a demo of Wolfram Alpha from Stephen Wolfram himself. It’s an impressive thing, and I can sympathize a bit with them on the overblown publicity. Wolfram said that they didn’t expect the press reaction, which I both empathize with and cast a raised eyebrow at.
There’s no difference, as you know, between an arbitrarily advanced technology and a rigged demo. And of course anyone who’s spent a lot of time trying to create something grand is going to give you the good demo. It’s hard to know the difference between a rigged demo and a good one.
Alpha has had to suffer through not only its creator’s overblown assessments, but reviews from neophiles whose minds are so open that their occipital lobes face forward.
My short assessment is that it is the anti-Wikipedia and makes a huge splat on the fine line between clever and stupid, extending equally far in both directions. What they’ve done is create something very much like the computerized idiot savant. As much as that might sound like criticism, it isn’t. Alpha is very, very, very cool. Jaw-droppingly cool. And it is also incredibly cringe-worthily dumb. Let me give some examples.
Stephen showed us a lot of things it can compute and ways it can infer answers. You can type “gdp france / germany” and it will give you plots of that. A query like “who was the president of brazil in 1930” will get you the right answer, and a smear of the surrounding Presidents of Brazil as well.
It also has lovely deductions it makes. It geolocates your IP address and so if you ask it something involving “cups” it will infer from your location whether that should be American cups or English cups and give you a quick little link to change the preference on that. Very, very, clever.
It will also use your location to make other nice deductions. Stephen asked it a question about the population of Springfield, and since he is in Massachusetts, it inferred that he meant that Springfield, with a little pop-up offering a long list of other Springfields as well. It’s very, very clever.
That list, however, got me the first glimpse of the stupid. I scanned the list of Springfields and realized something. Nowhere in that list appeared the Springfield of The Simpsons. Yeah, it’s fictional, and yeah that’s in many ways a relief, but dammit, it’s supposed to be a computational engine that can compute any fact that can be computed. While that Springfield is fictional, its population is a fact.
The group of us getting the demo got tired of Stephen enthusiastically typing in query after query. Many of them are very cool, but boring. Comparing stock prices, market caps, and changes in portfolio whatevers is something that a zillion financial web sites can do. We wanted more. We wanted our own queries.
My query, which I didn’t ask because I thought it would be disruptive, is this: Which weighs more, a pound of gold or a pound of feathers? When I get to drive, that will be the first thing I ask.
The answer, in case you don’t know this famous riddle, is a pound of feathers. Amusingly, Google gets it on the first link. Wolfram emphasizes that Alpha computes and is smart, as opposed to Google just dumbly searching and collating.
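The riddle turns on units: gold is weighed in troy pounds (5,760 grains) while feathers use avoirdupois pounds (7,000 grains), and the grain is common to both systems. A quick sketch of the arithmetic:

```python
# The grain is shared by both systems: 1 grain = 0.06479891 g exactly.
GRAIN_G = 0.06479891

troy_pound_g = 5760 * GRAIN_G         # gold: 12 troy ounces of 480 grains
avoirdupois_pound_g = 7000 * GRAIN_G  # feathers: 16 ounces of 437.5 grains

print(f"pound of gold:     {troy_pound_g:.2f} g")      # 373.24 g
print(f"pound of feathers: {avoirdupois_pound_g:.2f} g")  # 453.59 g
print("feathers are heavier:", avoirdupois_pound_g > troy_pound_g)
```

So a pound of feathers outweighs a pound of gold by about 80 grams, which is exactly the kind of unit-aware deduction a computational engine ought to make.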
I also didn’t really need to ask, because one of the other people asked Alpha to plot swine flu in the US, and it came up with — nil. It knows nothing about swine flu. Stephen helpfully suggested, “I can show you colon cancer instead” and did.
And there it is, the line between clever and stupid, and being on both sides of it. Alpha can’t tell you about swine flu because the data it works on is “curated,” meaning they have experts vet it. I approve. I’m a Wikipedia-sneerer, and I like an anti-mob system. However, having experts curate the data means that there’s nothing about the Springfield that pops to most people’s minds (because it’s pop culture) nor anything about swine flu. We asked Stephen about sources, and specifically about Wikipedia. He said that they use Wikipedia for some sorts of folk knowledge, like knowing that The Big Apple is a synonym for New York City, but not for much else.
Alpha is not a Google-killer. It is never going to compute everything that can be computed. It’s a humorless idiot savant with an impressive database (presently some ten terabytes, according to the Wolfram folks), and its Mathematica-on-steroids engine gives a lot of wows.
On the other hand, as one of the people in my demo pointed out, there’s not anything beyond a spew of facts. Another of our queries was “17/hr” and Alpha told us what that is in terms of weekly, monthly, yearly salary. It did not tell us the sort of jobs that pay 17 per hour, which would be useful not only to people who need a job, but to socioeconomic researchers. It could tell us that, and very well might rather soon. But it doesn’t.
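The “17/hr” conversion Alpha performs is straightforward arithmetic. A minimal sketch, assuming the conventional 40-hour week and 52-week year:

```python
def salary_from_hourly(rate, hours_per_week=40, weeks_per_year=52):
    """Convert an hourly wage to weekly, monthly, and yearly gross pay."""
    weekly = rate * hours_per_week
    yearly = weekly * weeks_per_year
    monthly = yearly / 12
    return weekly, monthly, yearly

weekly, monthly, yearly = salary_from_hourly(17)
print(f"${weekly:,.0f}/week, ${monthly:,.2f}/month, ${yearly:,.0f}/year")
# → $680/week, $2,946.67/month, $35,360/year
```

The interesting question Alpha doesn’t answer, as noted above, is which jobs actually pay that rate; that would require labor-market data, not just multiplication.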
Alpha is an impressive tool that I can hardly wait to use (supposedly it goes on line perhaps this week). It’s something that will be a useful tool for many people and fills a much-needed niche. We need an anti-Wikipedia that has only curated facts. We need a computational engine that uses deductions and heuristics.
But we also need web resources that know about a fictional Springfield, and resources that can show you maps of the swine flu.
We also need tech reviewers who have critical faculties. Alpha is not a Google-killer. It’s also not likely as useful as Google. The gushing, open-brained reviews do us and Alpha a disservice by uncritically watching the rigged demo and refusing to ask about its limits. Alpha may straddle the line between clever and stupid, but the present reviewers all stand proudly on stupid.
In re-reading my blog post on twittering during a conference I realized it sounded a lot more negative than I’d meant it to.
I’d like to talk about why I see it as a tremendous positive, and will be doing it again.
First, it engages the audience. There’s a motive to pay close attention and share what you hear. They’re using their laptops for good, not evil.
Second, it multiplies the attention to the talk. The talk was standing room only, but the room held fewer than 100 people. The people who tweeted had 5,300 followers. Now, that’s total followers, not unique (does anyone have an easy way to calculate that?). It’s also unlikely that many of them were reading Twitter or read backscroll, but it seems like an OK guess to say that 200-500 people saw some mention of the talk on Twitter.
Third, it promotes the audience from passive to engaged (although that wasn’t a problem for my audience, I’ve seen it in other talks). They’re no longer just listeners; they’re interpreting, quoting, and generating additional content as we engage around the ideas in the talk.
What chaotically emerged is larger than my talk. It’s a conversation.
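On the “total versus unique followers” question: given the actual follower lists (which the Twitter API can supply), the unique count is just the size of the set union. The lists below are invented for illustration:

```python
# Hypothetical follower-ID sets for three people who tweeted the talk.
followers = {
    "alice": {1, 2, 3, 4},
    "bob":   {3, 4, 5},
    "carol": {5, 6},
}

total = sum(len(f) for f in followers.values())  # double-counts overlap
unique = len(set().union(*followers.values()))   # exact deduplicated reach

print(f"total follower count: {total}")   # 9
print(f"unique followers:     {unique}")  # 6
```

With only the per-account counts (as in the 5,300 figure), the total is an upper bound on unique reach; the true number depends on how much the audiences overlap.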
A few weeks back, Pistachio twittered about How to Present While People are Twittering. I picked it up, and with the help of Quine, was getting comments from Twitter as I spoke. It was a fun experiment, and it’s pretty cool to be able to go back and look at the back channel.
[Update: I think there was more positive than I really touched on, and have written a new post all atwitter about why it was useful and why I’ll do it again.]
I don’t think that it was hugely successful for this talk for two reasons. First, my talk, “The Crisis In Information Security” is a ‘big idea’ talk, based on my book “The New School of Information Security,” written with Andrew Stewart.
A big idea talk has to cover a lot of ground quickly, rather than dwell on a lot of specifics. You can see some of that in the feedback: Rich Mogull comments “I said some of that a year ago,” and B.K. Delong asks “can we have more details?” The other reason it didn’t work is that there was a lot of in-room interaction. Questions came out during the talk rather than being tweeted.
Still, it was pretty cool, and I’ll definitely try it again.
So, here are the #sourceadam comments in chronological order. My comments are in italic.
stormtrooperguy: All tweets from the current panel @sourceboston will be tagged with #sourceadam so that they can reference it later.
leune: getting ready for #sourceadam
quine: Actually, #SOURCEAdam or #AdamSOURCE.
bkdelong: At Adam Shostack’s talk #sourceadam
securitytwits: RT @quine — if you’re in @adamshostack’s presentation at #SOURCEBoston, please use #adamsource OR #sourceadam for feedback/questions.
quine: Admittedly, I am a buffoon. I chose “#adamsource”, then announced “#sourceadam” — hence the use of both
Beaker: I believe I just saw a nerd version of Sysyphus — better than a LOLcat #sourceadam #sourceboston
Beaker: Who was the last idiot infected with Blaster? We just saw the last guy who had Smallpox…. #sourceadam #sourceboston
mortman: @Beaker Well lolcats are beneath Adam #sourceadam #sourceboston
mortman: Milliken Oildrop Experiment lead to powerpoint. #sourceadam #sourceboston
mortman: @alexsotirov @k8em0 has an apple and the rest of us don’t. #sourceadam #sourceboston
k8em0: @alexsotirov we lack cred in infosec because we lack data #sourceboston #sourceadam
hackertweets: k8em0: @alexsotirov we lack cred in infosec because we lack data #sourceboston #sourceadam
k8em0: @mortman @alexsotirov it’s a pear. Observation is not the best way to gather data.#sourceboston #sourceadam
mortman: @k8em0 @alexsotirov Proof that independent confirmation is a necessary part of the scientific method. #sourceboston #sourceadam
bkdelong: @k8em0 At least not VISUAL observation #sourceadam #sourceboston
mortman: #sourceadam #sourceboston Re: learning from experience. Is that another way of saying “the plural of anecdote is not data”?
stormtrooperguy: @sourceboston : the #sourceadam panel is packed, standing room only.
Beaker: Adam, you have a lot of “questions.” You have any answers? #sourceadam
I think I do. If not, you have a refund coming. (Hoff bought the book on his Kindle as we were setting up. I promised him a refund if he doesn’t like it.)
bkdelong: So @adamshostack what data is being collected that is good? What do we NEED to be collecting? #sourceadam #sourceboston
bkdelong: Specifically what KPIs and what metrics / risk calculations can we be doing to help us make the case to management #sourceadam @sourceboston
What does your management care about? You’re going to need rich sets of data to find the comparatives you need.
mortman: #sourceadam #sourceboston RE: What is the biggest pain point? We talk about professional hackers, users, random loss, why not vendors?
mortman: #sourceadam #sourceboston Why not more blame for the folks who produce crap?
k8em0: it’s hard to categorize what causes security customer pain (hax0rs, kiddiez, RBN, nation-states) #sourceboston #sourceadam
rybolov: #sourceadam can you use the phrase “self-licking ice cream cone” jus for me? k thnx.
Self licking ice cream cone
hallam: @SOURCEAdam have you heard of the GENI initiative, any thoughts?
mortman: @hallam geni.net? or something else #sourceadam #sourceboston
I haven’t, thanks! Checking it out now.
bkdelong: The @datalossdb does not cover all breaches and too many reporters cite it as true total # of breaches – bad. Needs correction #sourceadam
BK: True, but as the Beatles said, it’s getting better all the time.
k8em0: #sourceboston #sourceadam Hype is too big for your breaches – they don’t cause all customers to flee & you to go bankrupt.
mortman: #sourceboston #sourceadam Mmmmm tylenol.
bkdelong: Tylenol Recall #sourceboston #sourceadam (expand)
bkdelong: The @datalossdb certainly best out there but there are lots of unreported/non-FOIA’d breaches not in there. Still a lot more. #sourceadam
bkdelong: More on Black Swan theory – http://tinyurl.com/2ngwkw (expand) (Yes, wikipedia for ease sake) #sourceadam #sourceboston
I was pretty dismissive of “Black Swan” hype. I stand by that, and don’t think we should allow fear of a black swan out there somewhere to prevent us from studying white ones and generalizing about what we can see.
rmogull: @bkdelong #sourceadam #sourceboston I wrote an article on that over a year ago (Tylenol/disclosure): http://bit.ly/Q5Ko8 (expand)
Great stuff, Rich!
mortman: #sourceboston #sourceadam Check out “research revealed” tracke at RSA.
k8em0: #sourceboston #sourceadam wallow in the data, follow @datalossdb for example.
bsmithsweeney: #sourceadam reminded of “The Quixotic Quest for Invulnerability” http://tinyurl.com/5equfo (expand), on protection vs. recovery #sourceboston
k8em0: #sourceboston #sourceadam you point out methodological flaws w/the passwords4chocolate experiment. 45% of women likely lied 4 choc.
It would be fun to find out how many lied, and how many didn’t care. I suspect we’d be depressed, but the truth is supposed to set you free, not make you happy.
bsmithsweeney: Really enjoyed #sourceadam talk @sourceboston. Definitely worth grabbing the slides/video.
Thanks bsmithsweeney, and thank you to everyone who participated in the talk and the backchannel!
Blogging about your own presentations is tough. Some people post their slides, but slides are not essays, and often make little sense without the speaker.
I really like what Chris Hoff did in his blog post, “Security and Disruptive Innovation Part I: The Setup.”
I did something similar after “Security Breaches Are Good for You: My Shmoocon talk.” I posted a PDF of the slides. I think the PDF is less effective, because you can’t skim it, search it, or excerpt it as easily as with Hoff’s HTML version.
Nice work, Chris!
At the FIRST conference in Seville, Spain, I delivered a presentation about “Data on Data Breaches” that Adam and I put together. The slides, with the notes I made to act as “cue cards” for me, are available as a large PDF file on a slow web server.
The main points I tried to make are:
That with the availability of breach reports direct from states with central reporting, such as New York, it is possible to measure part of our ignorance when we rely solely on published breach reports — even the best available sources (such as Attrition’s DLDOS) undercount breaches dramatically, and are biased toward larger incidents.
That we are still at the leading edge of an explosion of information, and that we should not draw hasty conclusions until more facts are in.
That, as Emil Faber might put it, “Knowledge is Good” and is not that painful to provide.
And finally, primary materials such as breach reports are useful artifacts not only because they tell us dry facts in a standardized format (but that IS nice), but also because the notices themselves are interesting evidence of how firms talk to their customers about a difficult topic.
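One way to put a number on “measuring our ignorance” when two independent breach lists overlap is the Lincoln-Petersen capture-recapture estimate. This is my illustration, not a method from the talk, and the counts below are invented:

```python
def lincoln_petersen(n1, n2, overlap):
    """Estimate a total population size from two independent samples.

    n1: breaches in list A (e.g., a state's central reports)
    n2: breaches in list B (e.g., a public dataset like Attrition's DLDOS)
    overlap: breaches appearing in both lists
    """
    if overlap == 0:
        raise ValueError("no overlap between samples: estimate is unbounded")
    return n1 * n2 / overlap

# Invented counts, for illustration only.
est = lincoln_petersen(n1=200, n2=120, overlap=60)
print(f"estimated total breaches: {est:.0f}")  # → 400
```

The gap between the estimate and either list’s size is a rough measure of how many breaches neither source captured, which is precisely the undercounting the comparison with state reports exposes.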
I’ll be writing more on this subject now that I have received the fourth batch of breach reports from my pals in New York, and my other pals in New Hampshire have made such materials available on-line.
Stuart E. Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer have written a paper which examines the behavior of persons doing on-line banking under various experimentally-manipulated conditions.
The paper is getting some attention, for example in the New York Times and at Slashdot.
What Schechter et al. find is that despite increasingly alarming indicators that something may be amiss, subjects frequently provided their passwords to an on-line banking site with which they were at least somewhat familiar. The absence of indicators that SSL is in use, and the absence of an image-based site-authenticity indicator (such as SiteKey, although the authors do not mention which bank was involved in the study) were almost entirely ignored by subjects. Only a relatively dire IE7-style warning page seems to dissuade the subjects, and even then over a third logged in even when their real credentials, at their real bank, were involved.
The press is focusing on the SiteKey angle. The hook seems to be this: even when this highly-touted anti-phishing feature is absent (and a suspicious text box left in its place), people merrily supply their passwords. Therefore, SiteKey doesn’t help.
Another aspect of this study is worthy of note. One of the experimental treatments was whether subjects used their own account credentials, or whether — as instructed by the researchers — they played the role of a fictitious person using credentials supplied by the researchers (with and without a lecture about security).
Unshockingly enough, people behaved “more securely” (my words, not the study’s) when their real bank accounts were on the line.
So, even if we know that people act more securely when they have some skin in the game, how do we explain it when they nonetheless do seemingly dumb things?
This is where I want to see some follow-up work. If the SiteKey-style images aren’t there, and if people have been warned to look for them, what were they thinking when they just clicked on by? Why were they thinking that? Why weren’t they thinking precisely what they had been told to think — namely that this could be an attempt at fraud? When a blatant message was presented, the equivalent of a blinking neon sign, it helped, but why did a third of people disregard it? Did they read it? Was it “pop-up fatigue” at work? Do people not care about SSL indicators because they’ve seen one too many “secure login” pages that collect creds via HTTP-based forms and simply POST them via SSL? Is it that all this web security stuff is indistinguishable from magic (hard to believe of the young Harvard-area types that were the subjects of this study, but hey, maybe they were visiting from Somerville or Boston)?
These are important questions, and more and more is riding on them.
I haven’t seen any figures on losses due to phishing that I can remember offhand, but I strongly suspect that they are on the rise. Moreover, as operating systems and web browsers become more secure, it’s increasingly important for businesses like banks to understand the human side of these technologies because that’s where fraudsters will take aim. What people think when they interact with computers, the mental models they use, how they react to cues presented to them by applications and web sites, and how all of these mix with things they already know (or believe) about sites (“It must be reliable — it’s FooBarCoLand National Bank”) are things that will increase in importance.
I’m eager to learn more.
(Credit where credit’s due: 0, 1)