One of the problems with being quoted in the press is that even your mom writes to you with questions like “And what’s wrong with ‘think like an attacker’? I think it’s good advice!”
Thanks for the confidence, mom!
Here’s what’s wrong with “think like an attacker”: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear. Some smart folks like Yoshi Kohno are trying to teach it. (I haven’t seen a report on how it’s gone.)
Even if Yoshi is succeeding, it’s hard to teach a way of thinking. It takes a quarter or more at a university. I’m not claiming that ‘think like an attacker’ isn’t teachable, but I will claim that most people don’t know how. What’s worse, the way we say it, we sometimes imply that you should be embarrassed if you can’t think like an attacker.
Lately, I’ve been challenging people to think like a professional chef. Most people have no idea how a chef spends their days, or how they approach a problem. They have no idea how to plan a menu, or how to cook a hundred or more dinners in an hour.
We need to give advice that can be followed. We need to teach people how to think about security. Repeating the “think like an attacker” mantra may be useful to a small class of well-oriented experts. For everyone else, it’s like saying “just ride the bike!” rather than teaching them step-by-step. We can and should do better at understanding people’s capabilities, giving them advice to match, and training and education to improve.
Understanding people’s capabilities, giving them advice to match and helping them improve might not be a bad description of all the announcements we made yesterday.
In particular, the new threat modeling process is built on something we expect an engineer will know: their software design. It’s a better starting point than “think like a civil engineer.”
[Update: See also my follow-up post, "The Discipline of 'think like an attacker'."]
Steve Lipner and I were on the road for a press tour last week. In our work blog, he writes:
Last week I participated in a “press tour” talking to press and analysts about the evolution of the SDL. Most of our past discussions with press and analysts have centered on folks who follow security, but this time we also spoke with publications and analysts who write for software development organizations. I was struck by the extent to which the folks who focus on development have been grappling with many of the issues about developing secure software that we’ve focused on here at Microsoft.
The announcements are here. I am particularly excited about the third announcement, the availability of the SDL Threat Modeling Tool v3.
Our publisher sent me a copy of Raffael Marty‘s Applied Security Visualization. This book is absolutely worth getting if you’re designing information visualizations. The first and third chapters are a great short introduction to constructing information visualizations, and by themselves are probably worth the price of the book. They’re useful far beyond security. The chapter I didn’t like was the one on insiders, which I’ll discuss in detail later in the review.
In the intro, the author accurately scopes the book to operational security visualization. The book is deeply applied: it’s full of graphs, along with the data that underlies them. Marty also lays out the challenge that most people know about either visualization or security, and sets out to introduce each to the other. In the New School of Information Security, Andrew and I talk about these sorts of dichotomies and the need to overcome them, so I really liked how Marty called this one out explicitly. One of the challenges of the book is that the first few chapters flip between audiences. As long as readers understand that they’re building foundations, it’s not bad. For example, security folks can skim chapter 2, visualization people chapter 3.
Chapter 1, Visualization, covers the whats and whys of visualization, then delves into some of the theory underlying how to visualize. The only thing I’d add to chapter 1 is a more explicit mention of Tufte’s idea of small multiples.

Chapter 2, Data Sources, lays out many of the types of data you might visualize. There’s quite a bit of “run this command” and “this is what the output looks like,” which will be more useful to visualization people than to security people.

Chapter 3, Visually Representing Data, covers the many types of graphs, their properties, and when each is appropriate. He goes from pie and bar charts to link graphs, maps, and tree maps, and closes with a good section on choosing the right graph. I was a little surprised to see figure 3-12 be a little heavy on non-data ink (the data-ink ratio being a concept that Marty discusses in chapter 1), and I’m confused by the box for DNS traffic in figure 3-13: it seems that the median and average are both below the minimum packet size. These are really nits; it’s a very good chapter, and I wish more of the people who designed the interfaces I use regularly had read it.

Chapter 4, From Data to Graphs, covers exactly that: how to take data and get a graph from it. The chapter lays out six steps:
- Define the problem
- Assess Available Data (I’ll come back to this)
- Process Information
- Visual Transformation
- View Transformation
- Interpret and Decide
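The first four of these steps can be sketched in a few lines of Python. This is purely illustrative: the log format, field positions, and the question being asked are all invented for the example, not drawn from the book.

```python
# A toy walk-through of the data-to-graph steps on a hypothetical
# firewall log. Real logs and real questions will differ.
from collections import Counter

# Step 1: Define the problem -- which source IPs are blocked most often?
# Step 2: Assess available data -- here, lines of "action src_ip dst_port".
log_lines = [
    "BLOCK 10.0.0.5 22",
    "ALLOW 10.0.0.9 80",
    "BLOCK 10.0.0.5 23",
    "BLOCK 10.0.0.7 445",
]

# Step 3: Process information -- parse the lines and filter to blocked events.
blocked = [line.split()[1] for line in log_lines if line.startswith("BLOCK")]

# Step 4: Visual transformation -- aggregate into values ready to plot.
counts = Counter(blocked)

for ip, n in counts.most_common():
    print(ip, n)
```

From here, steps five and six (view transformation, then interpret and decide) would typically happen in a charting tool, for example by feeding `counts` into a bar chart and deciding whether the top talkers warrant investigation.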
There’s also a list of tools for processing data, and some comparisons.

Chapter 5, Visual Security Analysis, covers reporting, historical analysis, and real-time analysis. He explains the differences among them, when to use each, and which tools to use for each.

Chapter 6, Perimeter Threat, covers visualization of traffic flows, firewalls, intrusion detection signature tuning, wireless, email, and vulnerability data.

Chapter 7, Compliance, covers auditing, business process management, and risk management. Marty assumes that you have a mature risk management process which produces numbers he can graph. I don’t suppose this book should go into a long digression on risk management, but I question the somewhat breezy assumption that you’ll have numbers for risks.
I had two major problems with chapter 8, Insider Threat. The first is claims like “fewer than half of insider attacks (according to various studies) involve sophisticated technical means” (pg 387) and “Studies have found that a majority of subjects who stole information…” (pg 390). None of these studies are referenced or footnoted, and this in a book that footnotes a URL for sendmail. I believe those claims are wrong. Similarly, there’s a bizarre assertion that insider threats are new (pg 373); I’ve been able to trace claims that 70% of security incidents come from insiders back to the early 1970s. My second problem is that, having mischaracterized the problem, Marty presents a set of approaches that will send IT security scurrying around chasing chimeras such as “printing files with resume in the name.” (This because a study claims that many insiders who commit information theft are looking for a new job. At least that study is cited.) I think the book would have been much stronger without this chapter, and suggest that you skip it or read it with a strongly questioning bias.
Chapter 9, Data Visualization Tools, is a guided tour of file formats, free tools, open source libraries, and online and commercial tools. It’s a great overview of the strengths and weaknesses of the tools out there, and will save anyone a lot of time in finding a tool to meet various needs. The Live CD, Data Analysis and Visualization Linux, can be booted on almost any computer and used to experiment with the tools described in chapter 9. I haven’t played with it yet, so I can’t review it.
I would have liked at least a nod to the value of comparative and baseline data from other organizations. I can see that that’s a little philosophical for this book, but the reality is that security won’t become a mature discipline until we share data. Some of the compliance and risk visualizations could be made much stronger by drawing on data from organizations like the Open Security Foundation’s Data Loss DB or the Verizon Breaches Report.
Even in light of the criticism I’ve laid out, I learned a lot reading this book. I only wish that Marty had taken the time to look at non-operational concerns, like software development. I can see myself pulling this off the shelf again and again for chapters 3 and 4. This is a worthwhile book for anyone involved in applied security visualization, and perhaps even anyone involved in other forms of technical visualization.
Devan Desai has a really interesting post, Baffled By Community Organizing:
First, it appears that hardcore left-wing and hardcore right-wing folks don’t process new data. An fMRI study found that confirmation bias — “whereby we seek and find confirmatory evidence in support of already existing beliefs and ignore or reinterpret disconfirmatory evidence” — is real. The study explicitly looked at politics…
What can I say? Following up on my post, “Things Only An Astrologist Could Believe,” I’m inclined to believe this research.
Bletchley Park, the site in the UK where WWII code-breaking was done, has a computing museum. The showpiece of that museum is Colossus, one of the world’s first computers. (If you pick the right set of adjectives, you can say “first.” Those adjectives are apparently “electronic” and “programmable.”) It has been rebuilt over the last fourteen years by a dedicated team, who managed to figure out how it was constructed despite the plans having been destroyed and the actual machines dismantled.
Of course, keeping such things running requires cash, and Bletchley Park has been scrambling for it for years now. The BBC reports that IBM and PGP have started a consortium of high-tech companies to help fund the museum, starting with £57,000 (roughly $100,000 at current exchange rates). PGP has also set up a web page for contributions through PayPal at http://www.pgp.com/stationx, and if you contribute at least £25 (these days actually less than $50), you get a limited-edition t-shirt complete with a cryptographic message on it.
An interesting facet of the news is that Bletchley Park is a British site, while the companies starting this funding initiative are both American. Additionally, while PGP is an encryption company and thus has a direct connection to Bletchley Park as a codebreaking organization, one of the major points that PGP and IBM are making is that Bletchley Park is indeed a birthplace (if not the birthplace) of computing in general.
This is an interesting viewpoint, particularly when you consider Alan Turing’s own connection to the place. Turing’s impact on computing in general goes beyond his specific contributions to computers: he was a mathematician far more than an engineer. He was involved in designing Colossus, but the real credit goes to Tommy Flowers, who actually built the thing.
If we look at the history of computing, an interesting thing seems to have happened. The Allies built Colossus during the war, and then when the war ended agreed to forget about it. The Colossi were all smashed, but many people involved went elsewhere and took what they learned from Colossus to make all the early computers that seemed to have names that end in “-IAC.”
(A major exception is the work of Konrad Zuse, who not only built mechanical programmable computers before these electronic ones, but some early electronic ones, as well.)
This outgrowth from Colossus also seems to include the work that turned IBM from a company that primarily made punched cards and typewriters into one that made computers. It is thus nice to see IBM, the computing giant, joining the cryptographers at PGP in pointing to Colossus and Bletchley as a piece of history worth saving. It is their history, too.
I think this dual parentage makes Bletchley Park doubly worth saving. The information economy has computers and information security at its core, and Colossus sits at the origins of both. Please join us in helping save the history of the information society.
Dear Mr Harper,
In general people do not care for the government to be tracking their religious affiliation. In particular however, there are few groups who care less for this sort of tracking than Jews. Seriously, you’re not going to get votes by sending Rosh Hashanah cards to your Jewish constituents. It freaks us out, really.
“I was a little alarmed at the idea that the government might have some list of Canadian Jews, whether or not they’re using that for benevolent or malevolent or cynical reasons,” Mr. Terkel said. “It doesn’t seem my religion should be the business of any federal government.”
Or is that vice-versa? A few weeks ago, Security Retentive posted about an article in the Economist, “Confessions of a Risk Manager.” Both his analysis and the original story are quite interesting, and I encourage you to read them, along with a letter to the editor published in last week’s print edition of the Economist. In “Risky Business,” David Howat, a self-described former risk manager, shares his thoughts on the role of risk managers:
Risk managers can’t do a proper job if they aren’t part of the team that develops the proposal. They are enablers, not gatekeepers: their job is to ensure that each new transaction, product and service is developed with safety as well as profitability in mind. Weaknesses need to be identified early so that, if they can’t be corrected, the proposal can be dropped before anyone gets too attached to it.
Sounds familiar, doesn’t it? I can’t count the number of times I’ve made a similar argument for security being involved from the beginning. It’s heartbreaking to hear that an industry that’s been around much longer than ours is still fighting the same battles. On the plus side, though, it’s yet another group we can learn from to improve our own stance, and hopefully avoid making some of the same mistakes. Time to go re-read the original article.
Over at the Burton Identity and Privacy Strategies blog, there’s a post from Ian Glazer, “Trip report from the Privacy Symposium,” in which he repeats claims from Jeff Rosen:
I got to hear Jeffery Rosen share his thoughts on potential privacy “Chernobyls,” events and trends that will fundamentally alter our privacy in the next 3 to 10 years.
I don’t believe it, and haven’t believed it in a long time. As I said in 2006 in “There Will Be No Privacy Chernobyl,” there’s too much habituation, too much disempowerment, and too diffuse an impact from any given issue.
I’d love to have to eat those words. Rosen suggests several issues:
- Targeted ads
- Search term links
- The Star Wars kid
- Ubiquitous surveillance
Do you see any of these rising to the level of Chernobyl? Where you could stop the average person on the street in most of the developed world, ask a simple question, and not get a blank stare?
There’s a really funny post on a blog titled “Affordable Indian Astrology & Vedic Horoscope Provider:”
Such a choice of excellent Muhurta with Chrome release time may be coincidental, but it makes us strongly believe that Google may not have hesitated to utilize the valuable knowledge available in Vedic Astrology in decision making.
This is a beautiful example of confirmation bias at work. Confirmation bias is when you believe something (say, Vedic astrology) and go looking for confirmation. That doesn’t advance your knowledge in any way; you need to look for contradictory evidence. For example, if you think Google is timing its launches by Vedic astrology, they have a decade of product launches, with some obvious successes and some failures. Test the idea against all of them. I strongly suspect that no one has.