Submitted for your consideration

I added Bank Lawyer’s Blog to my set of RSS feeds some time ago, after I came across a decent post about ID theft there.
I provide — without comment — the following quotation from a banking industry lawyer, as posted yesterday:

Near the end of the Oscar-winning movie “Unforgiven,” the young assassin who calls himself “The Schofield Kid” is shaken by his first murder, and lamely tries to justify it with the assertion, “Well, I guess he had it coming.” Clint Eastwood’s character, William Munny, a man whose life of darkness is finally coming full circle, utters the benediction, “We’ve all got it coming, kid.”
May all of us get everything we have coming to us as a result of this latest crisis, bailout or no bailout.

Regulations, Risk and the Meltdown

There are obviously a large set of political questions around the 700+ billion dollars of distressed assets Uncle Sam plans to hold. If you care about the politics, you’re already following in more detail than I’m going to bother providing. I do think that we need to act to stem the crisis, and that we should bang out the best deal we can before the rest of the banks in the US come falling like dominoes. As Bagehot said, no bank can withstand a crisis of confidence in its ability to settle. I say that knowing how distasteful and expensive it is, and knowing I have far better things to do with the $5,000 or so it will personally cost me as a taxpayer. (The widely quoted $2,300 figure is per person, not per taxpayer.) I say it also knowing how poorly this administration has handled crises from 9/11 to Katrina, and how poorly it performs when forced to act in a moment of crisis. (Sandy Levinson has some interesting comments at “A further Schmittian (and constitutional?) moment.”) Finally, we are not bailing out the banks at the cost of a free market in banking. We gave up on a free market in banking in 1913 or so, after J.P. Morgan (the man, not his eponymous bank) intervened to fix the crises of 1895 and 1907.

What I did want to look at was the phrase “more regulation,” and relate it a little to information security and risk management.

US banks are already intensely regulated under an alphabet soup of laws like SOX, GLB, USA PATRIOT and BSA. They’re subject to a slew of additional contractual obligations under things like PCI-DSS and the Basel rules on capital. And that’s leaving out the operational sand which goes by the name AML.

In fact, the alphabet soup has gotten so thick that there’s an acronym for the acronyms: GRC, or Governance, Risk and Compliance. Note that two of those three aren’t about security at all: they’re about process and laws. In the executive suite, it makes perfect sense to start security with those governance and compliance risks which put the firm or its leaders at risk.

There’s only so much budget for such things. After all, every dollar you spend on GRC and security is one that you don’t return to your shareholders or take home as a bonus. And measuring the value of that spending is notoriously hard, because we don’t share data about what happens.

Just saying that measurement is hard is easy. It’s a cop-out. I have (macro-scale) evidence as to how well it all works:

  • Bear Stearns
  • Fannie Mae
  • Freddie Mac
  • Lehman Brothers
  • AIG
  • Washington Mutual
  • Wachovia
  • (Reserved)

I have a theory: in competition for budget within GRC, Governance and Compliance won. They had better micro-scale evidence as to their value, and that budget was funded before Risk was allowed to think deeply about risks.

There’s obviously immediate staunching to be done, but as we come out of that phase and start thinking about what regulatory framework to build, we need to think about how to align the interests of bankers and society.

If you’d like more on these aspects, I enjoyed Bob Blakley’s “Wall Street’s Governance and Risk Management Crisis” and Nick Leeson’s “The Escape of the Bankrupt” (via Not Bad for a Cubicle; Thurston points out the irony of being lectured by Nick “Wanna buy Barings?” Leeson).

I’m not representing my co-author Andrew in any of this, but at least as I write this, his institution remains solvent.

Adam on CS TechCast

I did a podcast with Eric and Josh at CS Techcast. It was lots of fun, and is available now:
link to the show

Welcome to another podcast for IT professionals. This week we interview Adam Shostack, author of The New School of Information Security about the essentials IT organizations need to establish to really do security right.

The Podcast Awards nomination period closes soon, so get your votes in for CS Techcast. If you want to follow us on the social web, give us a ring, or type up some feedback, the links are all available on the show’s site.

And I thought I didn’t like Streisand

While Babs’ vocal stylings may be an “acquired taste”, today I have a new appreciation for the Streisand Effect.
Thanks to Slashdot, I learned that Thomson Reuters is suing the Commonwealth of Virginia alleging that Zotero, an open-source reference-management add-on for Firefox, contains features resulting from the reverse-engineering of Endnote, a competing commercial reference management product.
Turns out that while I am pretty happy with Bibdesk, it’s not the perfect solution for me. I had never heard of Zotero, so I downloaded it and played around. Color me impressed. If you are looking for a browser-integrated citation and reference management tool, I’d give Zotero a look.

Blaming the Victim, Yet Again

malware dialog box

John Timmer of Ars Technica writes about how we ignore dialog boxes in “Fake popup study sadly confirms most users are idiots.”

The article reports that researchers in the Psychology Department at North Carolina State University created a number of fake dialog boxes with varying levels of visual clues to help the user recognize that they were sham, not real. One of the fake dialogs is shown at the top of this post.

The conclusion of many people is summed up in the title of the Ars Technica article — that people are idiots.

My opinion is that this is blaming the victim. Users are presented with such a variety of elements that it’s hard to know what’s real and what’s not. Worse, there are so many worthless dialogs that pop up during normal operation that we’re all trained to play whack-a-mole with them.

I confess to being as bad as anyone. My company has SSL set up to the mail server, but it’s a locally-generated certificate. So every time I fire up the mail client, there’s a breathless dialog telling me that the certificate isn’t a real certificate. Do you know what this has taught me? To be able to whack the okay button before the dialog finishes painting.

The idiots are the developers who give people worthless dialog boxes, who make it next to impossible to import in local certificates, who train people to just make the damned dialog go away.
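For what it’s worth, the better answer on the client side is to trust the local certificate explicitly rather than to learn to whack the dialog. A minimal sketch in Python (the “local-ca.pem” file name is a placeholder for wherever your organization’s locally-generated certificate lives):

```python
# Sketch: instead of training yourself to dismiss the warning, add the
# locally generated CA certificate to the TLS context your client uses.
# "local-ca.pem" is a hypothetical path, not a real file on your system.
import os
import ssl

ctx = ssl.create_default_context()
if os.path.exists("local-ca.pem"):          # placeholder path
    ctx.load_verify_locations("local-ca.pem")

# Verification stays on: the local cert becomes trusted, everything
# else is still checked, and no dialog needs whacking.
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

The point of the sketch is that trusting one known certificate and keeping full verification for everything else is strictly safer than a habit of clicking OK before the dialog finishes painting.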

Computing isn’t safe primarily because the software expects the user to be a continuously alert expert. If the users are idiots, it is only because they stand for this.

2008 Breaches: More or More Reporting?

Dissent has some good coverage of an announcement from the ID Theft Resource Center, “ITRC: Breaches Blast ’07 Record:”

With slightly more than four months left to go for 2008, the Identity Theft Resource Center (ITRC) has sent out a press release saying that it has already compiled 449 breaches — more than its total for all of 2007.

As they note, the 449 is an underestimate of the actual number of reported breaches, due in part to ITRC’s system of reporting breaches that affect multiple businesses as one incident. This year we have seen a number of such incidents, including Administrative Systems, Inc., two BNY Mellon incidents, SunGard Higher Education, Colt Express Outsourcing, Willis, and the missing GE Money backup tape that reportedly affected 230 companies. Linda Foley, ITRC Founder, informs this site that contractor breaches represent 11% of the 449 breaches reported on their site this year.

I don’t have much to add, but I do have a question: are incidents up, or are more organizations deciding that a report is the right thing to do?

[Update: I wanted to point out interesting responses by Rich Mogull and Dissent.]

The Discipline of “think like an attacker”

John Kelsey had some great things to say in a comment on “Think Like An Attacker.” I’ve excerpted some key bits to respond to them here.

Perhaps the most important is to get the designer to stop looking for reasons attacks are impossible, and start looking for reasons they’re possible. That’s a pattern I’ve seen over and over again: smart people who really know their system also usually like their system, and want it to be secure. And so they spend a lot of time thinking about why their system is secure. “Nobody could steal our PIN because we encrypt it with triple-DES.”

So this is a great goal. I have two questions: first, is it reasonable? How many people can really step outside their design and regard it with a new perspective? How many people can then analyze the security of a system they’ve designed? (Is there a formal name for this problem? I call it ‘creator-blindness.’) I’m not sure exhorting people to think like an attacker helps. This problem isn’t unique to security, which brings me to my second question: is it effective? I was once taught to read my writing aloud as a way of finding mistakes. I teach people to diagram their system and then use a system we call “STRIDE per element” to help people look at it. By giving people a structure for analysis, we help them step outside of that creator frame.
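To make “STRIDE per element” concrete, here is a minimal sketch of the idea: walk a data-flow diagram and, for each element, consider only the threat categories conventionally applicable to that element type. The category-to-element mapping below follows the commonly published SDL table; the data structures and the toy diagram are my own illustration, not the actual tool.

```python
# Sketch of "STRIDE per element": given a data-flow diagram, emit the
# threat categories conventionally considered for each element type,
# giving the designer a structured checklist instead of an exhortation.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which STRIDE categories apply to which DFD element type
# (per the commonly published mapping).
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TRID",
    "data_flow": "TID",
}

def threats_for(diagram):
    """Yield (element, threat) pairs to walk through one by one."""
    for name, kind in diagram:
        for letter in APPLICABLE[kind]:
            yield name, STRIDE[letter]

# A toy diagram, invented for illustration.
diagram = [
    ("browser", "external_entity"),
    ("web app", "process"),
    ("orders DB", "data_store"),
    ("browser -> web app", "data_flow"),
]

for element, threat in threats_for(diagram):
    print(f"{element}: consider {threat}")
```

The value of the structure is exactly what the paragraph above argues: the designer doesn’t need to invent an attacker’s mindset, only to answer concrete questions about a diagram they already understand.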

A second goal of that “think like an attacker” exhortation is to get people to realize that, in order to know whether their system is secure, they need to learn something about what tools and resources an attacker is likely to have.

So, for a moment, let’s assume that this is a reasonable goal, and one we can expect every developer who hears the phrase to go pursue. Where do they go? How much time should they devote to it? Again, I’m not talking about the use of the phrase within the security engineering community, but in software engineering more generally. Secondly (again), there’s the question of “is this the most effective way to push people?”

Third, there’s a mindset of being an attacker. I don’t know how to teach that. It’s not just about intelligence: I’ve worked with stunningly brilliant people who don’t seem to have that mindset, and with people who are much less brilliant in that brute-force impressive brain sense, but who just seem to have the right kind of mind to break stuff.

Well, that I can’t argue with. All I’ll say is that we’ve been exhorting people to think like attackers for years, and it hasn’t helped.

I believe that security analysis is a skill which can be taught. The best have both talent and have worked to develop that talent. I hope and expect that we can figure out how to do so. Figuring that out will involve figuring out what pedagogic approaches have failed, so we can set them aside, and make room for experimentation, chaos, and — we hope — actual improvements. I believe that, when asked of non-security experts, ‘think like an attacker’ is on that list of things we should set aside.

Finally, a side note on the title. If you’re undisciplined, feel free to skip to about 3:10.

TSA Badges

9Wants to Know has uncovered a new policy that allows airport screeners at Denver International Airport to bypass the same security screening checkpoints that passengers have to go through.

The new policy says screeners can arrive for work and walk behind security lines without any of their belongings examined or X-rayed.

At DIA, 9NEWS videotaped a dozen TSA screeners walk through a side gate and enter the sterile area of the airport carrying backpacks, purses and lunch boxes. Nothing was screened.

Sources tell 9Wants to Know, the reason for the security change may be tied to the new uniforms and badges.

The old, white TSA uniforms had yellow cloth badges sewn on them. The new, blue uniforms have metal badges that set off alarms when screeners go through the checkpoints. Sources say the TSA is worried that the screeners will remove the badges while going through security and that they’ll get lost or stolen. (Colorado’s 9NEWS, “Airport Screeners bypassing security.”)

As Schneier points out, this isn’t a big deal:

Screeners have to go in and out of security all the time as they work. Yes, they can smuggle things in and out of the airport. But you have to remember that the airport screeners are trusted insiders for the system: there are a zillion ways they could break airport security.

But, as we pointed out when they moved to metal badges, TSA badges are a bad idea. There’s no reason to have metal badges at all, and they come at both a financial and operational cost. The operational cost is that there’s now a group of people walking through the metal detectors who are allowed to set them off.

Do they really need metal badges?

This is really about a failure of judgment: a failure to think through the effects of decisions, and how those decisions will be perceived.

This Week in Petard-Hoisting, the Palin Edition


If you are the sort of person who looks at odd legal rulings and opinions, you may remember that a few years ago the US DOJ issued an opinion that stored emails are not protected under the Stored Communications Act. The DOJ reasoning is that when you leave read email on your server, it’s not a temporary copy that is needed for the communications (like a mail spool), and not a backup.

This reasoning is bizarre to people who use protocols like IMAP precisely as a backup. It’s also bizarre to people who wonder why the DOJ would argue that stored communications are not Stored Communications. Those people tend to think that perhaps this would mean that if those stored emails are not Stored, then it wouldn’t be illegal for the DOJ to just kindly request that copies of them be pulled from an ISP’s storage (as opposed to their Storage) and be handed over, just in case you’ve been doing whatever.

The EFF has posted an interesting opinion, one that points out that if stored email is not Stored, then the people who reset Sarah Palin’s password and read her email probably did not commit a crime under the DOJ’s own interpretations of the law.

There doesn’t seem to be much wrong with this reasoning. In any event, it’s going to make it hard to prosecute the miscreants, because prosecutors will have to explain to a judge why they changed their minds, or why there is one law for veep candidates and one for everyone else. Way to go, guys.

Whatever one’s opinion of Ms Palin, it’s hard to defend violating her privacy. Let’s hope this leads the DOJ to conclude that when you take communications and store them, they are protected under the Stored Communications Act. As usual, the word is “oops.”

(Many people will note that there are undoubtedly plenty of other laws to charge them under, starting with the Computer Fraud and Abuse Act. But any good prosecutor can find something to charge someone with. The point is about upholding and enforcing existing laws.)

Photo “Hockey Mom Makeover” by julie.anna.

University of Lake Wobegon?

Spaf has an excellent post up about Purdue’s decision to no longer be an NSA Center of Academic Excellence. He makes a number of thought-provoking points, among them that “excellence” loses its meaning if the bar is set too low, and that being an academic center and having a training (as opposed to educating) curriculum is a bit awkward. (These are my summaries of his views, obviously).
Spaf’s been doing top-caliber infosec work since many of us were wearing short pants and riding tricycles. His thoughts on this topic are well worth considering.

Think Like An Attacker?

One of the problems with being quoted in the press is that even your mom writes to you with questions like “And what’s wrong with ‘think like an attacker’? I think it’s good advice!”

Thanks for the confidence, mom!

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear. Some smart folks like Yoshi Kohno are trying to teach it. (I haven’t seen a report on how it’s gone.)

Even if Yoshi is succeeding, it’s hard to teach a way of thinking. It takes a quarter or more at a university. I’m not claiming that ‘think like an attacker’ isn’t teachable, but I will claim that most people don’t know how. What’s worse, the way we say it, we sometimes imply that you should be embarrassed if you can’t think like an attacker.

Lately, I’ve been challenging people to think like a professional chef. Most people have no idea how a chef spends their days, or how they approach a problem. They have no idea how to plan a menu, or how to cook a hundred or more dinners in an hour.

We need to give advice that can be followed. We need to teach people how to think about security. Repeating the “think like an attacker” mantra may be useful to a small class of well-oriented experts. For everyone else, it’s like saying “just ride the bike!” rather than teaching them step-by-step. We can and should do better at understanding people’s capabilities, giving them advice to match, and training and education to improve.

Understanding people’s capabilities, giving them advice to match and helping them improve might not be a bad description of all the announcements we made yesterday.

In particular, the new threat modeling process is built on something we expect an engineer will know: their software design. It’s a better starting point than “think like a civil engineer.”

[Update: See also my follow-up post, “The Discipline of ‘think like an attacker’.”]

SDL Press Tour Announcements

Steve Lipner and I were on the road for a press tour last week. In our work blog, he writes:

Last week I participated in a “press tour” talking to press and analysts about the evolution of the SDL. Most of our past discussions with press and analysts have centered on folks who follow security, but this time we also spoke with publications and analysts who write for software development organizations. I was struck by the extent to which the folks who focus on development have been grappling with many of the issues about developing secure software that we’ve focused on here at Microsoft.

The announcements are here. I am particularly excited about the third announcement, the availability of the SDL Threat Modeling Tool v3.

Applied Security Visualization

Our publisher sent me a copy of Raffael Marty’s Applied Security Visualization. This book is absolutely worth getting if you’re designing information visualizations. The first and third chapters are a great short intro into how to construct information visualization, and by themselves are probably worth the price of the book. They’re useful far beyond security. The chapter I didn’t like was the one on insiders, which I’ll discuss in detail further in the review.

In the intro, the author accurately scopes the book to operational security visualization. The book is deeply applied: there’s a tremendous number of graphs and the data which underlies them. Marty also lays out the challenge that most people know about either visualization or security, and sets out to introduce each to the other. In the New School of Information Security, Andrew and I talk about these sorts of dichotomies and the need to overcome them, and so I really liked how Marty called it out explicitly. One of the challenges of the book is that the first few chapters flip between their audiences. As long as readers understand that they’re building foundations, it’s not bad. For example, security folks can skim chapter 2, visualization people chapter 3.

Chapter 1, Visualization, covers the whats and whys of visualization, and then delves into some of the theory underlying how to visualize. The only thing I’d change in chapter 1 is a more explicit mention of Tufte’s small multiples idea.

Chapter 2, Data Sources, lays out many of the types of data you might visualize. There’s quite a bit of “run this command” and “this is what the output looks like,” which will be more useful to visualization people than to security people.

Chapter 3, Visually Representing Data, covers the many types of graphs, their properties and when they’re appropriate. He goes from pie and bar charts to link graphs, maps and tree maps, and closes with a good section on choosing the right graph. I was a little surprised to see figure 3-12 be a little heavy on the data ink (a concept that Marty discusses in chapter 1), and I’m confused by the box for DNS traffic in figure 3-13: it seems that the median and average are both below the minimum size of the packets. These are really nits; it’s a very good chapter. I wish more of the people who designed the interfaces I use regularly had read it.

Chapter 4, From Data to Graphs, covers exactly that: how to take data and get a graph from it. The chapter lays out six steps:

  1. Define the problem
  2. Assess Available Data (I’ll come back to this)
  3. Process Information
  4. Visual Transformation
  5. View Transformation
  6. Interpret and Decide

There’s also a list of tools for processing data, and some comparisons.

Chapter 5, Visual Security Analysis, covers reporting, historical analysis and real-time analysis. He explains the difference, when you use each, and what tools to use for each.

Chapter 6, Perimeter Threat, covers visualization of traffic flows, firewalls, intrusion detection signature tuning, wireless, email and vulnerability data.

Chapter 7, Compliance, covers auditing, business process management, and risk management. Marty makes the assumption that you have a mature risk management process which produces numbers he can graph. I don’t suppose that this book should go into a long digression on risk management, but I question the somewhat breezy assumption that you’ll have numbers for risks.

I had two major problems with chapter 8, Insider Threat. The first is claims like “fewer than half (according to various studies) of insider attacks involve sophisticated technical means” (pg 387) and “Studies have found that a majority of subjects who stole information…” (pg 390). None of these studies are referenced or footnoted, and this in a book that footnotes a URL for sendmail. I believe those claims are wrong. Similarly, there’s a bizarre assertion that insider threats are new (pg 373); I’ve been able to track down references to claims that 70% of security incidents come from insiders back to the early 1970s. My second problem is that, having mis-characterized the problem, Marty presents a set of approaches which will send IT security scurrying around chasing chimeras such as “printing files with resume in the name.” (This is because a study claims that many insiders who commit information theft are looking for a new job. At least that study is cited.) I think the book would have been much stronger without this chapter, and suggest that you skip it or use it with a strongly questioning bias.

Chapter 9, Data Visualization Tools is a guided tour of file formats, free tools, open source libraries, and online and commercial tools. It’s a great overview of the strengths and weaknesses of tools out there, and will save anyone a lot of time in finding a tool to meet various needs. The Live CD, Data Analysis and Visualization Linux can be booted on most any computer, and used to experiment with the tools described in chapter 9. I haven’t played with it yet, and so can’t review it.

I would have liked at least a nod to the value of comparative and baseline data from other organizations. I can see that that’s a little philosophical for this book, but the reality is that security won’t become a mature discipline until we share data. Some of the compliance and risk visualizations could be made much stronger by drawing on data from organizations like the Open Security Foundation’s Data Loss DB or the Verizon Breaches Report.

Even in light of the criticism I’ve laid out, I learned a lot reading this book. I even wish that Marty had taken the time to look at non-operational concerns, like software development. I can see myself pulling this off the shelf again and again for chapters 3 and 4. This is a worthwhile book for anyone involved in Applied Security Visualization, and perhaps even anyone involved in other forms of technical visualization.

More on Confirmation Bias

Devan Desai has a really interesting post, Baffled By Community Organizing:

First, it appears that hardcore left-wing and hardcore right-wing folks don’t process new data. An fMRI study found that confirmation bias — “whereby we seek and find confirmatory evidence in support of already existing beliefs and ignore or reinterpret disconfirmatory evidence” — is real. The study explicitly looked at politics…

What can I say? Following up on my post, “Things Only An Astrologist Could Believe,” I’m inclined to believe this research.