Towards an Economic Analysis of Disclosure

In comments on my post yesterday, “I Am So A Dinosaur”, Ian asks “Has anyone modelled in economics terms why disclosure is better than the alternate(s)?” I believe that the answer is no, and so will give it a whack. The costs I see associated with a vulnerability discovery and disclosure, in chronological order, are:

  1. The cost borne by a researcher who finds a vulnerability. This may be the time of a student, or it could be a fiscal cost borne by a company like NGS or eEye. Laws such as the DMCA drive these costs up greatly. There is a subset of this cost: good disclosure costs the reporter more. Good disclosure here includes testing on a variety of platforms, figuring out workarounds, and documenting the attack thoroughly.
  2. The set of costs incurred by the software maintainers. Every time a vulnerability is discovered, someone needs to evaluate it, and decide if it is worth the expense of writing, testing, and distributing a patch.
  3. Costs distributed amongst a great many users include learning about the vulnerability, and perhaps the associated patch; deciding if it matters to them; testing it; and rating the urgency of the patch versus the business risks associated with a change to an operational system. [If the users don’t make that investment, or make poor decisions, there’s a cost of being broken into, and of recovering from the problem. (Thanks, Eric!)] Highly skilled end users may want to test a vulnerability in their environment. Full disclosure helps this testing. Good disclosure from the researcher also helps hold down costs here. Since there are lots of users of most such software, savings multiply greatly.
  4. Costs distributed amongst a smaller group of security software authors include understanding the vulnerability, building or getting exploit code, and adding functionality to their products to “handle” the vulnerability, either scanning for it, or detecting the attack signature. Where these vendors have to write their own exploit code, they will be slower to get their tool to customers. These costs are sometimes lower for vendors of closed source toolsets who can encode the information in a binary, and thus get it under NDA.
  5. Costs to one or more attackers to learn about the vulnerability; decide if they want to use it in an attack; code or improve the code for the attack; deploy the attack.
  6. Costs to academic researchers are separated here because academics are less time-sensitive than security vendors. I can invent and test a tool to block buffer overflows with a 30-day-old exploit as well as with completely fresh exploits. Academic researchers need high-quality exploit code, but they don’t need it quickly.
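The balance among these six cost centers can be sketched numerically. The toy model below uses entirely invented figures (the actor names follow the numbered list above) to show why the per-user term tends to dominate: it multiplies across the installed base.

```python
# Toy model of vulnerability-disclosure costs. All figures are invented
# for illustration; the actor classes follow the numbered list above.

def total_cost(costs, users=10_000):
    """Sum per-vulnerability costs, multiplying the per-user cost
    across the whole user base."""
    return (costs["researcher"] + costs["maintainer"]
            + costs["per_user"] * users
            + costs["security_vendor"] + costs["academic"])

# Hypothetical cost profiles (arbitrary units):
full_disclosure = {"researcher": 40, "maintainer": 100,
                   "per_user": 2,    # users learn, test, and patch
                   "security_vendor": 10, "academic": 5}
no_disclosure   = {"researcher": 10, "maintainer": 0,
                   "per_user": 8,    # users pay via undetected break-ins
                   "security_vendor": 50, "academic": 50}

print(total_cost(full_disclosure))
print(total_cost(no_disclosure))
```

Whatever numbers you plug in, the structure makes the point: any policy that shaves even a little off the per-user cost swamps changes to the researcher or vendor terms.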

I think that all responsible disclosure policies attempt to balance these costs. Some attackers don’t disclose at all; they invest in finding and using exploits, and hope that they have a long shelf life. (I started to say malicious attackers, but both government researchers and criminals fail to disclose.)

Ideally, we’d drive up attacker costs while holding down all of end-user, security vendor, and academic costs. (One of my issues with the OIS guidelines is that they give too little to the academic world. They could easily have said ‘responsible disclosure ends 90 days after a patch release with the release of exploit code and test cases.’)

So, Ian, I hope you’re happy–you’ve distracted me from the stock market question.

[Update: Reader Chris Walsh points to a paper, “Economic Analysis of the Market for Software Vulnerability Disclosure,” which takes these ideas and does the next step of economic analysis, as well as a presentation that some of the authors gave at Econ & Infosec.]

I Am So A Dinosaur…

…and I was one before it was cool. Crit Jarvis responds to my comment that my views on disclosure have ossified by claiming that I’m evolving. The trouble is, I have documented proof it’s not true. From my homepage:

Apparent Weaknesses in the Security Dynamics Client Server Protocol. This paper was presented at the DIMACS workshop on Network Threats, and describes a substantial weakness in the Security Dynamics client-server model, which was apparently fixed in versions of the software later than the ones I was working with. Security Dynamics responded to my work before publication. I’m very pleased that they will be publishing their protocols in the future. The postscript file submitted to DIMACS is available, as is an HTML version, but the HTML version is missing two diagrams.

The DIMACS workshop was Dec 4-6, 1996. I spoke to some folks about the flaw at Crypto, in Santa Barbara, that summer, and they encouraged me to publish. I spent a while talking to a lawyer about the issues, concerned that I might be sued, and pulled source code for the F2 hash from the paper. I contacted Security Dynamics only after the paper had gone to press, to make it harder for them to pressure me to pull it. It turned out that John Brainard and Vin McLellan were utter gentlemen in dealing with me, and SDI never brandished a threatening word. But in the world of vulnerability disclosure back then, I didn’t think I was being unreasonably fearful.

The landscape is somewhat different today (although Guillermito* would doubtless beg to differ, as would Niels Ferguson). Companies, by and large, seem to be responding better to security reports. (I know someone whose bug report sat at Sun and CERT for a full two years in the early 90s, despite rampant evidence of attacker use.) But my position is about the same: over time, disclosure is better than trying to sweep vulnerabilities under the rug. We should tweak to minimize the current pain that disclosure entails.

*(via Freedom To Tinker Clips)

Patterns of Conflict, Easier on the Eyes

I’ve been posting a fair bit about Boyd. Boyd wrote very little; most of his communication was in the form of briefs. At least two of you have publicly admitted to getting the slides, and, if you’re like me, struggled with the form of the presentation: a scan of a typed, hand-annotated presentation book. There’s a new PowerPoint version available, edited by Chet Richards and Chuck Spinney, and produced and designed by Ginger Richards. It’s far easier on the eyes. There are a few places where the presentation is unfortunately dense 8-point type, but that reflects what Boyd wrote.

More on Do Security Breaches Matter?

In responding to a question I asked yesterday, Ian Grigg writes:

In this case, I think the market is responding to the unknown. In other words, fear. It has long been observed that once a cost is understood, it becomes factored in, and I guess that’s what is happening with DDOS and defacements/viruses/worms. But large scale breaches of confidentiality are a new thing. Previously buried, they are now surfaced, and are new and scary to the market.

I like the idea that these are new and scary. Unfortunately, we can’t tell if this matches the data. In a 2004 paper, “Effect of Internet Security Breach Announcements on Market Value of Breached Firms and Internet Security Developers”, Cavusoglu argues that the market cap drop is 2.1% within 2 days. (Unfortunately no longer online, but mentioned in his paper in the Camp-Lewis book.) So if Campbell et al found a 5% drop, is the market punishing companies more? Who did their research first? What was the time period studied? We can’t tell without both papers being available.
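For readers unfamiliar with where percentages like “2.1% within 2 days” come from: event studies compute a cumulative abnormal return (CAR) over a short window around the announcement, subtracting what a market model predicted the stock would have done anyway. A minimal sketch, with made-up returns rather than figures from either paper:

```python
# Sketch of event-study arithmetic. The daily returns below are invented
# for illustration; real studies estimate the expected return from a
# market model fit over a pre-event window.

def cumulative_abnormal_return(actual, expected):
    """Sum of (actual - expected) daily returns over the event window."""
    return sum(a - e for a, e in zip(actual, expected))

# Hypothetical two-day window around a breach announcement:
actual_returns   = [-0.015, -0.010]   # firm's observed daily returns
expected_returns = [0.003, 0.003]     # market-model prediction

car = cumulative_abnormal_return(actual_returns, expected_returns)
print(f"{car:.1%}")
```

This is why the window length and time period studied matter so much when comparing a 2.1% figure to a 5% one: a longer window, or a different baseline model, changes the number without any change in the underlying events.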

Otherwise I have a problem with a 5% drop in value. How is it that confidentiality is worth 5% of a company? If that were the case, companies like DigiCash and Freedom [Zero-Knowledge?] would have scored big time, but we know they didn’t. Confidentiality just isn’t worth that much, ITMO (in the market’s opinion).

I don’t agree with this analysis. I’ve argued elsewhere (Will People Ever Pay For Privacy?) that privacy is a hard product to sell. Confidentiality could be worth 5% of a company in a lawsuit, especially if the breach causes clear harm (as in the Amy Boyer case). I’m hard pressed to argue that the market’s response is accurate and generalizable, but I expect tort law will evolve rapidly here, and in the absence of certainty, the market will extract a risk premium.

Small Bits of Chaos: Blind overflows, National ID, and Looney Tunes

SecurityFocus has a new article on blind buffer overflows. I’m glad these techniques are being discussed in the open, rather than in secret.

Julian Sanchez has the perfect comment on Congressman Dreier’s new national ID plan, at Hit & Run.

And finally, don’t visit this Looney Tunes site if you’re busy. (Via Steven Horowitz at Liberty and Power).

Do Security Breaches Matter?

Nick Owen posts about the stock valuation impact of security breaches.

This UMD study found that a firm suffering a breach of ‘confidential information’ saw a 5% drop in stock price while firms suffering a non-confidential breach saw no impact.

I read it as the market over time learning the difference between a DOS attack and the posting of customers’ credit cards online. Which is interesting, because the market will be most forgiving of the attacks that are the most basic to prevent (web defacement, viruses & worms) or which are ‘unpreventable’ (DOS attacks – unpreventable isn’t the 100% correct word, but you know what I mean), and it will punish you severely (a 5% market cap drop according to the UMD study) for succumbing to a more vicious, targeted attack that results in exposure of confidential information such as customer credit cards. So are you putting your money in the right places?

I read this slightly differently. I think the market doesn’t care about attacks that don’t cost money. I think the market doesn’t really care about breaches of confidentiality, except when there’s a risk of lawsuit or customers leaving. And that means that when the market gets a whiff of the new attackers, these market impacts are going to go up.

Catastrophe and Continuation

Dr. David Ozonoff, a professor of environmental health at the Boston University School of Public Health who originally supported the new laboratory but now opposes it, argues that biodefense spending has shifted money away from “bread-and-butter public health concerns.” Given the diversion of resources and the potential for germs to leak or be diverted, he said, “I believe the lab will make us less safe.”

So says this article in the New York Times. It’s worth reading as a discussion of bioterrorism and the funding around it. But perhaps more importantly, it’s worth reading as an analysis of the costs of a war on terror. It’s worth reading as we look at how the government is making tradeoffs: building new national labs versus dealing with ongoing problems. Are we making the right tradeoffs as we drive people away from aircraft? Fingerprint visitors from our allied countries (on the way in, but not out)? Accept a little bit of torture to try to avert an attack, while losing sight of the moral dimension of conflict?

I believe that we need to align our government to defeat the radical Islamic terror threat, but the way to do that isn’t new labs, it’s the everyday things. For example, consider a fire, apparently set by a homeless fellow in a subway switching station. It may take the New York Subway system 3 to 5 years to recover.

California Privacy Law

CIO Magazine has an article, “Riding The California Privacy Wave,” reviewing California’s new and pending privacy laws. There are bits I wasn’t aware of, such as SB 168, preventing “businesses from using California residents’ Social Security numbers as unique identifiers.” There’s a slew of new laws in California, a great many of which affect IT operations and choices if you have California customers, even without a ‘business presence’ there.

Update: Fixed bill number. Thanks (again) Mort.

Economics of Taxonomies

In his latest post on folksonomies, Clay argues that we have no choice about moving to folksonomies, because of the economics. I’d like to tackle those economics a bit.

(Some background: there was recently a fascinating exchange between Clay Shirky and Louis Rosenfeld on the subject of taxonomies versus “folksonomies,” lightweight, uncontrolled terms that users attach to things as classification. Now, as the name of my blog implies, I’m all in favor of such emergent and chaotic phenomena as folksonomies. At the same time, some of the work I’m doing may involve the creation of a taxonomy. Worse, it’s a taxonomy where the items being classified are subject to a great many potential classifications, and really, a folksonomy may well be a better choice. So how to decide where to go?)

I don’t think that there is a single economics of taxonomies. We could compare effort of creation to effort of use. Flickr users create a folksonomy because it’s trivial to create, and the work needed to use it for tagging is also low. In contrast, the Linnaean taxonomy of life is the subject of a huge amount of work. Once you’ve learned to use both Flickr and the plethora of modern library systems to search, the effort to search the Flickr site is higher than the effort to search in a library. So Flickr (and perhaps all folksonomies) offloads costs from classifiers to searchers.
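That offloading claim can be made concrete with a toy cost comparison (all numbers invented): a folksonomy minimizes up-front and tagging effort, while a formal taxonomy pays those costs to make each search cheap, so which system costs less overall depends on the ratio of searches to items.

```python
# Toy comparison of where classification systems put their costs.
# All numbers are invented; the point is the shape of the tradeoff,
# not the specific values.

def system_cost(items, searches, tag_cost, search_cost, upfront=0):
    """Total effort: up-front schema design + tagging + searching."""
    return upfront + items * tag_cost + searches * search_cost

items, searches = 1_000, 10_000

# Folksonomy: no schema, trivial tagging, expensive per-search effort.
folksonomy = system_cost(items, searches, tag_cost=1, search_cost=5)

# Formal taxonomy: costly schema and classification, cheap searches.
taxonomy = system_cost(items, searches, tag_cost=10, search_cost=1,
                       upfront=20_000)

print(folksonomy, taxonomy)
```

With these made-up numbers the taxonomy wins once searches outnumber items by enough; drop the search volume and the folksonomy wins, which is exactly the classifier-versus-searcher offloading described above.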

There’s also an economic question of the cost of failure. Flickr is not there to help you find precisely the photo you’re looking for, nor the paper or book you mean to find. It’s there to make surfing easier. If you want to see specific people’s photos, you can subscribe to their site. So the folksonomy works where there’s a very low cost of not seeing a result. Does it work as well where the costs are higher? If you’re searching for a specific book in a library, and can’t guess the tags attached to it, you can fall back to other, organized search criteria. I’m finding it hard to quantify the search failure costs here, because when you move from photos to, say, reference specimens of butterflies, the specimen and its name act as an index into all sorts of scientific work.

Another tension is speed of change. Fast changing taxa are hard to search, but easy to create. Is it worthwhile to spend the effort to enable effective searching? To whom is it worthwhile?

To relate this back to the work I’m doing, I think that the cost of failed searches may be very high. High enough to dominate? Unclear.

Application Layer Vulnerability, an Orientation Issue

Richard Bejtlich comments on a new “@RISK: The Consensus Security Alert”, which starts: “Prediction: This is the year you will see application level attacks mature and proliferate.” He says:

You might say that my separation of OS kernel and OS applications doesn’t capture the spirit of SANS’ “prediction.” You might think that their new warning means we should focus on applications that don’t ship with the “OS.” In other words, look at widely deployed applications that aren’t bundled with an OS installation CD. Using that criteria, “application attacks” are still old news.

I think that Richard is both right, in that there’s no big technical shift, and wrong, in that the attacks will mature. As I said a few days ago, the attackers will become more clever in using the attacks to make money. There’s also a perception issue, a blowback, if you will, of the success of database-driven vulnerability scanners like ISS and Nessus. These scanners are very effective at finding instances of the sorts of vulnerabilities that get CVE entries. They are less effective, if they even try, at finding vulnerabilities in your locally developed application. Here tools like those from Kavado and SPI Dynamics do much better: rather than working from a database of flaws, they inspect a web application for classes of flaw, by running attacks against the site in a controlled way. One effect of the success of the database-driven scanners is that people think they can run those scanners and learn how an attacker can get in. And that’s correct, as far as it goes; but no tool will give you a complete list. And so I expect that what the SANS folks are talking about is a rise in attacks against the business infrastructure, rather than the technical infrastructure. If they’re not, they should be.
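The contrast between the two scanner styles can be sketched as follows. This is a hypothetical illustration, not the behavior of any named product: the database-driven scanner only matches what it already knows, while the class-of-flaw scanner probes for a whole category of error and so can catch a homegrown application with no CVE entry.

```python
# Hypothetical contrast between the two scanner styles described above.
# The flaw database, banner strings, and handler are all made up.

KNOWN_FLAWS = {"ExampleFTPd/1.0": "CVE-XXXX-0001"}   # fictitious entry

def database_scan(banner):
    """Database-driven: finds only vulnerabilities already catalogued."""
    return KNOWN_FLAWS.get(banner)

def class_scan(handler):
    """Class-of-flaw: sends an attack input (here, a SQL injection
    probe) and watches the response for signs of the flaw class."""
    probe = "' OR '1'='1"
    return "sql error" in handler(probe).lower()

# A locally developed app with no CVE entry, but an injectable parameter:
def homegrown_handler(user_input):
    if "'" in user_input:
        return "SQL error near OR"   # simulated error-message leak
    return "ok"

print(database_scan("HomegrownApp/0.1"))   # None: not in any database
print(class_scan(homegrown_handler))       # True: class-based test finds it
```

The sketch also shows the limit of both styles: the class-of-flaw scanner only finds the categories it was taught to probe for, which is why no tool yields a complete list.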

All Good Things Must End

Phrackstaff is pleased to bring you _our_ LAST EVER CALL FOR PAPERS for

Since 1985, PHRACK MAGAZINE has been providing the hacker community with
information on operating systems, network technologies and telephony, as
well as relaying features of interest for the international computer
underground. PHRACK MAGAZINE is made available to the public, as often as
possible, free of charge.

The final call for papers. Phrack mixed the technical, social and political with incredible disdain for their readers and wannabes. Phrack published Aleph One’s Smashing The Stack For Fun and Profit, which may be one of the most influential security papers ever written. Aleph didn’t invent stack smashing attacks, but he made them understandable. Phrack published Dorothy Denning’s Concerning Hackers Who Break into Computer Systems. Phrack taught you to hack phones (wired, cellular, or voice mail), networks (AppleShare, Novell, MilNet, TCP/IP), operating systems (VMS or NT or IOS), radios, casinos, and your local McDonald’s.

Phrack got a lot of the very best writing that hackers produced. It was an important carrier and arbiter for the hacker orientation. But publication slowed from around once a month at one point, to maybe every six months as the world changed. I look forward to being slammed in l00pback for being silly and sentimental.

CCS Industry Track

I’m excited to be a part of the ACM’s 2005 Computer and Communications Security Conference, which has an Industry Track this year. We’re working to foster more interplay and collaboration between industry, the public sector, and academia:

The track aims to foster tighter interplay between the demands of real-world security systems and the efforts of the research community. Audience members would like to learn about pressing security vulnerabilities and deficiencies in existing products and Internet-facing systems, and how these should motivate and shape research programs. Presentation of crisply framed, open technical problems and discussion of innovative solutions to real-world problems will be especially valuable. Also of interest are: Practical and broadly informative experience with the security aspects of large-scale systems, reports on the scope and content of sponsored research programs in information security, and government or commercial requirements for future systems. Technical characteristics of novel products may be of interest, but marketing pitches are verboten.