Folksonomies, Tested

I’ve just stumbled across this abstract comparing full-text searching to controlled vocabulary searching. The relevance to Clay’s posts on controlled vocabularies is that our intuitive belief that controlled vocabulary helps searching may be wrong. Unfortunately, the full paper is $30; perhaps someone with an academic library can comment.

…In this paper, we focus on an experiment in which different component indexing and retrieval methods were tested. The results are surprising. Earlier work had often shown that controlled vocabulary indexing and retrieval performed better than full-text indexing and retrieval…, but the differences in performance were often so small that some questioned whether those differences were worth the much greater cost of controlled vocabulary indexing and retrieval … In our experiment, we found that full-text indexing and retrieval of software components provided comparable precision but much better recall than controlled vocabulary indexing and retrieval of components. There are a number of explanations for this somewhat counter-intuitive result, including the nature of software artifacts, and the notion of relevance that was used in our experiment. We bring to the fore some fundamental questions related to reuse repositories.
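For readers who haven’t worked with retrieval metrics: precision and recall, the two measures the abstract compares, are simple to compute. This sketch uses invented component lists and relevance judgments (not data from the paper) to show the shape of the result: comparable precision, better recall for full-text.

```python
# Illustrative only: precision and recall, the two measures the abstract
# compares. The component IDs and relevance judgments below are invented
# for this example; they are not data from the paper.

def precision_recall(retrieved, relevant):
    """Precision: what fraction of retrieved items are relevant.
    Recall: what fraction of relevant items were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(retrieved), len(hits) / len(relevant)

relevant = {"c1", "c2", "c3", "c4"}  # components the searcher actually needs

# Hypothetical result lists shaped like the paper's finding: both methods
# return mostly relevant items (comparable precision), but full-text
# retrieves more of the relevant set (better recall).
controlled_vocab = ["c1", "c2", "x1"]
full_text = ["c1", "c2", "c3", "c4", "x2"]

print(precision_recall(controlled_vocab, relevant))
print(precision_recall(full_text, relevant))
```

If the paper’s software components really do reward this kind of text match, it suggests the cost of controlled vocabulary indexing buys little.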

Small Bits of Chaos: Brazilian Democracy, Traffic Cameras, Locks, Hamas, and Curtains

Lessig discusses what democracy looks like in Brazil:

I remember reading about Jefferson’s complaints about the early White House. Ordinary people would knock on the door, and demand to see the President. Often they did. The presumption of that democracy lives in a sense here. And you never quite see how far from that presumption our democracy has become until you see it, live, here. “This is what democracy looks like.” Or at least, a democracy where the leaders can stand packed in the middle of a crowd, with protesters yelling angry criticism yet without “security” silencing the noise. No guns, no men in black uniform, no panic, and plenty of press. Just imagine.


Further analysis indicated that the cameras are contributing to a definite increase in rear-end crashes, a possible decrease in angle crashes, a net decrease in injury crashes attributable to red light running, and an increase in total injury crashes.

(From “Evaluation of Red Light Camera Enforcement Programs in Virginia” (PDF), The Virginia Transportation Research Council, 1/27/2005, via thenewspaper.com (a journal of the politics of driving, hmmm?), covered in techdirt, and linked to by Freedom to Clip. The cite is longer than the blurb!)


Wired on lockpicking contests.


Hamas wins in a landslide, promising to fight corruption, Jews. Less sarcastically, their strategy is the classic guerrilla/insurgent one: align with the people against a corrupt government. It’s not clear that Abbas can counter this. Slide 108 of Patterns of Conflict starts:

Undermine guerrilla cause and destroy their cohesion by demonstrating integrity and competence of government to represent and serve needs of people—rather than exploit and impoverish them for the benefit of a greedy elite.


Finally, in a subject near and dear to me (bullet 6), David Akin reports that you have privacy even without curtains.

“The Arthur Andersen Of Banking?”

Over at The CounterTerrorism Blog, Andrew Cochran accuses Riggs Bank of being “the Arthur Andersen of banking.” Riggs is apparently pleading guilty to violating the Bank Secrecy Act, by “failing to file reports to regulators on suspicious transfers and withdrawals by clients.”

I’d like to address the comparison to Arthur Andersen, and through that lens, look at the Orwellian nature of US bank secrecy laws, which actively require banks to spy on their customers. Arthur Andersen was an auditing firm, one of the “big five” accounting firms that audited most companies allowed to sell their stock to the public. Arthur Andersen was auditor to companies including Enron, Worldcom and Sunbeam, all of which had massive fraud scandals concerning their accounting. Now, auditors play a special role in public companies. They are (nominally) hired by the board of directors to audit the company’s books, and ensure that they are in compliance with generally accepted accounting principles. The board works for the shareholders of a company, and exists to protect the shareholders and ensure the company is well run.

The duties and responsibilities of auditors have a special legal name, fiduciary, because of the role that auditors play in our system of shareholder capitalism.

Arthur Andersen ignored that duty, and actively hid its history with Enron by shredding documents. That breach of trust is what destroyed the company, and for good reason. If you buy 100 shares of IBM, IBM isn’t going to let you come in and look at the books. You’re required to rely on the board to select auditors who will do that for you. And when the auditors fail, the consequences are severe. Companies like Enron, Worldcom, and Sunbeam can commit fraud because their auditors fail to do the job they’re hired to do.

Now let’s take a look at Riggs Bank. To the best of my knowledge, no one is accusing Riggs of violating fiduciary duties. In fact, I can’t recall a bank breaching its fiduciary duties lately. What Riggs is accused of doing is failing to file forms under the BSA. Even if the BSA were good law, this would not be in the class of Andersen’s failings. The BSA isn’t even good law.

I say that not (even) from a privacy perspective, but from the perspective of someone who tried to help customers implement it. When I was a consultant, I worked with a number of banks that were concerned about compliance. We sweated over what words in the law meant. There were some obvious cases: if someone was on the OFAC list of bad people, they shouldn’t be allowed to do things. But what was ‘suspicious’ behavior over the internet? What set of behaviors should cause us to file reports? There were no clear answers. The answer that we, like most banks, came to was to toss customer privacy to the wind and file forms often. And now, banks are concerned about compliance costs. These costs aren’t really paid by banks; they’re paid by bank customers in the form of higher fees and interest payments on loans.

There’s a way in which these bank regulations are like the drug war: The laws that Congress passes are ineffective, but all Congress can really do is pass laws, and so they pass more and more laws, imposing higher and higher costs, without ever really having any effect on terrorist finance or money laundering or drug dealers.

Riggs failed to comply with the law, and is paying a high cost. But if they had complied, spirit and letter, would the world be a better place? I don’t think so. And in that, they are very, very different from Arthur Andersen.

Small Bits of Chaos: Taxes, Orientation, Liberty, Fraudulent Licenses

Scrivner writes about the perverse nature of the AMT.


Chuck Spinney at D-N-I asks “Is America Inside Its Own OODA Loop?” The article contains some very clear writing on the meaning of orientation, and applies that idea:

He showed why the most dangerous internal state of an OODA loop occurs when the Orientation process becomes so powerful that it force fits the organism’s observations into fitting a preconceived template, even when those observations threaten the relevance of that template.


Europhobia has a great rant on the UK’s approach to liberty:

Yep, having been told by the Law Lords that the detention without trial of foreign terror suspects is illegal, Clarke has interpreted their ruling in such a way as to justify the adoption of a truly wonderfully Stalinist policy. Because, hey – what the Law Lords were obviously objecting to most of all was the discrimination, right? So if you end the discrimination it’ll all be fine!

Too bad his trackbacks are broken. ( trackback:ping="http://haloscan.com/tb/nosemonkey/<$BlogItemNumber$>)


Finally, Newsday has a long article on the fraudulent issuance of drivers licenses. (My thoughts are in my talk “Identity and Economics: Terrorism and Privacy.” Short form: as long as there’s a huge market demand for ID cards, and most of the people getting them are chasing the American dream, the market will connect buyers and sellers. If we want ID cards to be resilient, we need to reduce the demand for them.)

Ben Rothke on Best Practices

Best practices look at what everyone else is doing, crunch numbers—and come up with what everyone else is doing. Using the same method, one would conclude that best practices for nutrition mandates a diet high in fat, cholesterol and sugar, with the average male being 35 pounds overweight.

Writes Ben Rothke in a short, incisive article for eWeek. Go read it now.

Towards an Economic Analysis of Disclosure

In comments on my post yesterday, “I Am So A Dinosaur”, Ian asks “Has anyone modelled in economics terms why disclosure is better than the alternate(s)?” I believe that the answer is no, and so will give it a whack. The costs I see associated with a vulnerability discovery and disclosure, in chronological order, are:

  1. The cost borne by a researcher who finds a vulnerability. This may be the time of a student, or it could be a fiscal cost borne by a company like NGS or eEye. Laws such as DMCA drive these costs up greatly. There is a subset of this cost, which is that good disclosure costs the reporter more. Good disclosure here includes testing on a variety of platforms, figuring out workarounds, and documenting the attack thoroughly.
  2. The set of costs incurred by the software maintainers. Every time a vulnerability is discovered, someone needs to evaluate it, and decide if it is worth the expense of writing, testing, and distributing a patch.
  3. Costs distributed amongst a great many users include learning about the vulnerability, and perhaps the associated patch, deciding if it matters to them, testing it, rating the urgency of the patch versus the business risks associated with a change to operational systems. [If the users don't make that investment, or make poor decisions, there's a cost of being broken into, and recovery from the problem. (Thanks, Eric!)] Highly skilled end users may want to test a vulnerability in their environment. Full disclosure helps this testing. Good disclosure from the researcher also helps hold down costs here. Since there are lots of users of most such software, savings multiply greatly.
  4. Costs distributed amongst a smaller group of security software authors include understanding the vulnerability, building or getting exploit code, and adding functionality to their products to “handle” the vulnerability, either scanning for it, or detecting the attack signature. Where these vendors have to write their own exploit code, they will be slower to get their tool to customers. These costs are sometimes lower for vendors of closed source toolsets who can encode the information in a binary, and thus get it under NDA.
  5. Costs to one or more attackers to learn about the vulnerability; decide if they want to use it in an attack; code or improve the code for the attack; deploy the attack.
  6. Costs to academic researchers are separated here because academics are less time sensitive than security vendors. I can invent and test a tool to block buffer overflows with a 30 day old exploit as well as with completely fresh exploits. Academic researchers need high quality exploit code, but they don’t need it quickly.

I think that all responsible disclosure policies attempt to balance these costs. Some attackers don’t disclose at all; they invest in finding and using exploits, and hope that they have a long shelf life. (I started to say malicious attackers, but both government researchers and criminals fail to disclose.)

Ideally, we’d drive up attacker costs while holding down all of end-user, security vendor, and academic costs. (One of my issues with the OIS guidelines is that they give too little to the academic world. They could easily have said ‘responsible disclosure ends 90 days after a patch release with the release of exploit code and test cases.’)
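As an illustration of that balancing act, here’s a toy sketch of how the six cost categories enumerated above might move as a disclosure policy delays the public release of exploit details. Every number is invented for illustration; only the structure comes from the list.

```python
# A toy model of the stakeholder costs enumerated above. All numbers are
# invented for illustration; the point is the structure: a disclosure
# policy shifts costs between defenders (items 1-4 and 6) and attackers
# (item 5).

def total_costs(delay_days):
    """Costs (arbitrary units) as a function of the delay between a patch
    and the public release of exploit code and test cases."""
    researcher = 10                            # item 1: finding and documenting the flaw
    vendor = 40                                # item 2: triage, patch, distribution
    users = 100 - 0.5 * delay_days             # item 3: shrinks as patching time grows
    security_vendors = 20 + 0.3 * delay_days   # item 4: slower signatures without exploit code
    academics = 0.1 * delay_days               # item 6: mildly delay-tolerant
    attacker = 5 + 0.8 * delay_days            # item 5: the one we *want* to be high
    defenders = researcher + vendor + users + security_vendors + academics
    return defenders, attacker

for delay in (0, 30, 90):
    print(delay, total_costs(delay))
```

The shape, not the numbers, is the point: delay trades end-user costs against security-vendor and academic costs while raising attacker costs, which is exactly the tension any responsible disclosure policy has to resolve.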

So, Ian, I hope you’re happy–you’ve distracted me from the stock market question.

[Update: Reader Chris Walsh points to a paper, "Economic Analysis of the Market for Software Vulnerability Disclosure," which takes these ideas and does the next step of economic analysis, as well as a presentation that some of the authors gave at Econ & Infosec.]

I Am So A Dinosaur…

…and I was one before it was cool. Crit Jarvis responds to my comment that my views on disclosure have ossified by claiming that I’m evolving. The trouble is, I have documented proof it’s not true. From my homepage:

Apparent Weaknesses in the Security Dynamics Client Server Protocol. This paper was presented at the DIMACS workshop on Network Threats (http://dimacs.rutgers.edu/Workshops/Threats/program.html), and describes a substantial weakness in the Security Dynamics client server model, which was apparently fixed in versions of the software later than the ones I was working with. Security Dynamics responded to my work before publication. I’m very pleased that they will be publishing their protocols in the future. The postscript file submitted to DIMACS is available, as is an html version, but the html version is missing two diagrams.

The DIMACS workshop was Dec 4-6, 1996. I spoke to some folks about the flaw at Crypto, in Santa Barbara, that summer, and they encouraged me to publish. I spent a while talking to a lawyer about the issues, concerned that I might be sued, and pulled source code for the F2 hash from the paper. I contacted Security Dynamics only after the paper had gone to press, to make it harder for them to pressure me to pull it. It turned out that John Brainard and Vin McLellan were utter gentlemen in dealing with me, and SDI never brandished a threatening word. But in the world of vulnerability disclosure back then, I didn’t think I was being unreasonably fearful.

The landscape is somewhat different today (although Guillermito* would doubtless beg to differ, as would Niels Ferguson). Companies, by and large, seem to be responding better to security reports. (I know someone whose bug report sat at Sun and CERT for a full two years in the early 90s, despite rampant evidence of attacker use.) But my position is about the same: over time, disclosure is better than trying to sweep vulnerabilities under the rug. We should tweak to minimize the current pain that disclosure entails.

*(via Freedom To Tinker Clips)

Patterns of Conflict, Easier on the Eyes

I’ve been posting a fair bit about Boyd. Boyd wrote very little; most of his communication was in the form of briefs. At least two of you have publicly admitted to getting the slides, and, if you’re like me, struggled with the form of the presentation: a scan of a typed, hand-annotated presentation book. There’s a new Powerpoint version available, edited by Chet Richards and Chuck Spinney, and produced and designed by Ginger Richards. It’s far easier on the eyes. There are a few places where the presentation is unfortunately dense 8-point type, but that reflects the density of what Boyd wrote.

More on Do Security Breaches Matter?

In responding to a question I asked yesterday, Ian Grigg writes:

In this case, I think the market is responding to the unknown. In other words, fear. It has long been observed that once a cost is understood, it becomes factored in, and I guess that’s what is happening with DDOS and defacements/viruses/worms. But large scale breaches of confidentiality are a new thing. Previously buried, they are now surfaced, and are new and scary to the market.

I like the idea that these are new and scary. Unfortunately, we can’t tell if this matches the data. In a 2004 paper, “Effect of Internet Security Breach Announcements on Market Value of Breached Firms and Internet Security Developers”, Cavusoglu argues that the market cap drop is 2.1% within 2 days. (Unfortunately no longer online, but cited in the Camp-Lewis book.) So if Campbell et al found a 5% drop, is the market punishing companies more? Who did their research first? What was the time period studied? We can’t tell without both papers being available.

Otherwise I have a problem with a 5% drop in value. How is it that confidentiality is worth 5% of a company? If that were the case, companies like DigiCash and Freedom [Zero-Knowledge?] would have scored big time, but we know they didn’t. Confidentiality just isn’t worth that much, ITMO (in the market’s opinion).

I don’t agree with this analysis. I’ve argued elsewhere (Will People Ever Pay For Privacy?) that privacy is a hard product to sell. Confidentiality could be worth 5% of a company in a lawsuit, especially if the breach causes clear harm (as in the Amy Boyer case). I’m hard pressed to argue that the market’s response is accurate and generalizable, but I expect tort law will evolve rapidly here, and in the absence of certainty, the market will extract a risk premium.
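For scale, here’s the back-of-the-envelope arithmetic behind those percentages. The $10B market cap is a hypothetical firm of my invention; the 2.1% and 5% figures are the ones from Cavusoglu and Campbell et al. discussed above.

```python
# Back-of-the-envelope: what the two reported drops mean in dollars for a
# hypothetical breached firm. The $10B market cap is invented; 2.1%
# (Cavusoglu, 2-day window) and 5% (Campbell et al.) are the cited figures.

market_cap = 10_000_000_000  # hypothetical firm, $10B

for study, drop in [("Cavusoglu (2-day)", 0.021), ("Campbell et al.", 0.05)]:
    print(f"{study}: ${market_cap * drop:,.0f} of market value lost")
```

Numbers that size are why the question of whether the drop persists, or is a transient fear response, matters so much.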

Small Bits of Chaos: Blind overflows, National ID, and Looney Tunes

SecurityFocus has a new article on blind buffer overflows. I’m glad these techniques are being discussed in the open, rather than in secret.


Julian Sanchez has the perfect comment on Congressman Dreier’s new national ID plan, at Hit & Run.


And finally, don’t visit this Looney Tunes site if you’re busy. (Via Steven Horowitz at Liberty and Power).