CERT, Tor, and Disclosure Coordination

There’s been a lot said in security circles about a talk on Tor being pulled from Black Hat. (Tor’s comments are also worth noting.) While that story is interesting, I think the bigger story is the lack of infrastructure for disclosure coordination.

Coordinating information about vulnerabilities is a socially important function. Coordination makes it possible for software creators to create patches and distribute them so that those with the software can most easily protect themselves.

In fact, the function is so important that it was part of why CERT was founded: to “coordinate response to internet security incidents.” Now, “incidents” has always meant more than just vulnerabilities, but vulnerability coordination was a big part of what CERT did.

The trouble is, it’s not a big part anymore. [See below for a clarification added August 21.] Now “The CERT Division works closely with the Department of Homeland Security (DHS) to meet mutually set goals in areas such as data collection and mining, statistics and trend analysis, computer and network security, incident management, insider threat, software assurance, and more.” (Same “about” link as before.)

This isn’t the first time that I’ve heard about an issue where CERT wasn’t able to coordinate disclosure. I want to be clear, I’m not critiquing CERT or their funders here. They’ve set priorities and strategies in a way that makes sense to them, and as far as I know, there’s been precious little pressure to have a vuln coordination function.

It’s time we as a security community talk about the infrastructure, not as a flamewar over coordination/responsibility/don’t blow your 0day, but rather, for those who would like to coordinate, how should they do so?

Heartbleed is an example of what can happen with an interesting vulnerability and incomplete coordination. (Thanks to David Mortman for pointing that out in reviewing a draft.) Systems administrators woke up Monday morning to incomplete information, a marketing campaign, and a slew of packages that hadn’t been updated.

Disclosure coordination is hard to do. There’s a lot of project management and cross-organizational collaboration. Doing that work requires a special mix of patience and urgency, along with an unusual mix of technical skill and diplomatic communication. Those requirements mean that the people who do the work are rare and expensive. What’s more, it’s helpful to have these people seated at a relatively neutral party. (In the age of governments flooding money into cyberwar, it’s not clear if there are any truly neutral parties available. Some disclosure coordination is managed by big companies with a stake in the issue, which is helpful, but it’s hard for researchers to predict and depend upon.) These issues are magnified because those who are great at vulnerability research rarely take the time to develop those coordination skills, and so an intermediary is even more valuable.

Even setting that aside, is there anyone who’s stepping up to the plate to help researchers effectively coordinate and manage disclosure?

[Update: I had a good conversation with some people at CERT/CC, in which I learned that they still do coordination work, especially in cases where the vendor is dealing with an issue for the first time, the issue is multi-vendor, or where there’s a conflict between vendor and researcher. The best way to reach them is via their vulnerability reporting form, but if things don’t work, you can also email cert@cert.org or call their hotline (the number is on their contact us page).]

[Update 2, Dec 2014: Allen Householder has a really interesting model of “Vulnerability Coordination and Concurrency Modeling” on the CERT/CC blog, which shows some of the complexity involved, and touches on topics above. Oh, and some really neat Petri-net state models.]

Is iTunes 10.3.1 a security update?

Dear Apple,

In the software update, you tell us that we should see http://support.apple.com/kb/HT1222 for the security content of this update:


However, on visiting http://support.apple.com/kb/HT1222, and searching for “10.3”, the phrase doesn’t appear. Does that imply that there’s no security content? Does it mean there is security content but you’re not telling us about it?

Really, I don’t feel like thinking about the latest terms of service today if I don’t have to. I’d prefer not to get your latest features which let you sell more and bundle in your latest ideas about what a music player ought to do. But I’m scared. And so I’d like to ask: Is there security content in iTunes 10.3.1?

Who Watches the FUD Watcher?

In this week’s CSO Online, Bill Brenner writes about the recent breaks at Kaspersky Labs and F-Secure. You can tell his opinion from the title alone, “Security Vendor Breach Fallout Justified” in his ironically named “FUD watch” column.

Brenner watches the FUD even as he spreads it. He moans histrionically,

When security is your company’s business, even the smallest breach is worthy of scorn. If you can’t keep the bad guys out of your own database, how can customers reasonably expect that you’ll keep theirs safe?

Oh, please. Spare us the gotcha. Let me toss something back at Brenner. In the quote above, he says, “theirs” but probably meant to say “them.” The antecedent of “theirs” is database, and Kaspersky isn’t strictly a database security company, but an anti-virus company. “Them” is a much better turn of phrase, and I hope what he meant to say. How can we possibly trust CSO Online as a supplier of security knowledge when they can’t even compose a simple paragraph? And how can we trust their own tagline:

Senior Editor Bill Brenner scours the Internet in search of FUD – overhyped security threats that ultimately have little impact on a CSO’s daily routine. The goal: help security decision makers separate the hot air from genuine action items.

Why is FUD Watch creating the very sort of FUD it claims to watch? Who watches the FUD watchers? I do, I suppose.

Is my criticism unfair and picayune? Yup.

People make mistakes, even Kaspersky and F-Secure. And heck, even CSO Online. I forgive you.

Brenner came very close to writing the article that should have been written. If even the likes of Kaspersky and F-Secure fall victim to stupid things like SQL injection, what does that say about the state of web programming tools? How can mere mortals be safe if they can’t?

The drama about these breaks is FUD. It shows that no one is immune. It shows that merely being good at what you do isn’t good enough. It means that people need to test, verify, buy Adam’s book, read it, and act on it.

The correct lesson is not schadenfreude, but humility. There but for the grace of God, go all of us.

In the land of the blind..

“PCI DSS Position on Patching May Be Unjustified”:

Verizon Business recently posted an excellent article on their blog about security patching. As someone who just read The New School of Information Security (an important book that all information security professionals should read), I thought it was refreshing to see someone take an evidence-based approach to information security controls.

First, thanks Jeff! Second, I was excited by the Verizon report precisely because of what’s now starting to happen. I wrote “Verizon has just catapulted themselves into position as a player who can shape security. That’s because of their willingness to provide data.” Jeff is now using that data to test the PCI standard, and finds that some of its best practices don’t make as much sense as the authors of PCI-DSS might have thought.

That’s the good. Verizon gets credibility because Jeff relies on their numbers to make a point. And in this case, I think that Jeff is spot on.

I did want to address something else relating to patching in the Verizon report. Russ Cooper wrote in “Patching Conundrum” on the Verizon Security Blog:

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates.

The trouble with this is that the assessment of patching is done by

…[interviewing] the key person responsible for internal security (CSO) in just over 300 companies for which we had already established a multi-year data breach and malcode history. We asked the CSO to rate how well each of dozens of countermeasures were actually deployed in his or her enterprise on a 0 to 5 scale. A score of “zero” meant that the countermeasure was not in use. A score of “5” meant that the countermeasure was deployed and managed “the best that the CSO could imagine it being deployed in any similar company in the world.” A score of “3” represented what the CSO considered an average deployment of that particular countermeasure.

So let’s take two CSOs, analytical Alice and boastful Bob. Analytical Alice thinks that her patching program is pretty good. Her organization has strong inventory management, good change control, and rolls out patches well. She listens carefully, and most of her counterparts say similar things. So she gives herself a “3.” Boastful Bob, meanwhile, has exactly the same program in place, but thinks a lot about how hard he’s worked to get those things in place. He can’t imagine anyone having a better process ‘in the real world,’ and so gives himself a 5.

[Update 2: I want to clarify that I didn’t mean that Alice and Bob were unaware of their own state, but that they lack data about the state of many other organizations. Without that data, it’s hard for them to place themselves comparatively.]

This phenomenon doesn’t just impact CSOs. There’s some fairly famous research entitled “Unskilled and Unaware of It,” or “Why the Unskilled Are Unaware”:

Five studies demonstrated that poor performers lack insight into their shortcomings even in real world settings and when given incentives to be accurate. An additional meta-analysis showed that it was lack of insight into their errors (and not mistaken assessments of their peers) that led to overly optimistic social comparison estimates among poor performers.

Now, the Verizon study could have overcome this by carefully defining what a 1-5 meant for patching. Did it? We don’t actually know. To be perfectly fair, there’s not enough information in the report to make a call on that. I hope that they’ll make that more clear in the future.

Candidly, though, I don’t want to get wrapped around the axle on this question. The Verizon study (as Jeff Lowder points out) gives us enough data to take on questions which have been opaque. That’s a huge step forward, and in the land of the blind, it’s impressive what a one-eyed man can accomplish. I’m hopeful that as they’ve opened up, we’ll have more and more data, and more critiques of that data. It’s how science advances, and despite some misgivings about the report, I’m really excited by what it allows us to see.

Photo: “In the land of the blind, the one eyed are king” by nandOOnline, and thanks to Arthur for finding it.

[Updated: cleaned up the transition between the halves of the post.]

The Difference Between Knowledge and Wisdom


If you haven’t heard about this, you need to. All Debian-based Linux systems, including Ubuntu, have a horrible problem in their crypto. This is so important that if you have a Debian-based system, stop reading this and go fix it, then come back to finish reading. In fact, unless you know you’re safe, I’d take a look at updating your system anyway.

The problem is that they “fixed” the random number generator so that it doesn’t generate random numbers, but a semi-fixed stream of pseudo-random bytes.

A friend of a friend is now working on generating the whole set of possible keys, and will release them to the world here. (Agree or not with this, but remember that the bad guys have them by now.)

Ben Laurie has written about it in gory detail here and here. If you want a summary, this problem comes about because the OpenSSL random number generator does some things that are unconventional, but not wrong. The unconventional coding was flagged by a code-analysis tool, and a Debian person removed it. That change made all randomness vanish from the random number generator.
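To see why someone can enumerate the whole set of possible keys, it helps to sketch the failure. What follows is a toy model I made up, not the actual OpenSSL code: once the change was made, the only entropy still feeding the pool was the process ID, so the generator’s entire output became a function of a small integer.

```c
#include <stdlib.h>

/* Toy model of the Debian bug -- NOT the real OpenSSL code. After the
 * offending lines were removed, the only entropy mixed into the pool
 * was the process ID, so every "random" key is a pure function of it. */
unsigned int weak_key(unsigned int pid)
{
    srand(pid);                    /* stand-in for the gutted entropy pool */
    return (unsigned int)rand();   /* stand-in for key generation */
}

/* Linux PIDs default to the range 1..32767, so an attacker can simply
 * precompute weak_key(1) through weak_key(32767) and hold every key the
 * broken generator will ever produce. */
```

That’s why “generate them all and publish the list” is feasible at all: the keyspace isn’t 2^1024, it’s a few tens of thousands of values per key type.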

Plenty of people have debated the whole thing. For example, there are those who say the Debian developer was an idiot, and those who say that the folks who did unconventional things were idiots.

I think that this is the sort of expected failure that happens in complex systems. I am reminded of code optimizers that see that a programmer clears a variable and then doesn’t use it, so they optimize out the clearing, not realizing that the clearing was erasing keys or passwords or whatever.
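A minimal sketch of that optimizer failure in C (the function names are mine, and whether the store actually gets dropped depends on the compiler, flags, and inlining):

```c
#include <stddef.h>
#include <string.h>

/* Dead-store hazard: after this memset the buffer is never read again,
 * so an optimizing compiler is allowed to delete the call entirely,
 * leaving the secret sitting in memory. */
static void scrub_naive(char *secret, size_t n)
{
    memset(secret, 0, n);
}

/* Writes through a volatile pointer cannot be elided, so the buffer
 * really is cleared. (Platform calls like explicit_bzero and
 * SecureZeroMemory exist for exactly this reason.) */
static void scrub_for_real(char *secret, size_t n)
{
    volatile char *p = secret;
    while (n--)
        *p++ = 0;
}
```

The Debian change and the optimized-away memset are the same story: a tool (or person) correctly observes that code “does nothing” by its local rules, and removes it without knowing why it was there.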

I’ll add in that what leapt out at me was that the unconventional coding had an excessively vague comment noting that the analysis tool wouldn’t like it. It would have been much better to have an over-the-top comment.

I was once notorious for a comment I had in some extremely hairy code that said something akin to:

This code is delicate. Don’t modify it unless you understand it. If you think you understand it, you don’t. I wrote it and I don’t understand it.

That’s what I meant by an over-the-top comment. I wanted the poor person who maintained my code to think three times. When you do something unconventional, you need to point out to the other developers in the ecosystem that you did what you did intentionally.

And for those of you who read the whole of this article before patching — shoo. Go. Install that update. Now.

Photo “Random # 15 MSH” by Saffanna.

Microsoft Has Trouble Programming the Intel Architecture


Microsoft Office 2008 for the Macintosh is out, and as there is in any software release from anyone there’s a lot of whining from people who don’t like change. (This is not a criticism of those people; I am often in their ranks.)

Most of the whining comes because Office 2008 does not include Visual Basic. In some respects, this is a welcome change, because Office never should have had Visual Basic. VBA is what enabled the Macro Virus. Furthermore, Office 2009 (for Windows) is not going to have VBA, either.

However, not shipping VBA in Office 2008 means that people who want to have cross-platform documents that are pseudo-applications have to deal with it in 2008, not 2009. That’s worth complaining about.

The reason, according to El Reg is blink-inducing:

Microsoft argued that the technical problems involved in porting Visual Basic at the same time as revamping Mac Office to work on Apple’s Intel platform would have meant further delays.

I have demonstrated the absurdity of that argument in my headline. Please, I’m a technologist. I can imagine the real reasons. It was a pain in the butt; it would have required hiring another person or two; it seemed futile to port it when Office 2009 is going to get rid of it. I understand. Don’t insult my intelligence. Don’t lie to me.

The truth is that you didn’t want to, because it would suck. And what are we customers going to do, anyway? So that means you don’t have to do it because you don’t want to.

OpenOffice sucks. No, really, it does. I have co-workers that use it and watching them always brings a smile of schadenfreude to my lips. When trying to bend Word or PowerPoint to my will makes me want to put my fist through the screen, nothing makes me feel better faster than strolling into someone’s office and saying, “I dunno, maybe I ought to switch. How do you do XXXX in OpenOffice?” It’s cruel; it is the equivalent of seeking out someone with no feet because you have no shoes. But hey. I admit and argue the necessity of using Office, but I am Mordaxus, not Pangloss.

Pages is cute and nice for new work, but people don’t send me Pages documents, they send me Word documents. Keynote rocks — it got Al Gore both an Oscar and the Nobel Prize — but when someone says, “Would you look at this deck” it’s a ppt.

There will be those who are scrolling for the reply button to tell me that Pages and Keynote can import Office documents. They can. I still need Office, because they import Office documents, not interoperate with them.

Longer work is another issue. Over the last couple of years, I’ve become a LaTeX expert again. The irony is that I stopped doing most of my work in LaTeX because Word 3 was better for so many things. Nonetheless, nothing is as drop-dead gorgeous as a TeX document.

This weekend, a friend who writes books recommended Scrivener to me as an alternative for long documents. Scrivener is more or less a project manager for large documents. I’m going through the tutorials, which are amusing. It reminds me in other ways of the wonderful Notebook by Circus Ponies.

Nonetheless, the friend who pointed me there uses Word.

This brings us back to the matter at hand. As painful as it is for Microsoft, they are a monopoly. Not using Office is not an option. Sure, I can screw around with beautifully designed, fun to use productivity managers, but you have to use Office. (Or LaTeX.)

The plus side of being a monopoly is that you are ubiquitous, and money doesn’t do anything as plebeian as grow on trees for you. The minus side is that when a tree falls in the forest on some power lines, you hear it, and you have to fix it!

Forget duty, let’s talk self-preservation. Microsoft, if you don’t want to go the way of Western Union, AT&T, IBM, Bessemer Steel, or The Railroads, you have to at least pretend you like us, your customers. Getting rid of VBA is a great idea. It was an abomination in the first place, breaking the data/code separation that security needs. But if you’re going to can it in 2009, you have to can it in 2009, not 2008. The result is that we’re going to get more hair-pulling for another year.

Apple’s Update Strategy is Risky


On Saturday I was going to a party at an apartment building. The buzzer wasn’t working, and I took out my shiny new iPhone to call and get in. As I was dialing, a few young teenagers were coming out. They wanted to see the iPhone, and so I demo’d it in exchange for entry to the building. (Mmm, security.) As I was heading in, one of them turned back to me to say “Be careful! The updates are bricking those things!”

(Bricking is a term used to describe making your expensive electronics as useful as a brick by messing up the way it starts up.)

I found that really interesting–they hadn’t ever touched the phone, but they’d heard about and remembered the risks of patching them and wanted to share.

More than interesting, I think there’s a tremendous danger for everyone who makes software. Allow me to explain my personal analysis here. My employer may well disagree, I haven’t discussed this with them.

This update isn’t simply about annoyances like low speakerphone volume. There are ten security issues being fixed as well. Some of these (bluetooth remote code execution, cross site scripting, Javascript not turning off and javascript having access to https data) are clearly worth fixing. Unfortunately, Apple has decided to offer only a single, rolled-up update which combines all of these issues with other changes. Those other changes may brick your phone.

This fear of brickage is a particular and intense form of fear of instability. This was a major theme of a paper I wrote in 2002 with Steve Beattie, Crispin Cowan and others, “Timing the Application of Security Patches for Optimal Uptime.” In that paper, we discuss how to balance the security expert’s fear of an attack with the operations expert’s desire for system stability. Our suggestion was to weigh the odds of a patch-induced failure against the odds of an attack. We used vendor patch recall as an indicator of patch quality. Apple has denied us that, by saying “If the damage was due to use of an unauthorized software application, voiding their warranty, they should purchase a new iPhone.”
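The core idea of that paper reduces to an expected-loss comparison. Here’s a back-of-the-envelope sketch; the function is mine and every number in the usage note is illustrative, not taken from the paper:

```c
/* Back-of-the-envelope version of the patch-timing trade-off: patch now
 * if the expected cost of a patch-induced failure is lower than the
 * expected cost of being attacked while you wait and test. Returns 1
 * for "patch now", 0 for "wait". All inputs are estimates you supply
 * (patch recall history is one source for p_bad_patch). */
int patch_now(double p_bad_patch, double cost_outage,
              double p_attack_while_waiting, double cost_breach)
{
    return (p_bad_patch * cost_outage)
         < (p_attack_while_waiting * cost_breach);
}
```

For instance, a 2% chance of a $50,000 patch outage against a 0.1% chance of a $2,000,000 breach gives expected losses of $1,000 versus $2,000, so patch now; a history of recalled, flaky patches pushes p_bad_patch up and flips the answer, which is exactly why a vendor hiding its patch quality hurts everyone’s decision-making.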

This fear will leave many iPhone owners reluctant to patch. In fact, as I write this, the unbricking software rolls your phone back to version 1.0.2, effectively unbricking and unpatching it at the same time.

People remember important or emotional events more than they remember routine ones. Quick, what did you have for dinner on October 1 of last year? How about your birthday? Similarly, users remember patch failures far more than they remember patch success.

And so what Apple is doing has an important side effect: creating a fear of their patches. That fear will last long after a reasonable resolution of this incident. It will be felt and remembered not only by those who had bricked phones, but also by those kids who let me into their building.

If Apple wants to shoot themselves in the foot, that’s their business. What they’re doing is splattering other people.

There are two important lessons for anyone developing software:

  • Software has bugs, and some bugs require patches. Think about update strategies early on.
  • It would serve Apple extremely well to find ways to let their iPhone customers extend their devices in supported ways. More generally, when you have a hot product, let your customers extend it. Embrace the chaos that emerges.

Speaking of chaos that emerges, there was too much to fit into the main post, so a little more on troubles with updating, and the inexorable nature of corporate DNA, after the cut.

Image: ChieuKeung’s lego iPhone.

[Update: For all of you saying, “Don’t hack, it’s ok” in the comments, I have two responses:

  • First, you’re missing a major point of this post, which is tangentially about Apple, and all about the psychology of patching.
  • Second, Engadget says you’re wrong (in a long post):

    In an informal and totally unscientific poll here on Engadget, the number of iPhone users who had never hacked their device but wound up bricked was very similar to the number of users who did hack and brick their device.



Lrn 2 uZ ‘sed’, n00bz

The iTunes Plus music store opened up today, which sells non-DRM, 256kbit AAC recordings. In case you have missed the financial details, the new tracks are $1.29 per, but albums are still $9.99. You can upgrade your old tracks to high-quality, non-DRM, but you have to do it en masse and it’s only for the ones presently offered.

In a delightful bit of evil, you can also set up iTunes to display iTunes Plus first. This effectively gives EMI the endcap.

Ars Technica reports, in their article “Apple hides account info in DRM-free music, too,” that these tracks contain your account name and email address. They say,

With great power comes great responsibility, and apparently with DRM-free music comes files embedded with identifying information. Such is the situation with Apple’s new DRM-free music: songs sold without DRM still have a user’s full name and account e-mail embedded in them, which means that dropping that new DRM-free song on your favorite P2P network could come back to bite you.

I have verified that this is correct. Apple has encoded both the account name and email address using a steganographic coding mechanism standardized in ISO 10646. Colloquially, a subset of this is often called “ASCII.”

I have also verified, however, that you can patch out this information using a variety of tools. Despite my snarky subject line, I did not use sed, I used a text editor. I happened to use one that Doesn’t Suck, but I’m sure it will work with vi or emacs, or even Notepad. I give no further instructions, though, as it’s easy to botch this if you’re not well versed in the technical arts.
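Because the mark is just the account string in plain ASCII, finding it takes nothing fancier than a substring search. A sketch, with a hypothetical buffer standing in for the file’s contents:

```c
#include <string.h>

/* The "steganography" is the account e-mail embedded as plain ASCII, so
 * a substring search over the file's bytes finds it -- this is all that
 * command-F in a text editor is doing. The buffer here is hypothetical;
 * a real scanner would read the raw .m4a bytes, and since strstr stops
 * at the first NUL byte, it would search chunk by chunk (or use a
 * length-aware search like the nonstandard memmem where available). */
const char *find_watermark(const char *file_bytes, const char *email)
{
    return strstr(file_bytes, email);
}
```

Finding the mark is the easy part; as I said above, I’m not giving instructions for modifying the file, because that’s where the botching happens.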

As I’ve noted in the past, they aren’t the only one to watermark the files. Emusic does this as well, but with a more obscure scheme. It is possible that there is some other scheme that takes more wit than typing command-F, which is all I did. It is also possible that there are side effects; all I did was play the modified file all the way through and check the info screen, which I show below.

One last bit of advice — if you’re going to put music files up on a P2P network, you cannot be too paranoid. They are out to get you. It would be folly to take any music you bought from any service and serve it up.


When Security Collides With Engineering (Responsible Disclosure Redux)

Stefan Esser announced earlier this week that he was retiring from security@php.net citing irreconcilable differences with the PHP group on how to respond to security issues within PHP.
Of particular interest is that he will be making changes to how he handles security advisories for PHP (emphasis mine):

For the ordinary PHP user this means that I will no longer hide the slow response time to security holes in my advisories. It will also mean that some of my advisories will come without patches available, because the PHP Security Response Team refused to fix them for months. It will also mean that there will be a lot more advisories about security holes in PHP.

Since Stefan has disabled comments on his post, I’ll ask here:
Stefan, are you planning on providing workarounds for the advisories that don’t yet have patches? How are you planning on balancing the need for users to know versus broader exposure of a weakness? While I fully support your desire for full disclosure, I’m curious, what is too long? Where do you draw the line now that you’ve stepped further away from the project?
[Update: And more questions after talking to Adam. Why is PHP unable or unwilling to do security to Stefan’s standards? I’d love to hear both sides of this story…
Also what does this mean for PHP users? How bad off are they anyways?]
[Image is from Stefan’s Suhosin Hardened PHP Project.]

“Faux” Disclosure

I wasn’t going to join the debate on relative merits of Dave Maynor/Johnny Cache’s disclosure of vulnerabilities in device drivers at Black Hat 2006, but Bruce Schneier’s post calling it Faux Disclosure, has annoyed me enough that I feel obliged to comment now. In particular he says:

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

I think Bruce is missing a vital point here: it is the threat of full disclosure, and the effect that disclosure will have on their customers, that forces vendors to fix problems. Full disclosure without a remedy, when a vendor is working in a timely fashion to resolve the issue, does nothing but hurt the end user. The fact of the matter is that given that patches were not yet available from the vendor, it would have been incredibly irresponsible of Maynor and Cache to disclose the exact details of the vulnerability.
That’s my take on it at least.
Clearly, the issue of when and how much to disclose is still a hugely open topic.
I know several of our readers were at Black Hat and at least one participated on the Vulnerability Disclosure Panel. What did you think of what was said there? Has your opinion changed in light of the disclosure at Black Hat of yet another Cisco vulnerability?
[Edit: Fixed broken link. Also see Brian Krebs’ interview with David Maynor]

Your Apple-Fu Is Impressive!

Yesterday, DaveG posted “When OSX Worms Attack.” It’s some good analysis of the three Apple worms:

Safari/Mail Vulnerability: Far more interesting. This is a serious vulnerability that needs to be fixed. If you are Mac user, I would at the very least uncheck ‘Open Safe Files’ in Safari preferences. I don’t understand why Apple isn’t advising people on this better. This vulnerability is public, trivial to exploit, and we are at the 7 day mark.

Just a bit over a day later, Apple ships APPLE-SA-2006-03-01, with about 21 CVE-marked vulns and two extra “security enhancements.” Some of it is confusing, for example, “Authenticated users may cause an rsync server to crash or execute arbitrary code.” I understand neither the ordering nor the lack of specificity.

“Crash” is what happens when I write exploits. “Execute arbitrary code” happens when DaveG writes exploits. So what’s happening? Is it “there’s an overflow, and we’re not sure if you can turn it into code execution, and we fixed it”? That’s ok. No, I take it back. That’s great! I don’t want to have to prove that I can execute an overflow to see it fixed. Preemptive fixing is a great plan. If that’s what’s happening, please keep it up, and then please brag about it.

(Image stolen from the F-Secure blog.)

Updating Windows Mobile Phones

Nothing we ever create, especially software, is ever perfect. One of the banes of professional systems administrators is the software update process, and the risk trade-offs it entails. Patch with a bad patch and you can crash a system; fail to patch soon enough, and you may fall to a known attack vector. The mobile phone companies have taken an innovative approach to the problem: They ignore it. It makes perfect sense to them. If a hacker bricks your phone, they’ll sell you a new one, with a new two-year lock-in.

I suspect this is frustrating to the authors of phone software, who have their own brand to consider:

Artak Abrahamyan, Technical Support Specialist from i-Mate, responded to my email requesting when they’d be releasing MSFP [Microsoft Security & Feature Pack] updates …Although I requested information about which devices would be receiving updates, he didn’t provide that information. My hope is that it will be for all of the Windows Mobile 5.0 devices that i-Mate has released so far. Looks like we’ll see this in early March. (From “Smartphone thoughts“)

This ‘innovative’ approach to preventing customers from getting at a patch has many upsides in fewer software variants running, higher assurance for the telco, and more time for worm writers to hone their attack code.

I am reasonably confident that my phone has security issues. (I’m not slamming Microsoft here–I’m reasonably confident that all software I’ve ever touched has security issues.) However, I have a phone running Windows Mobile, and so I’m aware of my failure to patch. I’d like to see a mobileupdate.microsoft.com that allowed me to bypass this telco nonsense and get patches.

[Update: While I’m talking about telco security, let me mention the idiots at Cingular, who insist that caller-id is a good way to authenticate me to my voice mail, and refuse to give me a way to add a password. If you don’t understand why that’s a bad idea, “this google search” may enlighten you. Take a look at those ads.]
(Phone box photo by Alex Segre, on Flickr.)