I’m Certifiably Wrong

So there’s some great discussion going on in the comments to “Certifiably Silly,” and I’d urge you to read them all. I wanted to respond to several, and I’ll start with Frank Hecker:

Could we take the cost issue out of this equation please … [Adam: I'm willing to set it aside, because the conversation has spiraled.]

The real questions as I see it are

1) Leaving aside the issue of cost, what are the pros and cons of introducing self-signed certificates into the current browser model of SSL?

2) If the advantages of introducing self-signed certificates into this model outweigh the disadvantages, what is the best approach (from a technical and user experience perspective) to introduce self-signed certificates into the current SSL model?

3) If there is a good technical/UX approach to introduce self-signed certificates into the current SSL model, what is the likelihood of such an approach being adopted on a universal basis (i.e., by all browser vendors), and how might this be made more likely?

I’d argue that these are the wrong questions: the real questions underlying our disagreement are probably “do certification authorities do what they’re purported to do, and (if we agree they don’t), what do we do about it?”

I think we do two things: first, we stop investing so much in them; second, we investigate the heck out of the alternatives, including persistence and organizational CAs, such as CAs run by groups like the American Bankers Association. Both of these directly contradict the CA business model, and so they’ve been stillborn.
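For concreteness, the persistence model (often called key continuity, or “trust on first use,” as SSH does it) is simple to sketch: remember the certificate a site presented on first contact, and warn only if it later changes. The minimal Python below is purely illustrative; the store and function names are my own invention, not any browser’s actual machinery:

```python
import hashlib

# A toy "persistence" (trust-on-first-use) store: maps hostname to the
# SHA-256 fingerprint of the certificate seen on first contact.
# A real client would persist this to disk, like SSH's known_hosts;
# here it is just an in-memory dict for illustration.
pinned = {}

def fingerprint(cert_der: bytes) -> str:
    """Hash the DER-encoded certificate to a stable fingerprint."""
    return hashlib.sha256(cert_der).hexdigest()

def check_cert(host: str, cert_der: bytes) -> str:
    """Return 'first-use', 'match', or 'MISMATCH' for this host/cert pair."""
    fp = fingerprint(cert_der)
    if host not in pinned:
        pinned[host] = fp  # remember it -- note that no CA is involved
        return "first-use"
    return "match" if pinned[host] == fp else "MISMATCH"
```

The point of the sketch is that the warning signal is a *change* in the certificate, not the absence of a CA signature, which is exactly the property SSH users rely on.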

I’m not going to claim that either will have a better user experience than the current SSL model, and that’s a low bar.

So I’m wrong, the issue isn’t really self-signed certs, it’s the CA model.

Another point was raised by both Frank and Andy Steingruebl about my bookmark model: it breaks PayPal. There are two ways to read that model. One is “always use bookmarks;” the other is “never click on a link in email.” I intended the first; the second is unclear, given the prevalence of webmail. Perhaps we could address this by having merchants send transactions to PayPal, so that if I choose to log in via a bookmark, I get a list of pending activity.

The final point that Andy raised is organizations with lots of web sites. A reasonable point, and one I’m not sure how to address. Part of my answer is that most of us don’t see all of those brands. I would be happy to see some of the brand profusion go away, which of course doesn’t mean it will happen. (I consulted for a bank for several years, and I can’t keep track of all the brands they present around my retirement accounts.) If I can’t keep track of them when they’re not security critical, I surely can’t keep track when they are, and it is unreasonable to expect me to.

6 thoughts on “I’m Certifiably Wrong”

  1. Adam,
    You won’t find me personally being a big proponent of the CA model. So, with that at least we can agree.
    The bigger persistent problem is user passwords and relying on users to make constant trust/security decisions. This is the underlying flaw. We’re relying on users to make trust/security decisions based on a user interface that wasn’t designed to make this easy, reliable, or secure.
    This is why something like CardSpace is such a good idea: it removes the burden of making a complicated security decision from the user.
    Hopefully my final $.02 on this :)

  2. “the real questions underlying our disagreement are probably ‘do certification authorities do what they’re purported to do, and (if we agree they don’t), what do we do about it?’”
    Well, that depends on what we think CAs are purported to do (or supposed to do). From my point of view CAs as currently constituted do two things: Via so-called “domain-validated” certificates they provide a way for sites to get basic SSL functionality working, with some protection against a certain class of DNS attacks. Via EV certificates they provide some level of independent confirmation of the corporate identity of a web site’s operator.
    Now, this seems an example of what Jim Burrows was talking about when he wrote “…before we tell [the masses] what it all means we do rather need to decide amongst ourselves…” Are the things I mentioned what we think CAs are purported to do, and we need to find another solution because they’re not doing a good job? Or is it that CAs are purported to do something else, like “enable trust” (whatever that means)? If the latter, I agree that we should be thinking more widely about what we’re really trying to accomplish, and how best to accomplish it.
    One final comment: I am not personally invested in the current commercial CA business model (as embodied by VeriSign, Go Daddy, Comodo, etc.), and I don’t think Mozilla considered generally is either. Our policies with regard to CAs are flexible enough to encompass nonprofit volunteer-run CAs like CAcert or industry-sponsored CAs like your proposed ABA-run CA. However it is fair to say that we do assume a model of CAs as third-party issuers of SSL certificates and endorsers of the information found within them, mainly because we’re trying to deal with the legacy SSL environment and with user expectations carried over from other browsers. To go beyond that model IMO requires some careful thinking about the consequences and approach.

  3. Do CAs do what they’re purported to do?
    In some cases no, as demonstrated by the case where VeriSign issued a certificate to someone masquerading as Microsoft (and later had to revoke it).
    I was slightly surprised by the more stringent certificate warning system in Firefox 3. It has the side effect of making users jump through several hoops to view a site using a self-signed cert.
    Perhaps we need more competition in the industry, to drive down prices. People would be happier to get CA-signed certificates if it didn’t cost them an arm and a leg. Being a default browser-listed CA seems like a license to print money at the moment.

  4. Andy,
    I think that there’s a general problem of “poor authentication.” Email, web sites and users are all poorly identified. Underlying that is a very fuzzy conception of what identity and authentication are. I might be able to identify you. We met at an event where you were nominally invited and authenticated. So there’s some sorts of trust chains there. Frank, on the other hand, I couldn’t identify. I don’t think we’ve ever met (although we have friends in common who could identify us to each other). For all I know, Frank is an AI working in the Googleplex. Maybe that’s ok, maybe it’s not. It’s determined by the situation. I think that trying to overlay a single infrastructure on top of “authentication” leads us into a maze of twisty little arguments, all alike.

  5. Frank,
    I think your last comment nails it: “To go beyond that model IMO requires some careful thinking about the consequences and approach.” I think what really set me off on this is all the energy being poured into an approach that I think is limited and limiting.
    I don’t think we need trust (which I think is a complex human emotional state that is hard for computer scientists to model), but rather various forms of reliance.
    That conversation (if it’s the right one) is going to require careful thinking, and I’d like to see that happening more than fiddling around the edges of the CA model.

  6. Identity and security are a mess. I’ve been dealing with them professionally on and off for more than 30 years–ever since I detected an ex-employee masquerading as a current employee on a timesharing system I ran. And the more I think about them, the more I think we haven’t really understood them.
    In security, we seem to use what I’ll call a “motte and bailey” image. There is a natural or artificial boundary that separates our territory from the wild territory beyond, and along this boundary we have erected a wall or palisade, inside which we are safe from the bad guys “out there”. Except we conduct so much commerce through the many gates in the wall, which are open so much of the time, and life within the wall is so complex and the population so high, that really, “in here” is no safer than “out there”.
    So we put walls around our houses, locks on our doors, then locks on our bedroom doors and… And we’re never safe. But we keep selling the model. You just need higher tech locks, and arrow slits or closed circuit TVs or … We do this in the physical world and we do it in the digital. And it never makes us 100% safe, and if we aren’t really really safe then we must be unsafe, in danger and danger must be avoidable.
    It’s like thinking that living beings could or should inhabit antiseptic environments. It’s just silly. You can’t live without eating biological material. You can’t reproduce without exchanging bodily fluids. Our bodies have understood this for millions of years. They have devised systems that are probabilistic, pattern recognizing, driven by rules and identifications that are “good enough”, that adapt, that are geared to the typical level of threat.
    Identity and authorization have long histories. Thinking of the authorization involved in the “identity based transaction” known as “making a deal”, we can have

    • an oral agreement
    • an oral agreement sealed with a hand shake (spit optional)
    • a written agreement
    • a signed written agreement
    • a signed and witnessed written agreement
    • a signed, witnessed and notarized written agreement

    and so on. Which we choose depends upon how well we know the other party, on trust and the reliability of identification, and on the authority that we expect to enforce the agreement.
    For some crimes and other bad behavior we rely on mere identification to act as a deterrent. In a sufficiently tight culture, merely being a member in good standing affects survival: betray the village or a fellow and you are outcast and starve. In a more civil and legalistic society, fear of identification and prosecution before the law is a deterrent. But for the fraudster and the terrorist, identification provides little deterrence. The fraudster lives by “gaming the system”; the challenge of building or stealing an identity is as much a reward as the gain it brings. The terrorist, once he has committed his act, wants to be known. If he can conceal his intent until his surprise act, identification becomes just another tool.
    And yet we are trying to combat fraud aka “identity theft” and terror with complex, impersonal systems of precise technical tools and formal identification tokens. Somehow we have forgotten that “Your papers, please.” is not an expression that has made people feel safe. And for good reason.
    When Frank wonders if CAs should ‘do something else like “enable trust” (whatever that means)?’ he cuts to the heart, I think, of the unanswered question. What is trust? How do we establish it? Or, perhaps in Adam’s terminology, “what can we rely upon?” Whatever the terminology, the subject will be rich and complex, but we need to ask what the goals are. How much safety can be achieved? How do we maximize it? How can agreements and other transactions be reliable? How reliable? How can we maximize accountability?
    A better firewall, a memorable but unguessable password, an unforgeable credential, a border fence, a better tool: none of these is any good unless we know what we are trying to accomplish, can quantify and measure or estimate it, and can evaluate the tool’s effectiveness.
    Why do I trust Firefox to trust Frank’s judgment that a CA is trustworthy enough that its certificates should be on the list of those accepted without question? I worry about that, myself. Is a formal hierarchy with roots blessed according to some organization’s policy better than a web of trust and co-signed self-issued certificates? Well, that sorta depends upon what we mean by “better”. Just what is it we’re trying to do? Fundamental questions first, much though the more technical ones fascinate me.
    Sorry for the long ramble through history and metaphor, but the philosophy major comes out in me when I find questions that 30+ years of consideration and grappling don’t answer.

Comments are closed.