Rebuilding the internet?

Once upon a time, I was uunet!harvard!bwnmr4!adam. Oh, harvard was probably enough, it was a pretty well-known host in the uucp network which carried our email before SMTP. I was also harvard!bwnmr4!postmaster, which meant that at the end of an era, I moved the lab from copied hosts files to DNS, when I became adam@bwnmr4.harvard…wow, there’s still a CNAME for that host. But I digress.


Really, I wanted to talk about a report, passed on by Steven Johnson and Gunnar Peterson, that Vint Cerf said that if he were re-designing the internet, he’d add more authentication.

And really, while I respect Vint a tremendous amount, I’m forced to wonder: Whatchyou talkin’ about Vint?


I hate going off based on a report on Twitter, but I don’t know what the heck a guy that smart could have meant. I mean, he knows that back in the day, people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn’t get us in too much trouble. (Hi S! Hi C!) So when he says “more authentication” does that mean inserting “uunet!harvard!bwnmr4!adam” in an IP header? Ensuring your fingerd was patched after Mr. Morris played his little stunt?


But more to the point, authentication is a cost. Setting up and managing authentication information isn’t easy, and even if it were, it certainly isn’t free. Even more expensive than managing the authentication information would be figuring out how to do it. The packet interconnect paper (“A Protocol for Packet Network Intercommunication,” Vint Cerf and Robert Kahn) was published in 1974, and says “These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate.” That was before DES (1975), before Diffie-Hellman (1976), Needham-Schroeder (1978) or RSA. I can’t see how to maintain that principle with the technology available at the time.

When setting up a new technology, low cost of entry was a competitive advantage. Doing authentication well is tremendously expensive. I might go so far as to argue that we don’t know how fantastically expensive it is, because we so rarely do it well.

Not getting hung up on easy problems like prioritization or hard ones like authentication, but simply moving packets, was what made the internet work. Allowing new associations to be formed, ad-hoc, made for cheap interconnections.

So I remain confused by what he could have meant.

[Update: Vint was kind enough to respond in the comments that he meant the internet of today.]

Color on Chrome OS

New things resemble old things at first. Moreover, people interpret new things in terms of old things. So it is with the new Google Chrome OS. Very little of what I’ve seen written about it seems to understand it.

The main stream of commentary compares Chrome OS to Windows, concludes that this means Google is in the OS business, and so on. This is also the stream that gets it the most wrong.

It’s just another Linux distribution, guys. It’s not like this is a new OS. It’s new packaging of existing software, with very little or even no new software. I have about ten smart friends who could do this in their sleep. Admittedly, a handful of those are actually working on the Chrome OS, so that somewhat weakens my comment. Nonetheless, you probably know someone who could do it, is doing it, or you’re one of the people who could do it.

Moreover, Chrome OS isn’t an OS in the way you think about it. Google isn’t going to provide any feature on Chrome OS that they aren’t going to provide on Windows, Mac OS, Ubuntu, Android, Windows Mobile, iPhone, Palm Pre, Blackberry, and so on.

Consider the differences between the business model of Microsoft and that of Google. Microsoft believes that it should be the only software company there is. Its historic mission statement was to push its software everywhere. That mission does not include “to the exclusion of everyone else”; Microsoft merely often acts that way. Google’s mission is to have you use its services that provide information.

To phrase this another way, Microsoft gets paid when you buy Windows or Office or an Xbox, etc. Their being paid does not require that you not run Mac OS, or Lotus, or a PlayStation, but that helps. Google gets paid when you click on certain links. It doesn’t matter how you clicked on that link; all that matters is that you click. Google facilitates that clicking through its information business, which is in turn facilitated by its software and services, but it’s those clicks that get them paid.

The key difference is this: Microsoft is helped by narrowing your choices, and Google is helped by broadening them. It doesn’t help Microsoft for you to do a mashup that includes their software, as that means less Microsoft Everywhere, but it helps Google if you include a map in your mashup, as there’s a chance a paid link will get clicked (the chance may be small, but it’s zero if you don’t).

I don’t know whether it’s cause or effect but Microsoft really can’t stand to see someone else be successful. It’s a zero-ish sum company in product and outlook. Someone else’s success vaguely means that they’re doing something non-Microsoft. Google, in contrast, is helped by other people doing stuff, so long as they use Google’s services too.

If I shop for a new camera, for example, the odds are that Google will profit even if I buy it on eBay and pay for it with PayPal. Or if I buy it from B&H, Amazon, etc. So long as I am using Google to gather information, Google makes money.

Let me give another, more pointed example. Suppose I want to get a new smartphone. Apple wins only if I get an iPhone. RIM wins when I get a BlackBerry. Palm wins if I get a Pre or a Treo. Nokia wins a little if I get any Symbian phone (most of which are Nokias, but a few aren’t). Microsoft wins if I get any Windows Mobile phone, of which there are many. But Google wins not only if I get an Android phone, but also if I get an iPhone (because the built-in Maps application uses Google), or if I install Google Maps on anything. One could even argue that it wins more if I get a non-Android phone and use their apps, because the margins on that income are higher.

This openness as a business model is why Microsoft created Bing. Partially it is because Microsoft can’t stand to see Google be successful, but also because Microsoft envies the way Google can win even when it loses, and who wouldn’t?

Interestingly, Bing is pretty good, too. One can complain, but one can always complain. Credible people even give Bing higher marks than Google. This puts Microsoft in the interesting position of being where Apple traditionally is with Microsoft. They’re going to learn that you can’t take customers from someone else just by being better.

But this is the whole reason for Chrome OS. Chrome OS isn’t going to make any money for Google. But it does let Google shoot at Microsoft where they live. When (not if, when) Chrome OS is an option on netbooks, it will cost Microsoft. Either directly, because someone picks Chrome OS over Windows, or indirectly, because Microsoft is going to have to compete with free. The netbook manufacturers are going to be only too happy to use Chrome as a club against Microsoft to get better pricing on Windows. The winner there is not going to be Google; it’s going to be the people who make and buy netbooks, especially the ones who get Windows. The existence of Chrome OS will save money for the people who buy Windows.

That’s gotta hurt, if you’re Microsoft.

This is the way to look at Chrome OS. It’s Google’s statement that if Microsoft treads into Google’s yard, Google will tread back, and will do so in a way that does not so much help Google, but hurts Microsoft. It is a counterattack against Microsoft’s core business model that is also a judo move; it uses the weight of Microsoft against it. As Microsoft moves to compete against Google’s services by making a cloud version of Office, Google moves to cut at the base. When (not if) there are customers who use Microsoft apps on Google’s OS, Microsoft is cut twice by the very forces that make Google win when you use a Google service on Windows.

(Also, if you’re Microsoft you could argue that Google has been stepping on their toes with Google Docs, GMail, etc.)

Someday someone’s going to give Ballmer an aneurysm, and it might be Chrome.

Building Security In, Maturely

While I was running around between the Berkeley Data Breaches conference and SOURCE Boston, Gary McGraw and Brian Chess were releasing the Building Security In Maturity Model.


Lots has been said, so I’d just like to quote one little bit:

One could build a maturity model for software security theoretically (by pondering what organizations should do) or one could build a maturity model by understanding what a set of distinct organizations have already done successfully. The latter approach is both scientific and grounded in the real world, and is the one we followed.

It’s long, but an easy and worthwhile read if you’re thinking of putting together or improving your software security practice.

Incidentally, my boss also commented on our work blog “Building Security In Maturity Model on the SDL Blog.”

SDL Threat Modeling Tool 3.1.4 ships!

On my work blog, I wrote:

We’re pleased to announce version 3.1.4 of the SDL Threat Modeling Tool. A big thanks to all our beta testers who reported issues in the forum!

In this release, we fixed many bugs, learned that we needed a little more flexibility in how we handled bug tracking systems (we’ve added an “issue type” at the bug tracking system level) and updated the template format. You can read more about the tool at the Microsoft SDL Threat Modeling Tool page, or just download 3.1.4.

Unfortunately, we have no effective mitigation for the threat of bad π jokes.

I’m really excited about this release. This is solid software that you can use to analyze all sorts of designs.

Let’s Fix Paste!

[Image: copy-paste.jpg]

Okay, this is a rant.

Cut and paste is broken in most apps today. More specifically, it is paste that is broken. There are two choices in just about every application: “Paste” and “Paste correctly.” Sometimes the latter one is labeled “Paste and Match Style” (Apple) and sometimes “Paste Special” (Microsoft).

However, they have it backwards. Usually, what you want to do is the latter, which is why I called it “paste correctly.” It is the exception that you want to preserve the fonts, formatting, etc. Usually, you want to just paste the damned text in.

I mean, Jesus Hussein Christ, how hard is it to understand that when I go to a web page and copy something and then paste it into my document, I want to use MY fonts, formatting, color, and so on? Even if I do want to preserve those, I ESPECIALLY do not want you to leave my cursor sitting at the end of the paste, switched out of whatever settings I’m using. On the rare occasion that I want paste as it is done today, the keys I type are:

modifier-V              ! Paste (modifier is either (ironically) command or control)
start typing            ! Continue on my merry way
modifier-Z              ! Oh, crap, I'm no longer in my font,
modifier-Z              ! I'm in Web2.0Nerd Grotesque 10 light-grey
                        ! undo the typing and the paste
space, back-arrow       ! Get some room
modifier-V              ! Paste
forward-arrow           ! Get back to my formatting
(delete)                ! Optionally delete the space
start typing again      ! Now where was I? Oh, yeah....

Note the extra flourish at the end because pasting is so helpful.

The usual sequence I type is:

modifier-V              ! Paste
modifier-Z              ! %$#@*!
search Edit menu        ! Gawd, where is it, what do they call it?
select Paste Correctly  ! Oh, there
start typing again      ! Now where was I? Oh, yeah....

This is much simpler, but has its own headaches. First of all, Microsoft binds their “Paste Special” to control-alt-V and brings up a modal dialog, because there are lots of options you could conceivably want, and just wanting to paste the %$#@&* text is so, so special. Apple (whose devs must long for the Knight keyboard) binds it to command-option-shift-V, but at least doesn’t make me deal with Clippy’s dumber cousin. They put “Paste Style” on command-option-V, which pastes into place only the formatting. Oh, yeah, like I do that so often I need a keyboard shortcut.

The upshot is that the user experience is so bad that the stupid blog editor I’m using, which actually makes me type in my own <p> tags, is a more predictable editing experience. I can actually achieve flow while I’m writing.

Most tellingly, the most even, consistent, out-of-my-way editing experience is getting to be LaTeX! Yeah, I have to type accents by hand, but at least I don’t lose my train of thought every time I paste.

The solution is simple. Make modifier-V be paste. Just plain old paste. Put paste-with-frosting on control-meta-cokebottle-V and give it a helpful dialog box. Please?

Photo by adam.coulombe.

Gary McGraw and Steve Lipner

Gary McGraw has a new podcast, “Reality Check” about software security practitioners. The first episode features Steve Lipner. It’s some good insight into how Microsoft is approaching software security.

I’d say more, but as Steve says two or three good things about my threat modeling tool, you might think it some form of conspiracy.

You should go listen.

Cryptol Language for Cryptography

Galois has announced Cryptol:

Cryptol is a domain specific language for the design, implementation and verification of cryptographic algorithms, developed over the past decade by Galois for the United States National Security Agency. It has been used successfully in a number of projects, and is also in use at Rockwell Collins, Inc.


Cryptol allows a cryptographer to:

  • Create a reference specification and associated formal model.
  • Quickly refine the specification, in Cryptol, to one or more implementations, trading off space, time, and other performance metrics.
  • Compile the implementation for multiple targets, including: C/C++, Haskell, and VHDL/Verilog.
  • Equivalence check an implementation against the reference specification, including implementations not produced by Cryptol.

The trial version & docs are here.

First, I think this is really cool. I like domain specific languages, and crypto is hard. I really like equivalence checking between models and code. I had some questions, which I’m not yet able to answer, partly because the trial version doesn’t include the code generation bits, and partly because I’m trying to vacation a little.

My main question came from the manual, which states: “Cryptol has a very flexible notion of the size of data.” (page 11, section 2.5) I’d paste a longer quote, but the PDF doesn’t seem to encode spaces well, which is ironic, because what I was interested in is “does the generated code defend against stack overflows well?” In light of the ability to “[trade] off space, time [etc],” I worry that there is a set of options which translate, transparently, into something bad in C.
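
To make that worry concrete, here is a hypothetical sketch (not actual Cryptol output; the names and the fixed key size are made up) of the kind of generated C that a flexible notion of data size could produce if the size chosen at generation time and the size assumed by a caller ever disagree:

#include <stdint.h>
#include <string.h>

/* Hypothetical sketch, NOT actual Cryptol output: a key-loading routine
 * whose key size was a flexible parameter in the specification, frozen
 * here into a constant at code-generation time. */
#define KEY_BYTES 16

void load_key(uint8_t schedule[KEY_BYTES], const uint8_t *key, size_t key_len)
{
    uint8_t local[KEY_BYTES];      /* fixed-size stack buffer            */
    memcpy(local, key, key_len);   /* smashes the stack if key_len > KEY_BYTES */
    /* ... expand local[] into schedule ... */
    memcpy(schedule, local, KEY_BYTES);
}

A defensive generator would check key_len against the buffer size (or derive both from a single authoritative declaration) before copying; my question is whether Cryptol’s back end does.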

I worry about this because as important as crypto is, cryptographers have a lot to consider as they design algorithms and systems. As Michael Howard pointed out, the Tokeneer system shipped with a library that may be from 2001, with 23 possible vulns. It was secure for a set of requirements, and if the requirements for Cryptol don’t contain “resist bad input,” then a lot of systems will be in trouble.

You versus SaaS: Who can secure your data?

In “Cloud Providers Are Better At Securing Your Data Than You Are…” Chris Hoff presents the idea that it’s foolish to think that a cloud computing provider is going to secure your data better. I think there are some complex tradeoffs to be made. Since I sort of recoiled at the idea, let me start with the cons:

  1. The cloud vendor doesn’t understand your assets or your business. They may have an understanding of your data or your data classification. They may have a commitment to various SLAs, but they don’t have an understanding of what’s really an asset or what really matters to your business in the way you do. If you believe that IT doesn’t matter, then this doesn’t matter either.
  2. The cloud vendor doesn’t have to admit a problem. They can screw up and let your data out to the world, and they don’t have to tell you. They can sweep it under the rug.

In the middle, slightly con:
It’s hard to evaluate the security of a cloud vendor. Do you really think a SAS-70 is enough? (Would you tell your CEO, “we passed our SAS-70, nothing to worry about”?) This raises the transaction costs, but that may be balanced by the first pro:

  1. Cloud vendors involve a risk transfer for CIOs. A CIO can write a contract that generates some level of risk transfer for the organization, and more for the CIO. “Sorry, wasn’t me, the vendor failed to perform. I got a huge refund on the cost of operations!”
  2. Cloud vendors have economies of scale. Both in acquiring and operating the data center, a cloud vendor can bring the economies of scale of operating a few warehouses, rather than a few racks. They can create great operational software to keep costs down, and that software can include patch rollout and rollback, as well as tracking and managing changes, cutting overall MTTR (mean time to repair) for security and other failures.
  3. Cloud vendors could exploit signaling to overcome concerns that they’re misrepresenting their security state. If a cloud vendor contracted to publish all their security tickets some interval after closing them, then a prospective customer could compare their own security issues to those of the cloud vendor. Such a promise would indicate confidence in their security stance, and over time, it would allow others to evaluate them.

That last is perhaps a radical view, and I’d like to remind everyone that I’m speaking for the President-Elect and his commitment to transparency, not for my employer.

Ephemeral Anniversary

Yesterday, Nov 17, was the sesquicentenary of the zero-date of the American Ephemeris. I meant to write yesterday, but got distracted. The astronomical ephemeris counts forward from this date.

That particular date was picked because it was (approximately) Julian Day 2,400,000, but given calendar shifts and all, one could argue for other zero dates as well. The important thing is to pick one.

There are some who think that this would be a better zero date for computer timekeeping than what most of us use presently. It has the advantages that almost all of the Julian/Gregorian calendar transitions come before it (Russia being the major exception), and it is far enough back that nearly all time-and-date calculations you need can be done with quick math, just adding and subtracting. And it has a nice scientific tie-in.
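
As a sketch of how simple that arithmetic gets (my example, not from the original post), here is the conversion from Unix time to days past the 1858 zero date; the only constant needed is that 1 Jan 1970 falls on day 40,587 of that count:

#include <stdio.h>
#include <time.h>

/* Days since 17 Nov 1858 (the Modified Julian Day), from Unix time,
 * using nothing but addition and division: 1 Jan 1970 is day 40587. */
static double mjd_from_unix(time_t t)
{
    return 40587.0 + (double)t / 86400.0;
}

int main(void)
{
    printf("Unix epoch: day %.1f\n", mjd_from_unix((time_t)0));
    printf("Right now:  day %.5f\n", mjd_from_unix(time(NULL)));
    return 0;
}

Because nearly every date you care about comes after the zero date, the day counts stay small and positive, and differences between dates are plain subtraction.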

Other common zero-dates are 1 Jan 1904 (picked because, starting there, you can calculate all the way to 2100 assuming that leap years come every four years), and 1 Jan 1970 (picked because this was the last day that The Beatles recorded music in Abbey Road studios — actually, their last date was Jan 4, but close enough).