Please check back, we may have sarcasm available in the future.
Emergent Chaos apologizes for any inconvenience.
In “U-Prove Minimal Disclosure availability,” Kim Cameron says:
This blog is about technology issues, problems, plans for the future, speculative possibilities, long term ideas – all things that should make any self-respecting product marketer with concrete goals and metrics run for the hills! But today, just for once, I’m going to pick up an actual Microsoft press release and lay it on you. The reason? Microsoft has just done something very special, and the fact that the announcement was a key part of the RSA Conference Keynote is itself important.
Further, Charney explained that identity solutions that provide more secure and private access to both on-site and cloud applications are key to enabling a safer, more trusted enterprise and Internet. As part of that effort, Microsoft today released a community technology preview of the U-Prove technology, which enables online providers to better protect privacy and enhance security through the minimal disclosure of information in online transactions. To encourage broad community evaluation and input, Microsoft announced it is providing core portions of the U-Prove intellectual property under the Open Specification Promise, as well as releasing open source software development kits in C# and Java editions. Charney encouraged the industry, developers and IT professionals to develop identity solutions that help protect individual privacy.
Kim then goes on to analyze the announcement, which is a heck of an important one.
Disclaimer: I work for Microsoft, and am friends with many of the people involved. I still think this is tremendously important.
Once upon a time, I was uunet!harvard!bwnmr4!adam. Oh, harvard was probably enough; it was a pretty well-known host in the UUCP network, which carried our email before SMTP. I was also harvard!bwnmr4!postmaster, which meant that at the end of an era, I moved the lab from copied hosts files to DNS, when I became email@example.com… wow, there’s still a CNAME for that host. But I digress.
And really, while I respect Vint a tremendous amount, I’m forced to wonder: Whatchyou talkin’ about Vint?
I hate going off based on a report on Twitter, but I don’t know what the heck a guy that smart could have meant. I mean, he knows that back in the day, people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn’t get us in too much trouble. (Hi S! Hi C!) So when he says “more authentication” does that mean inserting “uunet!harvard!bwnmr4!adam” in an IP header? Ensuring your fingerd was patched after Mr. Morris played his little stunt?
But more to the point, authentication is a cost. Setting up and managing authentication information isn’t easy, and even if it were, it certainly isn’t free. Even more expensive than managing the authentication information would be figuring out how to do it. The packet interconnect paper (“A Protocol for Packet Network Intercommunication,” Vint Cerf and Robert Kahn) was published in 1974, and says “These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate.” That was before DES (1975), before Diffie-Hellman (1976), Needham-Schroeder (1978) or RSA. I can’t see how to maintain that principle with the technology available at the time.
When setting up a new technology, low cost of entry was a competitive advantage. Doing authentication well is tremendously expensive. I might go so far as to argue that we don’t know how fantastically expensive it is, because we so rarely do it well.
Not getting hung up in easy problems like prioritization or hard ones like authentication, but simply moving packets was what made the internet work. Allowing new associations to be formed, ad-hoc, made for cheap interconnections.
So I remain confused by what he could have meant.
[Update: Vint was kind enough to respond in the comments that he meant the internet of today.]
New things resemble old things at first, and people interpret new things in terms of old things. So it is with the new Google Chrome OS. Very little of what I’ve seen written about it seems to understand it.
The main stream of commentary is comparisons to Windows and how this means that Google is in the OS business, and so on. This is also the stream that gets it the most wrong.
It’s just another Linux distribution, guys. It’s not like this is a new OS. It’s new packaging of existing software, with very little or even no new software. I have about ten smart friends who could do this in their sleep. Admittedly, a handful of those are actually working on the Chrome OS, so that somewhat weakens my comment. Nonetheless, you probably know someone who could do it, is doing it, or you’re one of the people who could do it.
Moreover, Chrome OS isn’t an OS in the way you think about it. Google isn’t going to provide any feature on Chrome OS that they aren’t going to provide on Windows, Mac OS, Ubuntu, Android, Windows Mobile, iPhone, Palm Pre, Blackberry, and so on.
Consider the differences between the business models of Microsoft and Google. Microsoft believes that it should be the only software company there is. Its historic mission statement, after all, was to push its software everywhere. That mission does not include “to the exclusion of everyone else”; Microsoft merely often acts that way. Google’s mission is to have you use its services that provide information.
To phrase this another way, Microsoft gets paid when you buy Windows or Office or an Xbox, etc. Getting paid does not require that you not run Mac OS, or Lotus, or PlayStation, but that helps. Google gets paid when you click on certain links. It doesn’t matter how you clicked on that link; all that matters is that you click. Google facilitates that clicking through its information business, which is in turn facilitated by its software and services, but it’s those clicks that get it paid.
The key difference is this: Microsoft is helped by narrowing your choices, and Google is helped by broadening them. It doesn’t help Microsoft for you to do a mashup that includes their software as that means less Microsoft Everywhere, but it helps Google if you include a map in your mashup as there’s a chance a paid link will get clicked (no matter how small, the chance is zero if you don’t).
I don’t know whether it’s cause or effect but Microsoft really can’t stand to see someone else be successful. It’s a zero-ish sum company in product and outlook. Someone else’s success vaguely means that they’re doing something non-Microsoft. Google, in contrast, is helped by other people doing stuff, so long as they use Google’s services too.
If I shop for a new camera, for example, the odds are that Google will profit even if I buy it on eBay and pay for it with PayPal. Or if I buy it from B&H, Amazon, etc. So long as I am using Google to gather information, Google makes money.
Let me give another, more pointed example. Suppose I want to get a new smartphone. Apple wins only if I get an iPhone. RIM wins only if I get a BlackBerry. Palm wins if I get a Pre or a Treo. Nokia wins a little if I get any Symbian phone (most of which are Nokias, but a few aren’t). Microsoft wins if I get any Windows Mobile phone, of which there are many. But Google wins not only if I get an Android phone, but also if I get an iPhone (because the built-in Maps application uses Google), or if I install Google Maps on anything. One could even argue that it wins more if I get a non-Android phone and use its apps, because the margins on that income are higher.
This openness as a business model is why Microsoft created Bing. Partially it is because Microsoft can’t stand to see Google be successful, but also because Microsoft envies the way Google can win even when it loses, and who wouldn’t?
Interestingly, Bing is pretty good, too. One can complain, but one can always complain; some credible people even give Bing higher marks than Google. This puts Microsoft in the interesting position Apple has traditionally been in with respect to Microsoft. They’re going to learn that you can’t take customers from someone else just by being better.
But this is the whole reason for Chrome OS. Chrome OS isn’t going to make any money for Google, but it lets Google shoot at Microsoft where Microsoft lives. When (not if) Chrome OS is an option on netbooks, it will cost Microsoft: either directly, because someone picks Chrome OS over Windows, or indirectly, because Microsoft will have to compete with free. The netbook manufacturers will be only too happy to use Chrome OS as a club against Microsoft to get better pricing on Windows. The winner there isn’t going to be Google; it’s going to be the people who make and buy netbooks, especially the ones who get Windows. The existence of Chrome OS will save money for the people who buy Windows.
That’s gotta hurt, if you’re Microsoft.
This is the way to look at Chrome OS. It’s Google’s statement that if Microsoft treads into Google’s yard, Google will tread back, and will do so in a way that does not so much help Google, but hurts Microsoft. It is a counterattack against Microsoft’s core business model that is also a judo move; it uses the weight of Microsoft against it. As Microsoft moves to compete against Google’s services by making a cloud version of Office, Google moves to cut at the base. When (not if) there are customers who use Microsoft apps on Google’s OS, Microsoft is cut twice by the very forces that make Google win when you use a Google service on Windows.
(Also, if you’re Microsoft you could argue that Google has been stepping on their toes with Google Docs, GMail, etc.)
Someday someone’s going to give Ballmer an aneurysm, and it might be Chrome.
While I was running around between the Berkeley Data Breaches conference and SOURCE Boston, Gary McGraw and Brian Chess were releasing the Building Security In Maturity Model.
Lots has been said, so I’d just like to quote one little bit:
One could build a maturity model for software security theoretically (by pondering what organizations should do) or one could build a maturity model by understanding what a set of distinct organizations have already done successfully. The latter approach is both scientific and grounded in the real world, and is the one we followed.
It’s long, but an easy and worthwhile read if you’re thinking of putting together or improving your software security practice.
Incidentally, my boss also commented on our work blog “Building Security In Maturity Model on the SDL Blog.”
We’re pleased to announce version 3.1.4 of the SDL Threat Modeling Tool. A big thanks to all our beta testers who reported issues in the forum!
In this release, we fixed many bugs, learned that we needed a little more flexibility in how we handled bug tracking systems (we’ve added an “issue type” at the bug tracking system level) and updated the template format. You can read more about the tool at the Microsoft SDL Threat Modeling Tool page, or just download 3.1.4.
Unfortunately, we have no effective mitigation for the threat of bad π jokes.
I’m really excited about this release. This is solid software that you can use to analyze all sorts of designs.
Okay, this is a rant.
Cut and paste is broken in most apps today. More specifically, it is paste that is broken. There are two choices in just about every application: “Paste” and “Paste correctly.” Sometimes the latter one is labeled “Paste and Match Style” (Apple) and sometimes “Paste Special” (Microsoft).
However, they have it backwards. Usually what you want is the latter, which is why I called it “paste correctly.” Wanting to preserve the source’s fonts and formatting is the exception. Usually, you just want to paste the damned text in.
I mean, Jesus Hussein Christ, how hard is it to understand that when I go to a web page, copy something, and paste it into my document, I want to use MY fonts, formatting, color, and so on? Even if I did want to preserve those, I ESPECIALLY do not want you to leave my cursor sitting at the end of the paste, switched out of whatever settings I was using. On the rare occasion that I want paste as it is done today, the keys I type are:
modifier-V          ! Paste (modifier is either (ironically) command or control)
start typing        ! Continue on my merry way
modifier-Z          ! Oh, crap, I'm no longer in my font,
                    !   I'm in Web2.0Nerd Grotesque 10 light-grey
modifier-Z          ! undo the typing and the paste
space, back-arrow   ! Get some room
modifier-V          ! Paste
forward-arrow       ! Get back to my formatting
(delete)            ! Optionally delete the space
start typing again  ! Now where was I? Oh, yeah....
Note the extra flourish at the end because pasting is so helpful.
The usual sequence I type is:
modifier-V              ! Paste
modifier-Z              ! %$#@*!
search Edit menu        ! Gawd, where is it, what do they call it?
select Paste Correctly  ! Oh, there
start typing again      ! Now where was I? Oh, yeah....
This is much simpler, but has its own headaches. First of all, Microsoft binds its “Paste Special” to control-alt-V and brings up a modal dialog, because there are lots of options you could conceivably want, and just wanting to paste the %$#@&* text is so, so special. Apple (whose devs must long for the Knight keyboard) binds it to command-option-shift-V, but at least doesn’t make me deal with Clippy’s dumber cousin. They put “Paste Style” on command-option-V, which pastes only the formatting. Oh, yeah, like I do that so often I need a keyboard shortcut.
The upshot is that the user experience here is so bad that the stupid blog editor I’m using here that actually makes me type in my own <p> tags is a more predictable editing experience. I can actually achieve flow while I’m writing.
Most tellingly, the most even, consistent, out-of-my way editing experience is getting to be LaTeX! Yeah, I have to type accents by hand, but at least I don’t lose my train of thought every time I paste.
The solution is simple. Make modifier-V be paste. Just plain old paste. Put paste-with-frosting on control-meta-cokebottle-V and give it a helpful dialog box. Please?
Photo by adam.coulombe.
Gary McGraw has a new podcast, “Reality Check” about software security practitioners. The first episode features Steve Lipner. It’s some good insight into how Microsoft is approaching software security.
I’d say more, but as Steve says two or three good things about my threat modeling tool, you might think it some form of conspiracy.
You should go listen.
Cryptol is a domain specific language for the design, implementation and verification of cryptographic algorithms, developed over the past decade by Galois for the United States National Security Agency. It has been used successfully in a number of projects, and is also in use at Rockwell Collins, Inc.
Cryptol allows a cryptographer to:
- Create a reference specification and associated formal model.
- Quickly refine the specification, in Cryptol, to one or more implementations, trading off space, time, and other performance metrics.
- Compile the implementation for multiple targets, including: C/C++, Haskell, and VHDL/Verilog.
- Equivalence check an implementation against the reference specification, including implementations not produced by Cryptol.
The trial version & docs are here.
First, I think this is really cool. I like domain specific languages, and crypto is hard. I really like equivalence checking between models and code. I had some questions, which I’m not yet able to answer, because the trial version doesn’t include the code generation bits, and in part because I’m trying to vacation a little.
My main question came from the manual, which states: “Cryptol has a very flexible notion of the size of data” (page 11, section 2.5). I’d paste a longer quote, but the PDF doesn’t seem to encode spaces well. Which is ironic, because what I was interested in is: does the generated code defend against overflows well? In light of the ability to “[trade] off space, time [etc],” I worry that there is a set of options that translates, transparently, into something bad in C.
I worry about this because as important as crypto is, cryptographers have a lot to consider as they design algorithms and systems. As Michael Howard pointed out, the Tokeneer system shipped with a library that may be from 2001, with 23 possible vulns. It was secure for a set of requirements, and if the requirements for Cryptol don’t contain “resist bad input,” then a lot of systems will be in trouble.
My employer has been posting them at a prodigious rate. There’s: