Diagrams in Threat Modeling

When I think about how to threat model well, one of the most important elements is how much people need to keep in their heads: the cognitive load, if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted,” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons I’m fond of diagrams is that they let threat modelers move information out of their heads and into the diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, whether in a hobby like knitting or in the care a chef takes in preparing a fine meal.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboards

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may read as a critique of someone else’s work; it may create conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, at the cost of making things more tedious for everyone around.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?”
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.

(Of course, there are apps that help you take images from a whiteboard and improve them, for example those in “Best iOS OCR Scanning Apps,” which I’m ignoring for purposes of teasing things out a bit. Operationally, they’re probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?

Journal of Terrorism and Cyber Insurance

At the RMS blog, we learn they are “Launching a New Journal for Terrorism and Cyber Insurance:”

Natural hazard science is commonly studied at college, and to some level in the insurance industry’s further education and training courses. But this is not the case with terrorism risk. Even if insurance professionals learn about terrorism in the course of their daily business, as they move into other positions, their successors may begin with hardly any technical familiarity with terrorism risk. It is not surprising therefore that, even fifteen years after 9/11, knowledge and understanding of terrorism insurance risk modeling across the industry is still relatively low.

There is no shortage of literature on terrorism, but much has a qualitative geopolitical and international relations focus, and little is directly relevant to terrorism insurance underwriting or risk management.

This is particularly exciting as Gordon Woo was recommended to me as the person to read on insurance math in new fields. His Calculating Catastrophe is comprehensive and deep.

It will be interesting to see who they bring aboard on the cyber side to complement the very strong terrorism risk team.

What does the MS Secure Boot Issue teach us about key escrow?


No, seriously. Articles like “Microsoft Secure Boot key debacle causes security panic” and “Bungling Microsoft singlehandedly proves that golden backdoor keys are a terrible idea” draw on words in an advisory to say that this is all about golden keys and secure boot. This post is not intended to attack anyone (researchers, journalists, or Microsoft), but to address a rather inflammatory claim that’s being repeated.

Based on my read of an advisory copy (which I made because I cannot read words on an animated background (yes, I’m a grumpy old man (who uses too many parentheticals (especially when I’m sick)))), this is a nice discovery of an authorization failure.

What they found is:

The “supplemental” policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn’t know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy. The “supplemental” policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don’t contain any BCD rules either, which means that if they are loaded, you can enable testsigning.

That’s a fine discovery and a nice vuln. There are ways Microsoft might have designed this better; I’m going to leave those for another day.

Where the post goes off the rails, in my view, is this:

About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a “secure golden key” is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don’t understand still? Microsoft implemented a “secure golden key” system.[1] And the golden keys got released from MS own stupidity.[2] Now, what happens if you tell everyone to make a “secure golden key” system? [3] (Bracketed numbers added – Adam)

So, [1], no they did not. [2] No it didn’t. [3] Even a stopped clock …

You could design a system in which there’s a master key, and accidentally release that key. Based on the advisory, Microsoft has not done that. (I have not talked to anyone at MS about this issue; I might have talked to people about the overall design, but don’t recall having done so.) What this is is an authorization system with a design flaw. As far as I can tell, no keys have been released.
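
To make that distinction concrete, here’s a deliberately simplified sketch of the missing check. This is hypothetical Python, not Microsoft’s code, and every name in it is invented; the point is only that the signature verification works exactly as designed, while the authorization question (“should a supplemental policy be honored on its own?”) never gets asked:

# Hypothetical sketch of the authorization gap; not Microsoft's code.
def verify_signature(policy, signature, ms_key):
    # Stand-in for real signature verification; assume it returns True for
    # any policy Microsoft actually signed, supplemental or not.
    return signature == "signed-by-" + ms_key

def load_policy_legacy(policy, signature, ms_key):
    # What an older bootmgr does: check that the policy is signed...
    if not verify_signature(policy, signature, ms_key):
        raise ValueError("policy not signed by a trusted key")
    # ...but never ask whether this is a supplemental policy that was only
    # meant to be merged into a base policy. It carries no DeviceID and no
    # BCD rules, yet its settings are applied as-is.
    return {"testsigning": policy.get("testsigning", False)}

supplemental = {"kind": "supplemental", "testsigning": True}  # no DeviceID, no BCD rules
print(load_policy_legacy(supplemental, "signed-by-ms-key", "ms-key"))
# -> {'testsigning': True}: the flaw is in the check; no signing key is ever exposed.

Nothing in that failure mode involves a private key leaving Microsoft.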

Look, there are excellent reasons to not design a “golden key” system. I talked about them at a fundamental engineering level in my threat modeling book, and posted the excerpt in “Threat Modeling Crypto Back Doors.”

The typical way the phrase “golden key” is used (albeit fuzzily) is that there is a golden key which unlocks communications. That is a bad idea. This is not that, and we as engineers or advocates should not undercut our position on that bad idea by referring to this research as if it really bears on that “debate.”

Security Lessons from C-3PO

[Image: C-3PO telling Han Solo the odds]

C-3PO: Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.

Han Solo: Never tell me the odds.

I was planning to start this with a C-3PO quote, and then move to a discussion of risk and risk taking. But I had forgotten just how rich a vein George Lucas tapped into with 3PO’s lines in The Empire Strikes Back. So I’m going to talk about his performance prior to being encouraged to find a non-front-line role with the Rebellion.

In case you need a refresher on the plot, having just about run out of options, Han Solo decides to take the known, high risk of flying into an asteroid field. When he does, 3PO interrupts to provide absolutely useless information. There’s nothing about how to mitigate the risk (except surrendering to the Empire). There’s nothing about alternatives. Then 3PO pipes up to inform people that he was previously wrong:

C-3PO: Artoo says that the chances of survival are 725 to 1. Actually Artoo has been known to make mistakes… from time to time… Oh dear…

I have to ask: How useless is that? “My first estimate was off by a factor of 5, you should trust this new one?”

C-3PO: I really don’t see how that is going to help! Surrender is a perfectly acceptable alternative in extreme circumstances! The Empire may be gracious enough to… [Han signals to Leia, who shuts 3PO down.]

Most of the time, being shut down in a meeting isn’t this extreme. But there’s a point in a discussion, especially in high-pressure situations, where the best contribution is silence. There’s a point at which talking about the wrong thing at the wrong time can cost credibility that you’d need later. And while the echo in the dialogue is for comic effect, the response certainly contains a lesson for us all:

C-3PO: The odds of successfully surviving an attack on an Imperial Star Destroyer are approximately…

Leia: Shut up!

And the eventual outcome:

C-3PO: Sir, If I may venture an opinion…

Han Solo: I’m not really interested in your opinion 3PO.

Does C-3PO resemble any CSOs you know? Any meetings you’ve been in? Successful business people are excellent at thinking about risk. Everything from launching a business to hiring employees to launching a new product line involves risk tradeoffs. The good business people either balance those risks well, or they transfer them away in some way (ideally, ethically). What they don’t want or need is some squeaky-voiced robot telling them that they don’t understand risk.

So don’t be C-3PO. Provide useful input at useful times, with useful options.

Originally appeared in Dark Reading, “Security Lessons from C-3PO, Former CSO of the Millennium Falcon,” as part of a series I’m doing there, “security lessons from…”. So far in the series: lessons from my car mechanic, my doctor, The Gluten Lie, and my stock broker.

RSA Planning

Have a survival kit: Ricola, Purell, Gatorade, Advil, and antacids can be brought or bought on site.

Favorite talk (not by me): I look forward to Sounil Yu’s talk on “Understanding the Security Vendor Landscape Using the Cyber Defense Matrix.” I’ve seen an earlier version of this, and like the model he’s building a great deal.

Favorite talk I’m giving: “Securing the ‘Weakest Link’.”

A lot of guides, like this one, are not very comprehensive or strategic. John Masserini’s A CISO’s Guide to RSA Conference 2016 is a very solid overview if you’re new, or not getting good value from a conference.

While you’re there, keep notes for a trip report. Sending a trip report helps you remember what happened, helps your boss understand why they spent the money, and helps justify your next trip. I like trip reports that start with a summary, go directly to action items, then a list of planned meetings and notes on them, followed by detailed and organized notes.

Also while you’re there, remember it’s infosec, and drama is common. Remember the drama triangle and how to avoid it.

Secure Code is Hard, Let’s Make it Harder!

I was confused about why Dan Kaminsky would say CVE-2015-7547 (a bug in glibc’s DNS handling) creates network attack surface for sudo. Chris Rohlf kindly sorted me out by mentioning that there’s now a -host option to sudo, of which I was unaware.

I had not looked at sudo in depth for probably 20 years, and I’m shocked to discover that it has a -e option to invoke an editor, a -p option to process format string bugs, and a -a option to allow the invoker to select the authentication type (?!?!)

It’s now been a full twenty years that I’ve been professionally involved in analyzing source code. (These Security Code Review Guidelines were obviously not started in August.) We know that all code has bugs, and that more code is strongly correlated with more bugs. I first saw this in the intro to the first edition of Cheswick and Bellovin. I feel a little bit like yelling “you kids get off my lawn,” but really, the unix philosophy of “do one thing well” was successful for a reason. The goal of sudo is to let the user go through a privilege boundary. It should be insanely simple. [Updated to add: Justin Cormack mentions that OpenBSD went from sudo to doas on this basis.]

It’s not. Not that ssh is simple either, but it isolates complexity, and helps us model attack surface more simply.

Some of the new options make sense, and support security feature sets not present previously. Some are just dumb.

As I wrote this, Dan popped up to say that it also parses /etc/hostname to help it log. Again, do one thing well. Syslog should know what host it’s on, what host it’s transmitting from, and what host it’s receiving from.

It’s very, very hard to make code secure. When we add in insane options to code, we make it even harder. Sometimes, other people ask us to make the code less secure, and while I’ve already said what I want to say about the FBI asking Apple to fix their mistake by writing new code, this is another example of shooting ourselves in our feet.

Please stop making it harder.

[Update: related, “Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation,” abstracted by the morning paper, which examines an approach to re-implementing TLS. Thanks to Steve Bellovin for the pointer.]

Sneak peeks at my new startup at RSA


Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long term (like a home) or liquid investments (like stocks and bonds) and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of control investments, and we’re building the first.

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, leave a comment, or reach out over LinkedIn.

The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

[Image: The Evolution of Useful Things book cover]

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of how the book is constructed. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks which were created, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. Later designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. And I think it’s important to fix a hole in the argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

Adam’s new startup

A conversation with an old friend reminded me that there may be folks who follow this blog, but not the New School blog.

Over there, I’ve posted “Improving Security Effectiveness” about leaving Microsoft to work on my new company:

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

and about a job opening, “Seeking a technical leader for my new company:”

We have a new way to measure security effectiveness, and want someone who’ll drive to delivering the technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in the building of the company. The right person will understand such a “green field” represents both opportunity and that we’ll have to build infrastructure as we grow.

This person might be a CTO, they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management and reports.

An Infosec lesson from the “Worst Play Call Ever”

It didn’t take long for the Seahawks’ game-losing pass to get a label.

But as Ed Felten explains, there’s actually some logic to it, and one of his commenters (Chris) points out that Marshawn Lynch scored in only one of his 5 runs from the one yard line this season. So, perhaps in a game in which the Patriots had no interceptions, it was worth the extra play before the clock ran out.
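
One way to see the logic is a back-of-the-envelope calculation. The only number below taken from the discussion is the one-in-five scoring rate; the clock assumptions and the zero interception risk are illustrative assumptions, not actual NFL statistics:

# Back-of-the-envelope only: assumed numbers, not actual NFL statistics.
p_score = 1 / 5   # Lynch scored on 1 of his 5 runs from the one yard line this season
p_int = 0.0       # assume interception risk looks negligible (none in the game so far)

# Run first and fail: assume clock management leaves time for about two snaps total.
two_chances = 1 - (1 - p_score) ** 2

# Throw first: an incompletion stops the clock, preserving roughly three snaps.
three_chances = (1 - p_int) * (1 - (1 - p_score) ** 3)

print(f"two chances:   {two_chances:.0%}")    # ~36%
print(f"three chances: {three_chances:.0%}")  # ~49%

Under those assumptions the pass buys a meaningfully better chance; the point isn’t the exact numbers, it’s that the call had a logic you can’t see from the outcome alone.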

We can all see the outcome, and we judge the decision, post facto, on that basis.

[Image: “Worst play call ever” headline]

In security, we almost never see an outcome so closely tied to a decision. As Jay Jacobs has pointed out, we live in a wicked environment. Unfortunately, we’re quick to snap to judgement when we see a bad outcome. That makes learning harder. Also, we don’t usually get a chance to see the logic behind a play and assess it.

If only we had a way to shorten those feedback loops, then maybe we could assess what the worst play call in infosec might be.

And in fact, despite my use of snarky linkage, I don’t think we know enough to judge Sony or ChoicePoint. The decisions made by Spaltro at Sony are not unusual. We hear them all the time in security. The outcome at Sony is highly visible, but is it the norm, or is it an outlier? I don’t think we know enough to know the answer.

Hindsight is 20/20 in football. It’s easy to focus in on a single decision. But the lesson from Moneyball, and the lesson from Pete Carroll, is “Really, with no second thoughts or hesitation in that at all.” He has a system, and it got the Seahawks to the very final seconds of the game. And then.

One day, we’ll be able to tell management “our systems worked, and we hit really bad luck.”

[Please keep comments civil, like you always do here.]

iOS Subject Key Identifier?

I’m having a problem where the “key identifier” displayed on my iOS device does not match the key fingerprint on my server. In particular, I run:

% openssl x509 -in keyfile.pem -fingerprint -sha1

and I get a 20 byte hash. I also have a 20 byte hash in my phone, but it is not that hash value. I am left wondering if this is a crypto usability fail, or an attack.

Should I expect the output of that openssl invocation to match the certificate details on iOS, or is that a different hash? What options to openssl should produce the result I see on my phone?

[Update: it also does not match the output, or a trivial subset of the output, of

% openssl x509 -in keyfile.pem -fingerprint -sha256

% openssl x509 -in keyfile.pem -fingerprint -sha512 ]


[Update 2: iOS displays the “X509v3 Subject Key Identifier,” and you can ask openssl for that via -text, e.g., openssl x509 -in pubkey.pem -text. Thanks to Ryan Sleevi for pointing me down that path.]
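
For what it’s worth, the two 20-byte values hash different inputs: the fingerprint covers the entire DER-encoded certificate, while the Subject Key Identifier is an extension whose value is conventionally a SHA-1 hash of the public key alone. A small sketch with the pyca/cryptography package (my choice for illustration; any X.509 library can do the same) shows both side by side:

# Compare the certificate fingerprint with the Subject Key Identifier extension.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("keyfile.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# What openssl x509 -fingerprint -sha1 prints: SHA-1 over the whole DER certificate.
print("SHA-1 fingerprint:     ", cert.fingerprint(hashes.SHA1()).hex())

# What iOS shows: the X509v3 Subject Key Identifier extension, conventionally a
# SHA-1 of the subject public key -- a different hash over different bytes.
ski = cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
print("Subject Key Identifier:", ski.value.digest.hex())

If the value on the phone matches the second line, the mismatch is a display choice rather than an attack.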

Think Like An Attacker? Flip that advice!

For many years, I have been saying that “think like an attacker” is bad advice for most people. For example:

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear.

And I’ve been challenging people to think like a professional chef to help them understand why it’s not useful advice. But now, I’ve been one-upped, and, depending on audience, I have a new line to use.

Last week, on Veracode’s blog, Pete Chestna provided the perfect flip of “think like an attacker” to re-frame problems for security people: “think like a developer.” If you, oh great security guru, cannot think like a developer, for heaven’s sake, stop asking developers to think like attackers.

RSA: Time for some cryptographic dogfood

One of the most effective ways to improve your software is to use it early and often.  This used to be called eating your own dogfood, which is far more evocative than the alternatives. The key is that you use the software you’re building. If it doesn’t taste good to you, it’s probably not customer-ready.  And so this week at RSA, I think more people should be eating the security community’s cryptographic dogfood.

As I evangelize the use of crypto for arranging meetings at RSA, I’ve encountered many problems, such as choice of tool, availability of the tool across a set of mobile platforms, cost of entry, etc.  Each of these is predictable, but with dogfooding — forcing myself to ask everyone why they want to use an easily wiretapped protocol — the issues stand out, and the companies that will be successful will start thinking about ways to overcome them.

So this week, as you prep for RSA, spend a few minutes to get some encrypted communications tool. The worst that can happen is you’re no more secure than you were before you read this post.

What to do for randomness today?

In light of recent news, such as “FreeBSD washing Intel-chip randomness” and “alleged NSA-RSA scheming,” what advice should we give engineers who want to use randomness in their designs?

My advice for software engineers building things used to be to rely on the OS to get it right. That defers the problem to a small number of smart people. Is that still the right advice, despite recent news? The right advice is pretty clearly not that a normal software engineer building in Ruby on Rails or asp.net should go and roll their own. It also cannot be that they spend days wading through debates. Experts ought to be providing guidance on what to do.
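
For concreteness, here’s what “rely on the OS” typically looks like in application code. This is a minimal Python sketch (Ruby’s SecureRandom and .NET’s RNGCryptoServiceProvider are the analogous paths); both calls are thin wrappers over the kernel’s pool:

# A minimal sketch of "rely on the OS": these draw from the kernel's CSPRNG
# (/dev/urandom, getrandom(), or the platform equivalent) rather than rolling our own.
import os
import secrets

key = os.urandom(32)               # 32 random bytes for a key or nonce
token = secrets.token_urlsafe(32)  # a URL-safe session token

print(key.hex())
print(token)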

Is the right thing to hash together the OS and something else? If so, precisely what something else?