The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

[Image: The Evolution of Useful Things book cover]

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool for holding meat as it was cut to the four-tined form we have today. Petroski documents the many variants of the fork, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife, but they were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected; rather, they express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

Petroski again:

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

Towards a model of web browser security

One of the values of models is that they help us engage in areas where the detail would otherwise be overwhelming. For example, C is a model of how a CPU works that allows engineers to defer certain details to the compiler, rather than writing in assembler. It empowers software developers to write for many CPU architectures at once. Many security flaws happen in the areas the models simplify. For example, what if the stack grew away from the stack pointer, rather than towards it? The layout of the stack is a detail that is modeled away.
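
To see how a model’s simplifications can bite, here’s a small illustration in Python (my example, not one from the post): Python’s unbounded integers are a model that hides fixed-width machine arithmetic, and the hidden detail resurfaces exactly where the model ends.

    import struct

    # Python's int is a model: arithmetic never overflows, so the programmer
    # stops thinking about widths. The detail the model hides resurfaces at
    # the boundary with fixed-width formats, where truncation bugs live.
    length = 2**32 + 10              # fine as a Python int

    try:
        struct.pack("<I", length)    # but a 32-bit field can't hold it
    except struct.error as err:
        print("model boundary:", err)

    # Masking "fixes" the error but silently encodes 10, not 2**32 + 10:
    # the classic integer-truncation path to an undersized buffer.
    truncated = struct.unpack("<I", struct.pack("<I", length & 0xFFFFFFFF))[0]
    print(truncated)                 # prints 10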

Information security is a broad industry, requiring and rewarding specialization. We often see intense specialization, which naturally results in building expertise in silos, while tribal separation of knowledge creates more gaps. At the same time, there is a stereotype that generalists end up focused on policy, or “risk management,” where a lack of technical depth can hide. (That’s not to say that all risk managers are generalists, or that none of them have real technical depth.)

If we want to enable more security generalists, and we want those generalists to remain grounded, we need to make it easier to learn about new areas. Part of that is good models, part of that is good exercises that appropriately balance challenge to skill level, part of that is the availability of mentoring, and I’m sure there are other parts I’m missing.

I enjoyed many things about Michal Zalewski’s book “The Tangled Web.” One thing I wanted was a better way to keep track of who attacks whom, to help me contextualize and remember the attacks. But such a model is not trivial to create. This morning, motivated by a conversation between Trey Ford and Chris Rohlf, I decided to take a stab at drafting a model for thinking about where the trust boundaries exist.

The words which open my threat modeling book are “all models are wrong, some models are useful.” I would appreciate feedback on this model. What’s missing, and why does it matter? What attacks require showing a new element in this software model?

[Diagram: a model of web browser security]
[Update 1 — please leave comments here, not on Twitter]

  1. Fabio Cerullo suggests the layout engine. It’s not clear to me what additional threats can be seen if you add this explicitly, perhaps because I’m not an expert.
  2. Fernando Montenegro asks about network services such as DNS, which I’m adding, and also about shared trust (CA certs), which overlaps with a question about supply chain from Mayer Sharma.
  3. Chris Rohlf points out the “web browser protection profile.”

I could be convinced otherwise, but think that the supply chain is best addressed by a separate model. Having a secure installation and update mechanism is an important mitigation of many types of bugs, but this model is for thinking about the boundaries between the components.

In reviewing the protection profile, I see that it mentions the following threats:

Threat                        Comment
Malicious updates             Out of scope (supply chain)
Malicious/flawed add-on       Out of scope (supply chain)
Network eavesdropping/attack  Not showing all the data flows, for simplicity (is this the right call?)
Data access                   Local storage is shown

Also, the protection profile is 88 pages long, and hard to engage with. While it provides far more detail and allows me to cross-check the software model, it doesn’t help me think about interactions between components.

End update 1]

Wassenaar Restrictions on Speech

[There are broader critiques by Katie Moussouris of HackerOne at “Legally Blind and Deaf – How Computer Crime Laws Silence Helpful Hackers” and Halvar Flake at “Why changes to Wassenaar make oppression and surveillance easier, not harder.” This post addresses the free speech issue.]

During the first crypto wars, cryptography was regulated under the US ITAR regulations as a dual-use item, and exporting strong crypto (and thus, economically, including it in a generally available commercial or open source product) was effectively impossible.

A principle of our successful work to overcome those restrictions was that code is speech, and thus restrictions on code are restrictions on speech. The legal incoherence of the regulations was brought to an unavoidable crisis by Phil Karn, who submitted both the book Applied Cryptography and a floppy disk of the source code from the book for export licenses. The book received a license; the disk did not. This was obviously incoherent and Kafka-esque. At the time, American acceptance of incoherent, Kafka-esque rules was in much shorter supply.

Now, the new Wassenaar rules appear to contain restrictions on the export of a different type of code (page 209, category 4, see after the jump). (FX drew attention to this issue in this tweet. [Apparently, I wrote this in Jan, 2014, and forgot to hit post.])

A principle of our work was that code is speech, and thus restrictions on code are restrictions on speech. (Stop me if you’ve heard this one before.) I put forth several tweets containing proof-of-concept code I was able to type from memory, each of which, I believe, could in principle violate the Wassenaar rules. For example:

  • rlogin -froot $target (the old “-froot” pre-authenticated root login bug)
  • echo wiz | nc $target 25 (the sendmail WIZ backdoor)

It would be nice if someone would file the paperwork to export them on paper.

In these tweets, I’m not speaking for my employer or yours. I am speaking for poor, tired and hungry cryptographers, yearning to breathe free, and to not live on Groundhog Day.


Microsoft Backs Laws Forbidding Windows Use By Foreigners

According to Groklaw, Microsoft is backing laws that forbid the use of Windows outside of the US. Groklaw doesn’t say that directly. Actually, they pose charmingly with the back of the hand to the forehead, bending backwards dramatically and asking, “Why Is Microsoft Seeking New State Laws That Allow it to Sue Competitors For Piracy by Overseas Suppliers?” Why, why, why, o why, they ask.

The headline of this article is the obvious reason. Microsoft might not know they’re doing it for that reason. Usually, people with the need to do something, dammit, because they fear they might be headed to irrelevancy, think of something and follow the old Aristotelian syllogism:

Something must be done.
This is something.
Therefore, it must be done.

It’s pure logic, you know. This is exactly how Britney Spears ended up with Laurie Anderson’s haircut and the US got into policing China’s borders. It’s logical, and as an old colleague used to say with a sigh, “There’s no arguing with logic like that.”

Come on, let’s look at what happens. If I run a business, and there’s a law that says that if my overseas partners aren’t paying for their Microsoft software, then Microsoft can sue me, what do I do?

Exactly right. I put a clause in the contract that says that they agree not to use any Microsoft software. Duh. That way, if they haven’t paid their Microsoft licenses, I can say, “O, you bad, naughty business partner. You are in breach of our contract! I demand that you immediately stop using Microsoft stuff, or I shall move you from being paid net 30 to net 45 at contract renegotiation time!” End of problem.

And hey, some of my partners will actually use something other than Windows. At least for a few days, until they realize how badly Open Office sucks.

Use crypto. Not too confusing. Mostly asymmetric.

A little ways back, Gunnar Peterson said “passwords are like hamburgers, taste great but kill us in long run wean off password now or colonoscopy later.” I responded: “Use crypto. Not too confusing. Mostly asymmetric.” I’d like to expand on that a little. Not quite so much as Michael Pollan, but a little.

The first sentence, “use crypto,” is a simple one. It means that more security requires getting away from sending strings as a way to authenticate people at a distance. This applies (obviously) to passwords, but also to SSNs, mother’s “maiden” names, your first car, and will apply to biometrics. Sending a string which represents an image of a fingerprint is no harder to fake than sending a password. Stronger authenticators will need to involve an algorithm and a key.
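
To make “an algorithm and a key” concrete, here is a minimal challenge-response sketch using Python’s standard hmac module (my illustration; the function names are invented). The secret never crosses the wire, and a recorded response is useless against a fresh challenge:

    import hashlib
    import hmac
    import secrets

    # The key is shared once at enrollment and never transmitted again,
    # unlike a password, which is re-sent (and re-exposed) at every login.
    key = secrets.token_bytes(32)

    def prove(key: bytes, challenge: bytes) -> bytes:
        """Prover answers a challenge with a keyed MAC, not the secret itself."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
        """Verifier recomputes the MAC and compares in constant time."""
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = secrets.token_bytes(16)   # fresh for every login attempt
    response = prove(key, challenge)
    assert verify(key, challenge, response)

    # An eavesdropper who records (challenge, response) gains nothing:
    # the next login uses a new challenge, so the old response fails.
    assert not verify(key, secrets.token_bytes(16), response)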

The second, “not too confusing,” is a little more subtle, because there are layers of confusing. There’s developer confusion as the system is implemented, adding pieces, like captchas, without a threat model. There’s user confusion as to what program popped that demand for credentials, what site they’re connecting to, or what password they’re supposed to use. There’s also confusion about what makes a good password when one site demands no fewer than 10 characters and another insists on no more. But regardless, it’s essential that a strong authentication system be understood by at least 99% of its users, and that the authentication is either mutual or resistant to replay, reflection and man-in-the-middle attacks. In this, “TOFU” is better than PKI. I prefer to call TOFU “persistence” or “key persistence.” This is in keeping with Pollan’s belief that things with names are better than things with acronyms.
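
As a sketch of what key persistence can look like, here’s a toy pin store in Python, modeled on SSH’s known_hosts (the file name and functions are my inventions, not any product’s API):

    import json
    import pathlib

    # Trust on first use: remember the key a peer presented the first time,
    # and raise the alarm if it ever changes. This is the known_hosts model.
    STORE = pathlib.Path("known_keys.json")   # hypothetical pin store

    def check_key(peer: str, presented_key_hex: str) -> bool:
        pins = json.loads(STORE.read_text()) if STORE.exists() else {}
        pinned = pins.get(peer)
        if pinned is None:
            # First contact: persist the key. This is the leap of faith.
            pins[peer] = presented_key_hex
            STORE.write_text(json.dumps(pins))
            return True
        # Every later contact: the key must match what we persisted.
        return pinned == presented_key_hex

    assert check_key("example.org", "ab" * 32)       # first use: pinned
    assert check_key("example.org", "ab" * 32)       # same key: fine
    assert not check_key("example.org", "cd" * 32)   # changed key: alarm

The leap of faith on first contact is the tradeoff: unlike PKI, there’s no third party to confuse anyone, but the first connection is unauthenticated.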

Finally, “mostly asymmetric.” There are three main building blocks in crypto: one-way functions, symmetric ciphers, and asymmetric ciphers. Asymmetric systems are those with two mathematically related keys, only one of which is kept secret. These are better because forgery attacks are harder: only one party holds a given key. (Systems that use one-way functions can also deliver this property.) There are a few reasons to avoid asymmetric ciphers, mostly having to do with the compute capabilities of really small devices like a smartcard, or very power-limited devices like pacemakers.
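
Here is what the asymmetric version of the earlier sketch can look like, using Ed25519 signatures via the third-party cryptography package (my choice of primitive and library; the post doesn’t prescribe one). The verifier stores only the public key, so compromising the verifier doesn’t let anyone forge logins:

    # Requires: pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Only the prover holds the private key; the verifier holds just the
    # public half. Contrast symmetric crypto, where both sides share the
    # forgery-capable secret.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"fresh login challenge"        # e.g., a server-issued nonce
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)  # raises if forged/corrupted
        print("signature valid")
    except InvalidSignature:
        print("forgery or corruption detected")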

So there you have it: Use crypto. Not too confusing. Mostly asymmetric.

SOUPS Keynote & Slides

This week, the annual Symposium on Usable Privacy and Security (SOUPS) is being held on the Microsoft campus. I delivered a keynote, entitled “Engineers Are People Too:”

In “Engineers Are People, Too” Adam Shostack will address an often invisible link in the chain between research on usable security and privacy and delivering that usability: the engineer. All too often, engineers are assumed to have infinite time and skills for usability testing and iteration. They have time to read papers, adapt research ideas to the specifics of their product, and still ship cool new features. This talk will bring together lessons from enabling Microsoft’s thousands of engineers to threat model effectively, share some new approaches to engineering security usability, and propose new directions for research.

A fair number of people have asked for the slides, and they’re here: Engineers Are People Too.

Facebook, Here’s Looking at You Kid

The last week and a bit has been bad for Facebook. It’s hard to recall exactly what triggered the avalanche of stories. Maybe it was the flower diagram we mentioned. Maybe it was the New York Times interactive graphic of just how complex it is to set privacy settings on Facebook:

[Image: the New York Times privacy-settings graphic]


Maybe it was Zuckerberg calling people who trust him “dumb fucks,” or the irony of him telling a journalist that “Having two identities for yourself is an example of a lack of integrity.” Or maybe it was the irony that telling people you believe in privacy while calling them dumb fucks is, really, a better example of a lack of integrity than having two identities.

Maybe it was the Facebook search (try ‘my dui’), or Facebook: The privatization of our Privates and Life in the Company Town. Maybe it was getting on CNN that helped propel it.

It all generated some great discussion like danah boyd’s Facebook and “radical transparency” (a rant). It also generated some not so great ideas like “Poisoning The Well – A Response To Privacy Concerns…” and “How to protect your privacy from Facebook.” These are differently wrong, and I’ll address them one at a time. First, poisoning the well. I’m a big fan of poisoning the wells of mandatory data collectors. But the goal of Facebook is to connect and share. If you have to poison the data you’re trying to share with your friends, the service is fundamentally broken. Similarly, if you’re so scared of their implicit data collection that you use a different web browser to visit their site, and you only post information you’re willing to see made public, you might as well use more appropriate and specialized sites like Flickr, LinkedIn, OkCupid, Twitter or Xbox Live. (I think that covers all the main ways people use Facebook.)

But Facebook’s problems aren’t unique. We’ve heard them before, with sites like Friendster, MySpace, Tribe and Orkut. All followed the same curve of rise, pollution and fall that Facebook is going to follow. It’s inevitable and inherent in the attempt to create a centralized technical implementation of all the myriad ways in which human beings communicate.

Play it, Sam… once more, for old times’ sake

I think there are at least four key traps for every single-operator, all-purpose social network.

  1. Friend requests: The first big problem is that as everyone you’ve ever had a beer with, along with that kid who beat you up in 3rd grade, sends you a friend request, the joy of ‘having lots of friends’ is replaced with the burden of managing lots of ‘friends.’ And as the network grows, so does the burden. Do you really know what that pyronut from college chemistry is up to? Do you want to have to judge the meaning of a conversation in light of today’s paranoia? This leads us to the next problem:
  2. Metaphors: Facebook uses two metaphors for relationships: friend and network. Both are now disconnected from their normal English meanings. An f-friend is not the same as a real friend. You might invite a bunch of friends over for drinks. Would you send the same invite to your f-friends list? Similarly, if I were to join Facebook today, I could join a Microsoft network, because I work there (although I’m not speaking for them here). Now, in the time that Facebook has been open to the world, lots of people have gained and lost Microsoft email addresses. Some have been full time employees. Some have been contractors of various types. Some have been fired. Is there a process for managing that? Maybe; we have a large HR department, but I have no idea. One key point is that membership in an f-network is not the same as membership in a real network. The meanings of the words evolve through practice and use. But there’s another issue with metaphors as made concrete through the technical decisions of Facebook programmers: there aren’t enough of them. I think that there’s also now “fans” available as an official metaphor, but what about salesguy-you-met-at-a-conference-who-won’t-stop-bugging-you? The technical options don’t match the nuance with which social beings handle these sorts of questions, and even if they did, telling a computer all that is too much of a bother. (See the chart above for an attempt to make it do something related.)
  3. Privacy means many things: Privacy means different things to different people. Even the same person at different times wants very different things, and the cost of figuring out what they will want in some unforeseen future is too high. So privacy issues will keep acting as a thorn in the side of social network systems, and worse for centralized ones.
  4. Different goals: Customers and the business have different desires for the system. Customers want fast, free, comprehensive, private, and easy to use. They don’t want to worry about losing their jobs or not getting one. They don’t want to worry about stalkers. They don’t want their sweetie to look over their shoulder and see an ad for diamond rings after talking to their friends about engagement. But hiring managers want to see that embarrassing thing you just said. (Hello, revenue model, although Facebook has not, as far as I know, tapped this one yet.) Stalkers are heavy users who you can show ads to. Advertisers want to show those diamond ring ads. Another example of this is the demand to use your real name. Facebook’s demand that you use your real name is in contrast to 4 of the 5 alternatives up there. Nicknames, pseudonyms, handles, and tags are all common all over the web because, in fact, separating our identities is a normal activity. This is an idea that I talk about frequently. But it’s easier to monetize you if Facebook has your real name.

So I’m shocked, shocked to discover that Facebook is screwed up. A lot of other shocked people are donating to Diaspora ($172,000 of their $10,000 goal has been pledged. There’s interesting game theory about commitment, delivery on those pledges, and whether they should just raise a professional round of VC, but this post is already long.) There’s also Appleseed: A Privacy-Centric Facebook Slayer With Working Code.

Now, before I close, I do want to say that I see some of this as self-inflicted, but the underlying arc doesn’t rely on Zuckerberg. It’s not about the folks who work for Zuckerberg, who, for all I know, are the smartest, nicest, best looking folks anywhere. It’s about the fundamental model of centralized, all-purpose social networks being broken.

To sum it all up, I’m gonna hand the microphone to Rick:

If you don’t get off that site, you’ll regret it. Maybe not today, maybe not tomorrow, but soon and for the rest of your life. Last night we said a great many things. You said I was to do the thinking for both of us. Well, I’ve done a lot of it since then, and it all adds up to one thing: you’re getting off that Facebook. Now, you’ve got to listen to me! You have any idea what you’d have to look forward to if you stayed here? Nine chances out of ten, we’d both wind up with our privacy in ruins. Isn’t that true, Louie?

Capt. Renault: I’m afraid that Major Zuckerberg will insist.


We show that malicious TeX, BibTeX, and METAPOST files can lead to arbitrary code execution, viral infection, denial of service, and data exfiltration, through the file I/O capabilities exposed by TeX’s Turing-complete macro language. This calls into doubt the conventional wisdom view that text-only data formats that do not access the network are likely safe. We build a TeX virus that spreads between documents on the MiKTeX distribution on Windows XP; we demonstrate data exfiltration attacks on Web-based LaTeX previewer services.

“Are Text-Only Data Formats Safe? Or, Use This LaTeX Class File to Pwn Your Computer,” by Stephen Checkoway, Hovav Shacham, and Eric Rescorla. In Proceedings of LEET 2010, USENIX, April 2010.

As they say, “Amusingly, some advocacy documents list ‘no macro viruses’ as an advantage TeX has over Word.” Which sorta runs me out of jokes.