Kale Caesar

According to the CBC, “McDonald’s kale salad has more calories than a Double Big Mac”:

In a quest to reinvent its image, McDonald’s is on a health kick. But some of its nutrient-enhanced meals are actually comparable to junk food, say some health experts.

One of its new kale salads has more calories, fat and sodium than a Double Big Mac.

Apparently, McDonald’s is there not to braise kale, but to bury it in cheese and mayonnaise. And while that’s likely mighty tasty, it’s not healthy.

In the short term, this looks like good product management. Execs want salads on the menu? Someone’s being measured on sales of the new salads, so they’re loading them up with tasty, tasty fats. It’s an effective way to associate a desirable property of salad with the product.

Longer term, not so much. It breeds cynicism. It undercuts McDonald’s ability to ever change its image, or to convince people that its food might be a healthy choice.

Superbowls

This is a superb owl, but its feathers are ruffled. It is certainly not a metaphor.

Speaking of ruffled feathers, apparently there’s a kerfuffle about Super Bowl I: the only extant tape of the game is in private hands, and there’s conflict over what to do with it.

One aspect I haven’t seen covered is that the tape, made 50 years ago, pre-dates US adherence to the Berne Convention, and thus comes from the era when copyright required notice (and registration). Was the NFL properly copyrighting its game video back then? If not, does that mean that Mr. Haupt can legally do what he wants, and is merely chilled by the threat that Big Football would simply throw lawyers at him until he gives up?

Such threats, at odds with our legally guaranteed right to a speedy trial, certainly generate a climate in which large organizations, often governmental ones, can use protracted uncertainty as a weapon against oversight or control. Consider, if you will, the decade-long, Kafkaesque ordeal of Ms. Rahinah Ibrahim, who was on the No Fly List due to a mistake. Consider the emotional and personal cost of being able neither to enter the US nor to achieve a sense of closure.

Such a lack of oversight is certainly affecting the people of Flint, Michigan. As Larry Rosenthal points out (in the first comment), even if the people of Flint eventually win their case, the doubtless slow and extended trials may grind fine, but wouldn’t it be better if we had a justice system that could deliver justice a little faster?

Anyway, what a superb owl that is.

Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department

In “The Galactic Empire Has Terrible Cybersecurity,” Alex Grigsby looks at a number of high-profile failures, covered in “A New Hope” and the rest of the Star Wars canon.

Unfortunately, the approach he takes to the Galactic Empire obscures the larger, more dangerous issue: its cybersecurity culture. There are two errors in Grigsby’s analysis, and they are worth examining. As Yoda once said, “Much to learn you still have.”

Grigsby’s first assumption is that more controls lead to better security. But controls need to be deployed judiciously so that operations can flow. For example, when you have Stormtroopers patrolling the Death Star, adding layers of access controls may in fact hamper operations. The shuttle with outdated access codes in “Return of the Jedi” shows that security issues are rampant and that officers are used to escalations. Security processes that are full of routine escalations desensitize people: they get accustomed to saying OK, and are thus unlikely to give their full attention to each escalation.

The second issue is that Grigsby focuses on a few flaws that have massive impact. The lack of encryption and the problematic location of the Death Star’s exhaust port matter not so much as one-offs; rather, they reveal the larger security culture at play in the Empire.

There is a singular cause for these failures: Darth Vader and his habit of Force-choking those who have failed him. The culture of terror he fosters prevents those under his command from learning from their mistakes and ensures that opportunities for learning will be missed; finger-pointing and blame-passing will rule the day. Complaints to the Empire’s human resources department will go unanswered, and those who made the complaints will probably go missing.

This is the precise opposite of the culture created by Etsy, the online marketplace for handmade and vintage items (including these Star Wars cufflinks). Etsy’s engineers engage in what they call “Blameless Post-Mortems and Just Culture,” where people feel safe coming clean about their mistakes so that they can learn from them. After a problem, engineers are encouraged to write up what happened, why it happened, and what they learned, and to share that knowledge widely. Executives are committed to not placing blame or pointing fingers.

The Empire needs a better way to deal with its mistakes, and so do we. Fortunately, we don’t have to fear Lord Vader and can learn from things that have gone wrong.

For example, the DatalossDB, a project of the non-profit Open Security Foundation, has tracked thousands of incidents involving the loss, theft, or exposure of personally identifiable information since 2008. The Mercatus Center has analyzed Government Accountability Office data and found upwards of 60,000 incidents per year for the last two years. Sadly, while we know of these incidents, including what sorts of data were taken and how many victims there were, in many cases we do not know what happened in enough detail to address the problem. In the first years of public breach reporting (roughly starting in 2004), there was a raft of breaches associated with stolen computers, most of them laptops. As a result, all commercial operating systems now ship with full-disk encryption software. But that may be the only lesson broadly learned so far.

It’s easy to focus on spectacular incidents like the destruction of a Death Star. It’s easy to look to the mythic aspects of the story. It’s harder to understand what went wrong. Was there an architect who brought up the unshielded thermal exhaust port vulnerability? What happened to the engineering change request? What can we learn from that? Did an intrusion detection analyst notice that unauthorized devices were plugged into the network? Were they overwhelmed by a rash of new devices as the new facility was staffed up?

Even given the very largest breaches, there is often a paucity of information about what went wrong. Sometimes, no one wants to know. Sometimes, it devolves into finger-pointing. Sometimes, whatever went wrong happened long enough ago that there are no logs. The practice of “Five Whys” analysis is rare.

And when, against all odds, an organization digs in and asks what happened, the lawyers are often there to announce that under no circumstances should the analysis be shown to anyone. After all, there will be lawsuits. (While I am not a lawyer, it seems to me that such lawsuits happen regardless of the existence or availability of a post-mortem report, and a good analysis of what went wrong might be seen as evidence of a mature, learning practice.)

What does not happen, given our fear of lawsuits and other phantom menaces, is learning from mistakes. And so R2-D2 plugs into every USB port in sight, and does so for more than twenty years.

We know from a variety of fields, including aircraft safety, nuclear safety, and medical safety, that high degrees of safety and security are an outcome of a just culture and a willingness to discuss what’s gone wrong. Attention to “near misses” allows organizations to learn faster.

This is what the National Transportation Safety Board does when a plane crashes or a train derails.

We need to get better at post-mortems for cybersecurity. We need to publish them so we can learn the analysis methods others are developing. We need to publish them so we can assess if the conclusions are credible. We need to publish them so we can perform statistical analyses. We need to publish them so that we can do science.

There are many reasons to prevaricate. The First Order — the bad guys in The Force Awakens — can’t afford another Death Star, and we cannot afford to keep doing what we’ve been doing and hoping it will magically get better.

It’s not our only hope, but it certainly would be a new hope.

(Originally appeared on the Council on Foreign Relations Net Politics blog.)

Governance Lessons from the Death Star Architect

I had not seen this excellent presentation by the engineer who built the Death Star’s exhaust system.

In it, he discusses the need to disperse energy from a battle station with the power draw to destroy planets, and the engineering goals he had to balance.

I’m reminded again of “The Evolution of Useful Things” and how it applies to security. Security engineering involves making tradeoffs, and those tradeoffs sometimes have unfortunate results. Threat modeling is a family of techniques for thinking about the tradeoffs and what’s likely to go wrong. Doing it well means that things are less likely to go wrong, not that nothing ever will.

It’s easy, after the fact, to point out the problem with the exhaust ports. But as your risk management governance improves, you get to the point of asking “what did we know when we made these decisions?” and “could we have made these decisions better?”

At the engineering level, you want to develop a cybersecurity culture that’s open to discussing failures, not one in which you have to fear being force-choked. (More on that topic in my guest post at the Council on Foreign Relations, “Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department.”)


More broadly, organizational leadership needs to focus on whether appropriate policy and governance are in place. That sounds jargony, so let me unpack it a little. Policy is what you intend to do: for example, perform risk analysis that lets executives make good risk management decisions about the competing aspects of the business. Is a PHP vuln acceptable? If one happened to be in the Force Awakens site this week, taking that site down would have been an expensive choice. It’s tempting to ask, what geek would do more than add a comment? But that gets into questions of attacker motivation, and it’s easy to get those wrong. Even Star Wars has critics (one-minute video, worth sharing for the reveal at the end):

If policy is about knowing what you intend to do in a way that lets people do it, governance is about making sure it happens properly. There are all sorts of reasons it’s hard to map technology risk to business risk. Tech risk involves the bad things that might happen, and the interesting ways technologies are tightly woven together make it hard to say, a priori, that a technical issue with an exhaust port might have a bad business impact, or that a bad password on an HVAC system might lead to one.

Exhaust is likely to generate turbulence in an exhaust shaft, and such turbulence could be expected to act as a compensating control for the lack of port shielding. That is, whatever substrate carries the heat will do so unevenly, and in a shaft the size of a womp rat, that turbulence will batter any projectile into exploding somewhere less harmful.

A good policy will ask for such analysis; a good governance process will ask whether it happened and, after a failure, whether the failure is likely to happen again. We need to help executives form the questions, and we need to do a better job of supplying answers.

Open Letters to Security Vendors

John Masserini has a set of “open letters to security vendors” on Security Current.

Everyone involved in product or sales at a security startup should read them. John provides insight into what it’s like to be pitched by too many startups, and provides a level of transparency that’s sadly hard to find. Personally, I learned a great deal about what happens when you’re pitched while I was at a large company, and I can vouch for the realities he puts forth. The sooner you understand those realities and incorporate them into your thinking, the more successful we’ll all be.

After meeting with dozens of startups at Black Hat a few weeks ago, I’ve realized that the vast majority of the leaders of these new companies struggle to articulate the value their solutions bring to the enterprise.

Why does John’s advice make us all more successful? Because each organization that follows it moves toward a more efficient state, both for itself and for the folks it’s pitching.

Getting more efficient means you waste less time per prospect. When you focus on qualified leads who care about the problem you’re working on, you get more sales per unit of time. What’s more, by not wasting the time of those who won’t buy, you free up their time for talking to those who might have something to provide them. (One banker I know said “I could hire someone full-time to reject startup pitches.” Think about what that means for your sales cycle for a moment.)

Go read “An Open Letter to Security Vendors” along with part 2 (why sales takes longer) and part 3 (the technology challenges most startups ignore).

The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.

As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to Be as They Are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks that were created, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. Those designs, however, were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”
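
To make one of those properties concrete, here is a minimal, hypothetical C sketch (my illustration, not an example from the post or from any real codebase) of the bug that “double-free safety” rules out: the same heap allocation is released twice, which is undefined behavior and a classic path to heap corruption and exploitation.

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Allocate a small buffer and use it. */
        char *plans = malloc(64);
        if (plans == NULL)
            return 1;
        strcpy(plans, "technical readouts of that battle station");

        free(plans);   /* First free: correct. */
        free(plans);   /* Second free of the same pointer: undefined behavior.
                          Heap metadata can be corrupted, and attackers have
                          turned exactly this kind of bug into code execution. */
        return 0;
    }

Setting the pointer to NULL after the first free, or running the program under a tool such as AddressSanitizer, catches this particular mistake; the harder problem is that “never do this, anywhere, ever” is only one entry on that ever-growing list.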

Computer security is in the process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]