Now Available: Control Alt Hack!

Amazon now has copies of Control Alt Hack, the card game that I helped Tammy Denning and Yoshi Kohno create. Complimentary copies for academics and those who won copies at Black Hat are en route.


From the website:

Control-Alt-Hack™ is a tabletop card game about white hat hacking, based on game mechanics by gaming powerhouse Steve Jackson Games (Munchkin and GURPS).

Age: 14+ years
Players: 3-6
Game Time: Approximately 1 hour

You and your fellow players work for Hackers, Inc.: a small, elite computer security company of ethical (a.k.a., white hat) hackers who perform security audits and provide consultation services. Their motto? “You Pay Us to Hack You.”

Your job is centered around Missions – tasks that require you to apply your hacker skills (and a bit of luck) in order to succeed. Use your Social Engineering and Network Ninja skills to break the Pacific Northwest’s power grid, or apply a bit of Hardware Hacking and Software Wizardry to convert your robotic vacuum cleaner into an interactive pet toy…no two jobs are the same. So pick up the dice, and get hacking!

Please Kickstart Elevation of Privilege

Jan-Tilo Kirchhoff asked on Twitter for a printer (ideally in Germany) to print up some Elevation of Privilege card sets. Deb Richardson then suggested Kickstarter.

I wanted to comment, but this doesn’t fit in a tweet, so I’ll do it here.

I would be totally excited for someone to Kickstart production of Elevation of Privilege. Letting other people make it, and make money on it, was an explicit goal of the Creative Commons license (CC-BY-3.0) that we selected when we released the game.

So why don’t I just set up a Kickstarter? In short, I think it’s a Caesar’s wife issue. I think there’s a risk that it looks bad for me to decide to release things that Microsoft paid me to do, and then make money off of them.

Now, that impacts me. It doesn’t impact anyone else. I would be totally excited for someone else to go make some cards and sell them. I would promote such a thing, and help people find whatever lovely capitalist is doing it. I would be happy to support a Kickstarter campaign, and would be willing to donate some of my time and energy with things like signing decks, doing training sessions, or whatnot. I even have some joker cards that you could produce as a special bonus item.

So, if you think Elevation of Privilege is cool, please, go take advantage of the license we released it under, and go make money with it.

[Update: I don't have exact numbers, but I've seen quotes where, at quantities around 5,000 decks, production might be around $2-3 a deck. At smaller quantities, you might end up around $5-7 a deck. YMMV. So a Kickstarter in the range of $5-10K would probably be workable, although you'd certainly want to think about shipping and handling costs.]
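To make that back-of-envelope math concrete, here is a tiny sketch using only the rough figures above; the 1,000-deck size for the smaller run is my own assumption, and none of this is a real printer's quote:

    # Rough production-cost math for a print run (illustrative only).
    def run_cost(decks, cost_per_deck):
        """Total production cost before shipping and handling."""
        return decks * cost_per_deck

    # Larger run: ~5,000 decks at roughly $2-3 per deck.
    print(run_cost(5000, 2), run_cost(5000, 3))   # 10000 15000

    # Smaller run (assumed ~1,000 decks) at roughly $5-7 per deck.
    print(run_cost(1000, 5), run_cost(1000, 7))   # 5000 7000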

Threat Modeling and Risk Assessment

Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.

So first, what was said:

(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. :-)

I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. Where I differ is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.

I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.

First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.

And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.

So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)
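For readers who haven't seen one, here is a minimal sketch of how a bug-bar style lookup might work in triage; the (effect, scope) categories and severity labels are placeholders I made up for illustration, not Microsoft's actual bar:

    # Illustrative bug-bar style lookup: map a security bug's effect and
    # scope to a severity, so it can be compared against other bugs in triage.
    # Keys and severities are invented for this sketch.
    BUG_BAR = {
        ("elevation_of_privilege", "remote"):   "Critical",
        ("elevation_of_privilege", "local"):    "Important",
        ("information_disclosure", "targeted"): "Important",
        ("denial_of_service", "temporary"):     "Moderate",
        ("spoofing", "ui_only"):                "Low",
    }

    def triage(effect, scope):
        """Return a severity, or flag the bug for a security expert to review."""
        return BUG_BAR.get((effect, scope), "Needs security review")

    print(triage("denial_of_service", "temporary"))  # Moderate

The point of a bar like this is the one above: the security expertise is captured in the table, so triage doesn't depend on an expert being in the room for every bug.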

My concern, and the reason I got into a back and forth, is that I suspect putting risk assessment into threat modeling keeps organizations from ensuring that expertise is present in bug triage, and that’s risky.

(As usual, these opinions are mine, and may differ from those of my employer.)

[Updated to correct editing issues.]

Elevation of Privilege (Web Edition) Question

Someone wrote to me to ask:

A few cards are not straightforward to apply to a webapp situation (some seem to assume a proprietary client) – do you recommend discarding them or perhaps you thought of a way to rephrase them somehow?

For example:

“An attacker can make a client unavailable or unusable but the problem goes away when the attacker stops”

I don’t have a great answer, but I’m thinking someone else might have taken it on.

For Denial of Service attacks in the Microsoft SDL bug bar, we roughly break things down into a matrix of (server, client, persistent/temporary). That doesn’t seem right for web apps. Is there a better approach, and perhaps even one that can translate into some good threat cards?
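To show what I mean, here is a rough sketch of that matrix as a lookup; the severity labels are placeholders of mine, loosely in the spirit of an SDL-style bar rather than quoted from it:

    # Sketch of the (target, persistence) denial-of-service matrix.
    # Severity labels are illustrative, not the actual SDL bug bar values.
    DOS_MATRIX = {
        ("server", "persistent"): "Important",  # service stays down after the attack ends
        ("server", "temporary"):  "Moderate",   # outage only while the attack continues
        ("client", "persistent"): "Moderate",   # client unusable until repaired or reinstalled
        ("client", "temporary"):  "Low",        # "problem goes away when the attacker stops"
    }

    def dos_severity(target, persistence):
        return DOS_MATRIX.get((target, persistence), "Unclassified")

    print(dos_severity("client", "temporary"))  # Low

For a web app the cells blur: the "client" is a browser session and the "server" is a hosted service, so the card quoted above doesn't map cleanly to any one cell.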

Hacker Hide and Seek

Core Security’s Ariel Waissbein has been building security games for a while now. They were kind enough to send a copy of their “Exploit” game after I released Elevation of Privilege. [Update: I had confused Ariel Futoransky and Ariel Waissbein, because Waissbein wrote the blog post. Sorry!] At Defcon, he and his colleagues will be running a more capture-the-flag sort of game, titled “Hide and seek the backdoor:”

For starters, a backdoor is said to be a piece of code intentionally added to a program to grant remote control of the program — or the host that runs it – to its author, that at the same time remains difficult to detect by anybody else.

But this last aspect of the definition actually limits its usefulness, as it implies that the validity of the backdoor’s existence is contingent upon the victim’s failure to detect it. It does not provide any clue at all into how to create or detect a backdoor successfully.

A few years ago, the CoreTex team did an internal experiment at Core and designed the Backdoor Hiding Game, which mimics the old game Dictionary. In this new game, the game master provides a description of the functionalities of a program, together with the setting where it runs, and the players must then develop programs that fulfill these functionalities and have a backdoor. The game master then mixes all these programs with one that he developed and has no backdoors, and gives these to the players. Then, the players must audit all the programs and pick the benign one.

First, I think this is great, and I look forward to seeing it. I do have some questions. What elements of the game can we evaluate and how? A general question we can ask is “Is the game for fun or to advance the state of the art?” (Both are ok and sometimes it’s unclear until knowledge emerges from the chaos of experimentation.) His blog states “We discovered many new hiding techniques,” which is awesome. Games that are fun and advance the state of the art are very hard to create. It’s a seriously cool achievement.

My next question is, how close is the game to the reality of secure software development? How can we transfer knowledge from one to the other? The rules seem to drive backdoors into most of the code in play: assuming every player succeeds, (n-1)/n of the programs contain one, so with six programs in play, five are backdoored. That’s a far higher incidence of backdoors than exists in the wild. I’m assuming that the code will all be custom, and thus short enough to create and audit in a game, which also leads to a higher concentration of backdoors per line of code. That different concentration will reward different techniques from those that would scale to a million lines of code.

More generally, do we know how to evaluate hiding techniques? Do hackers playing a game create the same sort of backdoors as disgruntled employees or industrial spies? Between this contest and the Underhanded C Contests, we have two corpora of backdoored code. However, I’m not aware of any corpus of deployed backdoor code to which we could compare them.

So anyway, I look forward to seeing this game at Defcon, and in the future, more serious games for information security.

Previously, I’ve blogged about the Underhanded C contest here and here.