The Drama Triangle

As we head into summer conference season, drama is as predictable as vulnerabilities. I’m really not fond of either.

What I am fond of, as someone who spends a lot of time thinking about models, is the model of the “drama triangle.” First discussed by Stephen Karpman, the triangle has three roles, that of victim, persecutor and rescuer:

Drama triangle of victim, rescuer, persecutor

“The Victim-Rescuer-Persecutor Triangle is a psychological model for explaining specific co-dependent, destructive inter-action patterns, which negatively impact our lives. Each position on this triangle has unique, readily identifiable characteristics.” (From “Transcending The Victim-Rescuer-Persecutor Triangle.”)

One of the nifty things about this triangle — and one of the things missing from most popular discussion of it — is how the participants put different labels on the roles they are playing.

For example, a vulnerability researcher may perceive themselves as a rescuer, offering valuable advice to a victim of poor coding practice. Meanwhile, the company sees the researcher as a persecutor, making unreasonable demands of their victim-like self. In their response, the company calls their lawyers and becomes a persecutor, and simultaneously allows the rescuer to shift to the role of victim.

Rescuers (doubtless on Twitter) start popping up to vilify the company’s ham-handed response, pushing the company into perceiving themselves as more of a victim. [Note that I’m not saying that all vulnerability disclosure falls into these traps, or that pressuring vendors is not a useful tool for getting issues fixed. Also, the professionalization of bug finding, and the rise of bug bounty management products can help us avoid the triangle by improving communication, in part by learning to not play these roles.]

I like the “Transcending The Victim-Rescuer-Persecutor Triangle” article because it focuses on how “a person becomes entangled in any one of these positions, they literally keep spinning from one position to another, destroying the opportunity for healthy relationships.”

The first step, if I may, is recognizing and admitting you’re in a drama triangle, and refusing to play the game. There’s a lot more, and I encourage you to go read “Transcending The Victim-Rescuer-Persecutor Triangle,” and pay attention to the wisdom therein. If you find the language and approach a little “soft”, then Kellen Von Houser’s “The Drama Triangle: Victims, Rescuers and Persecutors” walks through a series of steps, each discussed in good detail:

  1. Be aware that the game is occurring
  2. Be willing to acknowledge the role or roles you are playing
  3. Be willing to look at the payoffs you get from playing those roles
  4. Disengage
  5. Avoid being sucked into other people’s battles
  6. Take responsibility for your behavior
  7. Breathe

There’s also useful advice at “Manipulation and Relationship Triangles.” I encourage you to spend a few minutes before the big conferences of the summer to think about what the drama triangle means in our professional lives, and see if we can do a little better this year.

Security Lessons from Healthcare.gov

There’s a great “long read” at CIO, “6 Software Development Lessons From Healthcare.gov’s Failed Launch.” It opens:

This article tries to go further than the typical coverage of Healthcare.gov. The amazing thing about this story isn’t the failure. That was fairly obvious. No, the strange thing is the manner in which often conflicting information is coming out. Writing this piece requires some archeology: Going over facts and looking for inconsistencies to assemble the best information about what’s happened and pinpoint six lessons we might learn from it.

There’s a lot there, and I liked it even before lesson 6 (“Threat Modeling Matters”). Open analysis is generally better.

There’s a question of why this has to be done by someone like Matthew Heusser. No disrespect is intended, but why isn’t Healthcare.gov performing these analyses and sharing them? Part of the problem is that we live in an “outrage world” where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.

It would be great to see analyses of, and attempts to learn from, more projects that go sideways. But it would also be great to see these for security failures. As I asked in “What Happened At OPM,” we have these major hacks, and we learn nothing at all from them. (Or worse, we learn bad lessons, such as “don’t go looking for breaches.”)

The definition of insanity is doing the same thing over and over and hoping for different results. (Which may include asking the same question or writing the same blog post over and over, which is why I’m starting a company to improve security effectiveness.)

On Language

I was irked to see a tweet “Learned a new word! Pseudoarboricity: the number of pseudoforests needed to cover a graph. Yes, it is actually a word and so is pseudoforest.” The idea that some letter combinations are “actual words” implies that others are “not actual words,” and thus, that there is some authority who may tell me what letter combinations I am allowed to use or understand.

Balderdash. Adorkable balderdash, but balderdash nonetheless.

As any student of Orwell shall recall, the test of language is its comprehensibility, not its adhesion to some standard. As an author, I sometimes hear from people who believe themselves to be authorities, or who believe that they may select for me authorities as to the meanings of words, and who wish to tell me that my use of the word “threat” threatens their understanding, that the preface’s explicit discussion of the many plain meanings of the word is insufficient, or that my sentences are too long, comma-filled, dash deficient or otherwise Oxfordless in a way which seems to cause them to feel superior to me in a way they wish to, at some length, convey.

In fact, on occasion, they are irked. I recommend to them, and to you, “You Are What You Speak.”

I wish them the best, and fall back, if you’ll so allow, to a comment from another master of language, speaking through one of his characters:

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

The Web We Have to Save

Hossein Derakhshan was recently released from jail in Iran. He’s written a long and thoughtful article “The Web We Have to Save.” It’s worth reading in full, but here’s an excerpt:

Some of it is visual. Yes, it is true that all my posts on Twitter and Facebook look something similar to a personal blog: They are collected in reverse-chronological order, on a specific webpage, with direct web addresses to each post. But I have very little control over how it looks like; I can’t personalize it much. My page must follow a uniform look which the designers of the social network decide for me.

The centralization of information also worries me because it makes it easier for things to disappear. After my arrest, my hosting service closed my account, because I wasn’t able to pay its monthly fee. But at least I had a backup of all my posts in a database on my own web server. (Most blogging platforms used to enable you to transfer your posts and archives to your own web space, whereas now most platforms don’t let you so.) Even if I didn’t, the Internet archive might keep a copy. But what if my account on Facebook or Twitter is shut down for any reason? Those services themselves may not die any time soon, but it would be not too difficult to imagine a day many American services shut down accounts of anyone who is from Iran, as a result of the current regime of sanctions. If that happened, I might be able to download my posts in some of them, and let’s assume the backup can be easily imported into another platform. But what about the unique web address for my social network profile? Would I be able to claim it back later, after somebody else has possessed it? Domain names switch hands, too, but managing the process is easier and more clear— especially since there is a financial relationship between you and the seller which makes it less prone to sudden and untransparent decisions.

But the scariest outcome of the centralization of information in the age of social networks is something else: It is making us all much less powerful in relation to governments and corporations.

Ironically, I tweeted a link, but I think I’m going to try to go back to more blogging, even if the content might fit somewhere else. Hossein’s right. There’s a web here, and we should work to save it.

(Previous mentions of Hossein: “Hoder’s Denial,” “Free Hossein Derakhshan.”)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published A Timeline of Government Data Breaches:
OPM Data Breach

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa. If we apply this sort of approach then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take, for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.
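The branching above can be sketched as a small tree of why-questions. Here’s a minimal, hypothetical sketch in Python; the node text is illustrative shorthand for the chains above, not an official finding from any OPM report:

```python
# Sketch of a Five Whys analysis as a tree: each answer to "why?" can
# branch into alternate causal paths, as in the OPM examples above.
# All node text is illustrative, not an established root cause.

class Why:
    def __init__(self, answer, children=None):
        self.answer = answer
        self.children = children or []

    def paths(self, prefix=()):
        """Yield every root-to-leaf chain of answers."""
        chain = prefix + (self.answer,)
        if not self.children:
            yield chain
        else:
            for child in self.children:
                yield from child.paths(chain)

root = Why(
    "Foreigners were able to download the OPM database",
    [
        Why("Lack of two-factor authentication",
            [Why("Some critical systems don't support 2FA",
                 [Why("Lack of budget for upgrades and testing")])]),
        Why("Focus on locking doors while intruders are in the house",
            [Why("Finding stealthy intruders is hard",
                 [Why("Networks are chaotic and change frequently"),
                  Why("Too few published IoCs to look for")])]),
    ],
)

for path in root.paths():
    print(" -> ".join(path))
```

The point of the structure is that a single "why?" rarely has a single answer; each alternate branch is a different path an investigation (or a budget request) might follow.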

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post, we “should declare the causes which impel them to the separation,” or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been causal, but the attacker would have worked around it. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.

A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and about whether there are others that make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?

Wassenaar Restrictions on Speech

[There are broader critiques by Katie Moussouris of HackerOne at “Legally Blind and Deaf – How Computer Crime Laws Silence Helpful Hackers” and Halvar Flake at “Why changes to Wassenaar make oppression and surveillance easier, not harder.” This post addresses the free speech issue.]

During the first crypto wars, cryptography was regulated under the US ITAR regulations as a dual use item, and to export strong crypto (and thus, economically to include it in a generally available commercial or open source product) was effectively impossible.

A principle of our successful work to overcome those restrictions was that code is speech. Thus restrictions on code are restrictions on speech. The legal incoherence of the regulations was brought to an unavoidable crisis by Phil Karn, who submitted both the book Applied Cryptography and a floppy disk with the source code from the book for an export license. The book received a license; the disk did not. This was obviously incoherent and Kafka-esque. At the time, American acceptance of incoherent, Kafka-esque rules was in much shorter supply.

Now, the new Wassenaar rules appear to contain restrictions on the export of a different type of code (page 209, category 4, see after the jump). (FX drew attention to this issue in this tweet. [Apparently, I wrote this in Jan, 2014, and forgot to hit post.])

A principle of our work was that code is speech. Thus restrictions on code are restrictions on speech. (Stop me if you’ve heard this one before.) I put forth several tweets that contain PoC I was able to type from memory, each of which, I believe, in principle, could violate the Wassenaar rules. For example:

  • rlogin -froot $target
  • echo wiz | nc $target 25

It would be nice if someone would file the paperwork to export them on paper.

In these tweets, I’m not speaking for my employer or yours. I am speaking for poor, tired and hungry cryptographers, yearning to breathe free, and to not live on groundhog day.


Conference Etiquette: What’s New?

So Bill Brenner has a great article on “How to survive security conferences: 4 tips for the socially anxious.” I’d like to stand by my 2010 guide to “Black Hat Best Practices,” and augment it with something new: a word on etiquette.

Etiquette is not about what fork you use (start from the outside, work in), or an excuse to make you uncomfortable because you forgot to call the Duke “Your Grace.” It’s a system of tools to help otherwise awkward social interactions go more smoothly.

We all meet a lot of people at these conferences, and there’s some truth behind the stereotype that people in technology are bad at “the people skills.” Sometimes, when we see someone, there will be recognition, but the name and full context don’t come rushing back. That’s an awkward moment, and it’s worth thinking about the etiquette involved.

When you know you’ve met someone and can’t recall the details, it’s rude to say “remind me who you are,” and so people will do a bunch of things to politely encourage reminders. For example, they’ll say “what’s new” or “what have you been working on lately?” Answers like “nothing new” or “same old stuff” are not helpful to the person who asked. This is an invitation to talk about your work. Even if you haven’t done anything new that’s ready to talk about, you can say something like “I’m still exploring the implications of the work I did on X” or “I’ve wrapped up my project on Y, and I’m looking for a new thing to go frozzle.” If all your work is secret, you can say “Oh, still at DoD, doing stuff for Uncle Sam.”

Whatever your answer will be, it should include something to help people remember who you are.

Why not give it a try this RSA?

BTW, you can get the best list of RSA parties where you can yell your answers to such questions at “RSA Parties Calendar.”

Boyd Video: Patterns of Conflict

John Boyd’s ideas have had a deep impact on the world. He created the concept of the OODA Loop, and talked about the importance of speed (“getting inside your opponent’s loop”) and orientation, and how we determine what’s important.

A lot of people who know about the work of John Boyd also know that he rarely took the time to write. His work was constantly evolving, and for many years, the work existed as scanned photocopies of acetate presentation slides.

In 2005, I reviewed Robert Coram’s book about Boyd, and in that review, I said:

His writings are there to support a presentation; many of them don’t stand well on their own. Other writers present his ideas better than he did. But they don’t think with the intensity, creativity, or rigor that he brought to his work.

I wasn’t aware that there was video of him presenting, but Jasonmbro has uploaded approximately 5 hours of Boyd presenting his Patterns of Conflict briefing. The audio is not great, but it’s not unusable. There’s an easy-to-read version of that slide collection here. (Those slides are a little later than the video, and so may not line up perfectly.)

An Infosec lesson from the “Worst Play Call Ever”

It didn’t take long for the Seahawks’ game-losing pass to get a label.

But as Ed Felten explains, there’s actually some logic to it, and one of his commenters (Chris) points out that Marshawn Lynch scored in only one of his 5 runs from the one-yard line this season. So, perhaps in a game in which the Patriots had no interceptions, it was worth the extra play before the clock ran out.

We can all see the outcome, and we judge, post-facto, the decision on that.

Worst play call ever

In security, we almost never see an outcome so closely tied to a decision. As Jay Jacobs has pointed out, we live in a wicked environment. Unfortunately, we’re quick to snap to judgement when we see a bad outcome. That makes learning harder. Also, we don’t usually get a chance to see the logic behind a play and assess it.

If only we had a way to shorten those feedback loops, then maybe we could assess what the worst play call in infosec might be.

And in fact, despite my use of snarky linkage, I don’t think we know enough to judge Sony or ChoicePoint. The decisions made by Spaltro at Sony are not unusual. We hear them all the time in security. The outcome at Sony is highly visible, but is it the norm, or is it an outlier? I don’t think we know enough to know the answer.

Hindsight is 20/20 in football. It’s easy to focus in on a single decision. But the lesson from Moneyball, and the lesson from Pete Carroll, is in his response: “Really, with no second thoughts or hesitation in that at all.” He has a system, and it got the Seahawks to the very final seconds of the game. And then.

One day, we’ll be able to tell management “our systems worked, and we hit really bad luck.”

[Please keep comments civil, like you always do here.]

The Unexpected Meanings of Facebook Privacy Disclaimers

Paul Gowder has an interesting post over at Prawfblog, “In Defense of Facebook Copyright Disclaimer Status Updates (!!!).” He presents the facts:

…People then decide that, hey, goose, gander, if Facebook can unilaterally change the terms of our agreement by presenting new ones where, theoretically, a user might see them, then a user can unilaterally change the terms of our agreement by presenting new ones where, theoretically, some responsible party in Facebook might see them. Accordingly, they post Facebook statuses declaring that they reserve all kinds of rights in the content they post to Facebook, and expressly denying that Facebook acquires any rights to that content by virtue of that posting.

Before commenting on his analysis, which is worth reading in full, there’s an important takeaway, which is that even on Facebook, and even with Facebook’s investment in making their privacy controls more usable, people want more privacy while they’re using Facebook. Is that everyone? No, but it’s enough for the phenomenon of people posting these notices to get noticed.

His analysis instead goes to what we can learn about how people see the law:

To the contrary, I think the Facebook status-updaters reflect both cause for hope and cause for worry about our legal system. The cause for worry is that the system does seem to present itself as magic words. The Facebook status updates, like the protests of the sovereign citizens (but much more mainstream), seem to me to reflect a serious alienation of the public from the law, in which the law isn’t rational, or a reflection of our collective values and ideas about how we ought to treat one another and organize our civic life. Instead, it’s weaponized ritual, a set of pieces of magic paper or bits on a computer screen, administered by a captured priesthood, which the powerful can use to exercise that power over others. With mere words, unhinged from any semblance of autonomy or agreement, Facebook can (the status-updaters perceive) whisk away your property and your private information. This is of a kind with the sort of alienation that I worried about over the last few posts, but in the civil rather than the criminal context: the perception that the law is something done to one, rather than something one does with others as an autonomous agent as well as a democratic citizen. Whether this appears in the form of one-sided boilerplate contracts or petty police harassment, it’s still potentially alienating, and, for that reason, troubling.

This is spot-on. Let me extend it. These “weaponized rituals” are not just at the level of the law. Our institutions are developing antibodies to unscripted or difficult-to-categorize human participation, because engaging with human participation is expensive to deliver and inconvenient to the organization. We see this in the increasingly ritualized engagement with the courts. Despite regular attempts to make courts operate in plain English, it becomes a headline when “Prisoner wins Supreme Court case after submitting handwritten petition.” (Yes, the guy’s apparently otherwise a jerk, serving a life sentence.) Comments to government agencies are now expected to follow a form (and regular commenters learn to follow it, lest their comments engage the organizational antibodies on procedural grounds). When John Oliver suggested writing to the FCC, its systems crashed and it had to extend the deadline. Submitting Freedom of Information requests to governments, originally meant to increase transparency and engagement, has become so scripted that there are web sites to track your requests and departmental failures to comply with the statutory timelines. We have come to accept that our legislators and regulators are looking out for themselves, and no longer ask them to focus on societal good. We are pleasantly surprised when they pay more than lip service to anything beyond their agency’s remit. In such a world, is it any surprise that most people don’t bother to vote?

Such problems are not limited to the law. We no longer talk to the man in the gray flannel suit; we talk to someone reading from a script he wrote. Our interactions with organizations are fenceposted by vague references to “policy.” Telephone script-readers are so irksome to deal with that we all put off making calls, because we know that even asking for a supervisor barely helps. (This underlies why rage-tweeting can actually help cut red tape; it summons a different department to try to work your way through a problem created by intra-organizational shuffling of costs.) Sometimes the references to policy are not vague but precise, and the precision itself is a cost-shifting ritual. By demanding a form that’s convenient to itself, an organization can simultaneously call for engagement while making that engagement expensive and frustrating. When engaging requires understanding the system as well as those who are immersed in it, engagement is discouraged. We can see this at Wikipedia, for example, discussed in a blog post like “The Closed, Unfriendly World of Wikipedia.” Wikipedia has evolved a system for managing disputes, and that system is ritualized. Danny Sullivan doesn’t understand why they want him to jump through hoops and express himself in the way that makes it easy for them to process.

Such ritualized forms of engagement display commitment to the organization. This can inform our understanding of how social engineers work. Much of their success at impersonating employees comes from being fluent in the use of a victim’s jargon, and in the 90s, much of what was published in 2600 was lists of Ma Bell’s acronyms or descriptions of operating procedures. People believe that only an employee would bother to learn such things, and so learning such things acts as an authenticator in ways that infuriate technical system designers.

What Gowder calls rituals can also be viewed as protocols (or protocol messages). They are the formalized, algorithm-friendly, state-machine-altering messages, and thus we’ll see more of them.

Such growth makes systems brittle, as they focus on processing those messages and not others. Brittle systems break in chaotic and often ugly ways.
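As a toy illustration of that brittleness (entirely hypothetical, not modeled on any real organization’s system), consider a message handler that accepts only one rigid message format: anything unscripted is rejected outright, with no path to a human.

```python
# Toy sketch of a "ritualized" protocol handler: it processes only
# messages that match its expected form, and rejects everything else.
# Hypothetical example; the message format is invented for illustration.

import re

FORM = re.compile(r"^REQUEST\[(?P<dept>\w+)\]:(?P<body>.+)$")

def brittle_handle(message):
    match = FORM.match(message)
    if not match:
        # The system has no state for unscripted input: engagement is
        # denied on procedural grounds, regardless of the content's merit.
        return "REJECTED: malformed request"
    return f"ROUTED to {match.group('dept')}: {match.group('body')}"

print(brittle_handle("REQUEST[records]:please correct my file"))
print(brittle_handle("Hello, I have a problem with my file"))  # rejected
```

The second caller has a legitimate request, but because it arrives outside the sanctioned form, the system cannot even represent it, let alone respond with empathy.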

So let me leave this with a question: how can we design systems which scale without becoming brittle, and also allow for empathy?