[Update, Feb 20 2017: More reading: Trump and the ‘Society of the Spectacle’.]
“We’ll have more guards. We’re going to try to have a ‘goat guarantee’ the first weekend,” deputy council chief Helene Åkerlind, representing the local branch of the Liberal Party, told newspaper Gefle Dagblad.
“It is really important that it stays standing in its 50th year,” she added to Arbetarbladet.
Gävle Council has decided to allocate an extra 850,000 kronor ($98,908) to the goat’s grand birthday party, bringing the town’s Christmas celebrations budget up to 2.3 million kronor this year. (“Swedes rally to protect arson-prone yule goat”)
Obviously, what you need to free up that budget is more burning goats. Or perhaps it’s a credible plan for why spending it will reduce risk. I’m never quite sure.
Image: The goat’s mortal remains, immortalized in 2011 by Lasse Halvarsson.
When I think about how to threat model well, one of the elements that is most important is how much people need to keep in their heads, the cognitive load if you will.
In reading Charlie Stross’s blog post, “Writer, Interrupted” this paragraph really jumped out at me:
One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.
One of the reasons that I’m fond of diagrams is that they allow the threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.
Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, either as a hobby like knitting or in the careful chef preparing a fine meal.
I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.
That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)
So what are the advantages and disadvantages of each?
- Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
- Ease of use. A whiteboard is still easier than just about any other drawing tool.
- Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work, it may be a conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
- Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
- Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone around.
- Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?”
- Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
- Photographs of whiteboards are hard to archive and search without further processing.
- Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
- Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, probably worth digging into.)
I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?
At the RMS blog, we learn they are “Launching a New Journal for Terrorism and Cyber Insurance:”
Natural hazard science is commonly studied at college, and to some level in the insurance industry’s further education and training courses. But this is not the case with terrorism risk. Even if insurance professionals learn about terrorism in the course of their daily business, as they move into other positions, their successors may begin with hardly any technical familiarity with terrorism risk. It is not surprising therefore that, even fifteen years after 9/11, knowledge and understanding of terrorism insurance risk modeling across the industry is still relatively low.
There is no shortage of literature on terrorism, but much has a qualitative geopolitical and international relations focus, and little is directly relevant to terrorism insurance underwriting or risk management.
This is particularly exciting as Gordon Woo was recommended to me as the person to read on insurance math in new fields. His Calculating Catastrophe is comprehensive and deep.
It will be interesting to see who they bring aboard to complement the very strong terrorism risk team on the cyber side.
No, seriously. Articles like “Microsoft Secure Boot key debacle causes security panic” and “Bungling Microsoft singlehandedly proves that golden backdoor keys are a terrible idea” draw on words in an advisory to say that this is all about golden keys and secure boot. This post is not intended to attack anyone (researchers, journalists or Microsoft), but to address a rather inflammatory claim that’s being repeated.
Based on my read of an advisory copy (which I made because I cannot read words on an animated background (yes, I’m a grumpy old man (who uses too many parentheticals (especially when I’m sick)))), this is a nice discovery of an authorization failure.
What they found is:
The “supplemental” policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn’t know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy. The “supplemental” policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don’t contain any BCD rules either, which means that if they are loaded, you can enable testsigning.
That’s a fine discovery and a nice vuln. There are ways Microsoft might have designed this better, I’m going to leave those for another day.
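To make the distinction concrete, here is a minimal Python sketch of that kind of authorization failure. Every name and structure below is hypothetical; this is the shape of the flaw as the advisory describes it, not Microsoft’s actual bootmgr code.

```python
def verify_signature(policy):
    # Stand-in: assume the signature check passes, as it did in the
    # real case -- supplemental policies are validly signed.
    return True

def load_policy(policy, current_device_id, legacy_bootmgr=True):
    """Return the effective boot policy, or raise PermissionError."""
    verify_signature(policy)
    if legacy_bootmgr:
        # The flaw: an older bootmgr doesn't know about "supplemental"
        # policies. It sees a validly signed policy with no DeviceID
        # (so no device-binding check fails) and no BCD rules (so
        # nothing forbids testsigning), and accepts it as complete.
        device = policy.get("DeviceID")
        if device is not None and device != current_device_id:
            raise PermissionError("policy bound to another device")
        return policy
    # A patched bootmgr rejects a policy lacking base-policy fields.
    if "DeviceID" not in policy or "BCDRules" not in policy:
        raise PermissionError("supplemental policy cannot stand alone")
    return policy

supplemental = {"testsigning": True}  # no DeviceID, no BCDRules
print(load_policy(supplemental, "my-device"))  # legacy check accepts it
```

Nothing here involves a leaked key: the signature verification works exactly as designed, and the bug is purely in what the consumer of the signed object is willing to accept.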
Where the post goes off the rails, in my view, is this:
About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a “secure golden key” is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don’t understand still? Microsoft implemented a “secure golden key” system. And the golden keys got released from MS own stupidity. Now, what happens if you tell everyone to make a “secure golden key” system?
So, no, they did not. No, it didn’t. Even a stopped clock …
You could design a system in which there’s a master key, and accidentally release that key. Based on the advisory, Microsoft has not done that. (I have not talked to anyone at MS about this issue; I might have talked to people about the overall design, but don’t recall having done so.) What this is is an authorization system with a design flaw. As far as I can tell, no keys have been released.
Look, there are excellent reasons to not design a “golden key” system. I talked about them at a fundamental engineering level in my threat modeling book, and posted the excerpt in “Threat Modeling Crypto Back Doors.”
The typical way the phrase “golden key” is used (albeit fuzzily) is that there is a golden key which unlocks communications. That is a bad idea. This is not that, and we as engineers or advocates should not undercut our position on that bad idea by referring to this research as if it really bears on that “debate.”
“Better safe than sorry” are the closing words in a NYT story, “A Colorado Town Tests Positive for Marijuana (in Its Water).”
Now, I’m in favor of safety, and there’s a tradeoff being made. Shutting down a well reduces safety by limiting the supply of water, and in this case, they closed a pool, which makes it harder to stay cool in 95 degree weather.
At Wired, Nick Stockton does some math, and says “IT WOULD TAKE A LOT OF THC TO CONTAMINATE A WATER SUPPLY.” (Shouting theirs.)
High-potency THC extract is pretty expensive. One hundred dollars for a gram of the stuff is not an unreasonable price. If this was an accident, it was an expensive one. If this was a prank, it was financed by Bill Gates…Remember, the highest concentration of THC you can physically get in a liter of water is 3 milligrams.
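Stockton’s arithmetic is easy to reproduce using only the two numbers from the quote (3 mg/L solubility, $100 per gram). The tank volume below is my own assumption, purely for illustration, of a small municipal storage tank:

```python
# Back-of-the-envelope version of Stockton's math.
saturation_mg_per_liter = 3     # max solubility of THC, per the quote
price_per_gram = 100            # USD per gram of extract, per the quote
tank_liters = 1_000_000         # assumed small storage tank (my number)

thc_grams = saturation_mg_per_liter * tank_liters / 1000  # mg -> g
cost = thc_grams * price_per_gram
print(f"{thc_grams:,.0f} g of extract, about ${cost:,.0f}")
# -> 3,000 g of extract, about $300,000
```

Even a modest tank needs kilograms of extract to reach saturation, which is why the “expensive prank” framing passes the math but not the giggle test.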
Better safe than sorry is a tradeoff, and we should talk about it as such.
Even without drinking the, ummm, kool-aid, this doesn’t pass the giggle test.
I always get a little frisson of engineering joy when I drive over the Tacoma Narrows bridge. For the non-engineers in the audience, the first Tacoma Narrows bridge famously twisted itself to destruction in a 42-mph wind.
The bridge was obviously unstable even during initial construction (as documented in “Catastrophe to Triumph: Bridges of the Tacoma Narrows.”) And so when it started to collapse, several movie cameras were there to document the event, which is still studied and analyzed today.
Today, people are tired of hearing about bridges collapsing. These stories undercut confidence, and bridge professionals are on top of things (ahem). When a bridge collapses, there’s a risk of a lawsuit, and if that was happening, no company could deliver bridges at a reasonable price. We cannot account for the way that wind behaves in the complex fiords of the Puget Sound.
Of course, these are not the excuses of bridge builders, but of security professionals.
I always get a little frisson of engineering joy when I drive over the Tacoma Narrows bridge, and marvel at how we’ve learned from previous failures.
“My father likes to keep some anonymity. It’s who he is. It’s who he is as a person,” Eric Trump said.
It should have been obvious.
(Quote from Washington Post, July 6, 2016).
So I have a very specific question about the “classified emails”, and it seems not to be answered by “Statement by FBI Director James B. Comey on the Investigation of Secretary Hillary Clinton’s Use of a Personal E-Mail System.” A few quotes:
From the group of 30,000 e-mails returned to the State Department, 110 e-mails in 52 e-mail chains have been determined by the owning agency to contain classified information at the time they were sent or received. Eight of those chains contained information that was Top Secret at the time they were sent; 36 chains contained Secret information at the time; and eight contained Confidential information, which is the lowest level of classification. Separate from those, about 2,000 additional e-mails were “up-classified” to make them Confidential; the information in those had not been classified at the time the e-mails were sent.
For example, seven e-mail chains concern matters that were classified at the Top Secret/Special Access Program level when they were sent and received. These chains involved Secretary Clinton both sending e-mails about those matters and receiving e-mails from others about the same matters. There is evidence to support a conclusion that any reasonable person in Secretary Clinton’s position, or in the position of those government employees with whom she was corresponding about these matters, should have known that an unclassified system was no place for that conversation.
Separately, it is important to say something about the marking of classified information. Only a very small number of the e-mails containing classified information bore markings indicating the presence of classified information. But even if information is not marked “classified” in an e-mail, participants who know or should know that the subject matter is classified are still obligated to protect it.
I will state that there is information which is both classified and available to the public. For example, the Snowden documents are still classified, and I have friends with clearances who need to leave conversations when they come up. They are, simultaneously, publicly available. There is a legalistic position that such information is only classified. Such rejection of reality is uninteresting to me.
I can read Comey’s statements two ways. One is that Clinton was discussing Snowden documents, which she likely needed to do as Secretary of State. The other is that she was discussing information which was not both public and classified. My assessment of her behavior is dependent on knowing this.
Are facts available to distinguish between these cases?
Since 2005, this blog has had a holiday tradition of posting “The unanimous Declaration of the thirteen united States of America.” Never in our wildest, most chaotic dreams, did we imagine that the British would one day quote these opening words:
When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation. [Ed: That article is jargon-laden, and interesting if you can wade past it.]
So, while it may be chaotic in the most negative of senses, there’d be some succor should we see a succinct success as England secedes from the United Kingdom. Of course, London, West-Virginia-style, secedes from said secession. Obviously, after this, the United Kingdom of Scotland, Northern Ireland and London should remain a part of the EU, dramatically simplifying the negotiation.
Or, perhaps, in light of the many British who were apparently confused about the idea that Leave meant Leave, or the 2% margin of the vote, it would be reasonable and democratic to hold another election to consider what should happen. A problem with democracy is often that a majority, however slim, votes in a way that impacts the rights of a minority, and, whilst we’re waxing philosophic, we would worry were the rights of that minority so dramatically impacted as the result of a non-binding vote. Perhaps a better structure to reduce chaos in the future is two votes, each tied to some super-majority. A first to negotiate, and a second to approve the result.
It doesn’t seem like so revolutionary an idea.
I’m excited to see the call for papers for Passwords 2016.
There are a few exciting elements.
- First, passwords are in a category of problems that someone recently called “garbage problems.” They’re smelly, messy, and no one really wants to get their hands dirty on them.
- Second, they’re important. Despite their very well-known disadvantages, their failure to match any useful security model, and despite Bill Gates saying that we’d be done with them within the decade, they have advantages, and have been hard to displace.
- Third, they suffer from a common belief that everything to be said has been said.
- Fourth, the conference has a variety of submission types, including academic papers and hacker talks. This is important because there are many security research communities, doing related work, and not talking. Maybe the folks at passwords can add an anonymous track, for spooks and criminals willing to speak on their previously undocumented practices via skype or SnowBot? (Ideally, via the SnowBot, as PoC.)
Studying the real problems which plague us is a discipline that medicine and public health have developed. Their professions have space for everyone to talk about the real problems that they face, and there’s a clear focus on “have we really addressed this plague?”
While it’s fun, and valuable, to go down the memory corruption, crypto math, and other popular topics at security conferences, it’s nicer to see people trying to focus on a real cyber problem that hits every time we look at a system design.
Image: Mary E. Chollet, via Karen Kapsanis.
Bruce Schneier comments on “Apple’s Differential Privacy:”
So while I applaud Apple for trying to improve privacy within its business models, I would like some more transparency and some more public scrutiny.
Do we know enough about what’s being done? No, and my bet is that Apple doesn’t know precisely what they’ll ship, and aren’t answering deep technical questions so that they don’t mis-speak. I know that when I was at Microsoft, details like that got adjusted as a bigger pile of real data from real customer use informed things. I saw some really interesting shifts surprisingly late in the dev cycle of various products.
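Apple has not published its mechanism, so no outside sketch can claim to be it. But the simplest construction in the differential-privacy family, classic randomized response, illustrates the bargain under discussion: each individual answer is noisy and deniable, while aggregates remain estimable. All of the code below is a toy illustration, not Apple’s design.

```python
import random

def randomized_response(truth: bool) -> bool:
    """Flip a coin; on heads answer truthfully, on tails answer with a
    second coin flip. No single answer reveals the respondent's truth."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_rate(answers):
    """The expected fraction of 'yes' is 0.25 + 0.5 * p, where p is the
    true rate, so invert that to recover an estimate of p."""
    yes = sum(answers) / len(answers)
    return (yes - 0.25) / 0.5

random.seed(0)
true_rate = 0.3
answers = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]
print(round(estimate_rate(answers), 2))  # close to 0.3
```

The tradeoff the debate is really about lives in the parameters: less noise means better estimates for the collector and weaker deniability for the user, and those knobs are exactly the details we can’t yet inspect.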
I also want to challenge the way Matthew Green closes: “If Apple is going to collect significant amounts of new data from the devices that we depend on so much, we should really make sure they’re doing it right — rather than cheering them for Using Such Cool Ideas.”
But that is a false dichotomy, and would be silly even if it were not. It’s silly because we can’t be sure if they’re doing it right until after they ship it, and we can see the details. (And perhaps not even then.)
But even more important, the dichotomy is not “are they going to collect substantial data or not?” They are. The value organizations get from being able to observe their users is enormous. As product managers observe what A/B testing in their web properties means to the speed of product improvement, they want to bring that same ability to other platforms. Those that learn fastest will win, for the same reasons that first to market used to win.
Next, are they going to get it right on the first try? No. Almost guaranteed. Software, as we learned a long time ago, has bugs. As I discussed in “The Evolution of Secure Things:”
It’s a matter of the pressures brought to bear on the designs of even what (we now see) as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.
Green (and Schneier) are right to be skeptical, and may even be right to be cynical. We should not lose sight of the fact that Apple is spending rare privacy engineering resources to do better than Microsoft. Near as I can tell, this is an impressive delivery on the commitment to be the company that respects your privacy, and I say that believing that there will be both bugs and design flaws in the implementation. Green has an impressive record of finding and calling Apple (and others) on such, and I’m optimistic he’ll have happy hunting.
In the meantime, we can, and should, cheer Apple for trying.
C-3PO: Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.
Han Solo: Never tell me the odds.
I was planning to start this with a C-3PO quote, and then move to a discussion of risk and risk taking. But I had forgotten just how rich a vein George Lucas tapped into with 3PO’s lines in The Empire Strikes Back. So I’m going to talk about his performance prior to being encouraged to find a non-front-line role with the Rebellion.
In case you need a refresher on the plot, having just about run out of options, Han Solo decides to take the known, high risk of flying into an asteroid field. When he does, 3PO interrupts to provide absolutely useless information. There’s nothing about how to mitigate the risk (except surrendering to the Empire). There’s nothing about alternatives. Then 3PO pipes up to inform people that he was previously wrong:
C-3PO: Artoo says that the chances of survival are 725 to 1. Actually Artoo has been known to make mistakes… from time to time… Oh dear…
I have to ask: How useless is that? “My first estimate was off by a factor of 5, you should trust this new one?”
C-3PO: I really don’t see how that is going to help! Surrender is a perfectly acceptable alternative in extreme circumstances! The Empire may be gracious enough to… [Han signals to Leia, who shuts 3PO down.]
Most of the time, being shut down in a meeting isn’t this extreme. But there’s a point in a discussion, especially in high-pressure situations, where the best contribution is silence. There’s a point at which talking about the wrong thing at the wrong time can cost credibility that you’d need later. And while the echo in the dialogue is for comic effect, the response certainly contains a lesson for us all:
C-3PO: The odds of successfully surviving an attack on an Imperial Star Destroyer are approximately…Leia: Shut up!
And the eventual outcome:
C-3PO: Sir, If I may venture an opinion…Han Solo: I’m not really interested in your opinion 3PO.
Does C-3PO resemble any CSOs you know? Any meetings you’ve been in? Successful business people are excellent at thinking about risk. Everything from launching a business to hiring employees to launching a new product line involves risk tradeoffs. The good business people either balance those risks well, or they transfer them away in some way (ideally, ethically). What they don’t want or need is some squeaky-voiced robot telling them that they don’t understand risk.
So don’t be C-3PO. Provide useful input at useful times, with useful options.
Originally appeared in Dark Reading, “Security Lessons from C-3PO, Former CSO of the Millennium Falcon,” as part of a series I’m doing there, “security lessons from..”. So far in the series, lessons from: my car mechanic, my doctor, The Gluten Lie, and my stock broker.
There is a spectre haunting the internet, the spectre of drama. All the powers of the social media have banded together to not fight it, because drama increases engagement statistics like nothing else: Twitter and Facebook, Gawker and TMZ, BlackLivesMatter and GamerGate, Donald Trump and Donald Trump, the list goes on and on.
Where is the party that says we shall not sink to the crazy? Where is the party which argues for civil discourse? The clear, unarguable result is that drama is universally acknowledged to be an amazingly powerful tactic for getting people engaged on your important issue, the exquisite pain which so long you have suffered in silence, and are now compelled to speak out upon! But, reluctantly, draw back, draw breath, draw deep upon your courage before unleashing, you don’t want to, but musts be musts, and so unleashing the hounds of drama, sadly, reluctantly, but…
In this post, I’m going to stop aping the Communist Manifesto, stop aping drama lovers, and discuss some of the elements I see which make up a rhetorical “style guide” for dramatists. I hope that in so doing, I can help build an “immune system” against drama and a checklist for writing well about emotionally-laden issues, rather than a guidebook for creating more. And so I’m going to call out elements and discuss how to avoid them. Drama often includes logical fallacies (see also the “informal list” on Wikipedia.) However, drama is not conditioned on such, and one can make an illogical argument without being dramatic. Drama is about the emotional perception of victim, persecutor and rescuer, and how we move from one state to another… “I was only trying to help! Why are you attacking me?!?” (More on that both later in this article, and here.)
Feedback is welcome, especially on elements of the style that I’m missing. I’m going to use a few articles I’ve seen recently, including “Search and Destroy: The Knowledge Engine and the Undoing of Lila Tretikov.” I’ll also use the recent post by Nadeem Kobeissi “A Cry for Help Against Thomas Ptacek, Serial Abuser,” and “What Happened At The Satoshi Roundtable.”
Which brings me to my next point: drama, in and of itself, is not evidence for or against the underlying claims. I have no opinion on the underlying claims of either article. I am simply commenting on their rhetorical style as having certain characteristics which I’ve noticed in drama. Maybe there is a crisis at the Wikimedia Foundation. Maybe Mr. Ptacek really is unfairly mean to Mr. Kobeissi. I’ve met Nadeem once or twice, he seems like a nice fellow, and I’ve talked with Thomas on and off over more than twenty years, but not worked closely with him. Similarly, retweets, outraged follow-on blogs, and the like do not make a set of facts.
Anyway, on to the rhetorical style of drama:
- Big, bold claims which are not justified. Go read the opening paragraphs of the Wikimedia article, and look for evidence. To avoid this, consider the 5 paragraph essay: a summary, paragraphs focused on topics, and a conclusion.
- The missing link. The Wikimedia piece has a number of places where links could easily bolster the argument. For example, “within just the past 48 hours, employees have begun speaking openly on the web” cries out for two or more links. (It’s really tempting to say “Citation needed” here, but I won’t, see the point on baiting, below.) Similarly, Mr. Kobeissi writes that Ptacek is an “obsessive abuser, a bully, a slanderer and an employer of public verbal sexual degradation that he defends, backs down on and does not apologize for.” To avoid this, link appropriately to original sources so people can judge your claims.
- Mixing fact, opinion and impact. If you want to cause drama, present your opinion on the impact of some other party’s actions as a fact. If you want to avoid drama, consider the non-violent communication patterns, such as “when I hear you say X, my response is Y.” For reasons too complex to go into here, this helps break the drama triangle. (I’ll touch more on that below).
- Length. Like this post, drama is often lengthy, and unlike this post, often beautifully written, recursively (or perhaps just repetitively) looping back over the same point, as if volume is correlated with truth. The Wikimedia article seems to go on and on, and perhaps there’s some more detail, causing you to want to keep reading.
- Behaviors that don’t make sense. If Johnny had gone straight to the police, none of this would ever have happened. If Mr. Kobeissi had contacted Usenix, they could have had Mr. Ptacek recuse himself from the paper based on evidence of two years of conflict. Mr. Kobeissi doesn’t say why this never happened. Oh, and be prepared to have your story judged.
- Baiting and demands. After presenting a litany of wrongs, there’s a set of demands presented, often very specific ones. Much better to ask “Would you like to resolve this? If so, do you have ideas on how?” Also, “if you care about this, it must be your top priority.”
- False dichotomies. After the facts and opinions, or perhaps mixed in with them, there’s an either/or presented. “This must be true, or he would have sued for libel.” (Perhaps someone doesn’t want to spend tens or hundreds of thousands of dollars on lawyers? Perhaps someone has heard of the Streisand effect? The President doesn’t sue everyone who claims he’s a crypto-Muslim.)
- Unstated assumptions. For example, while much of Mr. Kobeissi’s post focuses on last year’s Usenix, that was last year. There’s an unstated assumption that once someone has been on a PC for you, they can’t say mean things about you. And while it would be unprofessional to do so while you’re chairing a conference, how long does that zone extend? We don’t know when Mr. Ptacek was last mean to Mr. Kobeissi. Perhaps he waited a year after being program chair. Mr. Kobeissi probably knows, and he has not told us.
- Failure to assume goodwill, or a mutuality of failure, or that there’s another side to the story. This is the dramatist’s curse, the inability to conceive or concede that the other person may have a side. Perhaps, once, Mr. Kobeissi was young, immature, and offended Mr. Ptacek in a way which is hard to “put behind us.” We all have such people in our lives. An innocent act or comment is taken the wrong way, irrecoverably.
- With us or against us. It’s a longstanding tool of demagogues to paint the world in black and white. There are often important shades of grey. To avoid drama, talk about them.
- I’m being soooo reasonable here! Much like a car salesperson telling you that you can trust them, the dramatic spend a great many words explaining how reasonable they’re being. If you’re being reasonable, show, don’t tell.
Not all drama will have all of these elements, and it may be that things with all of these elements will not be drama. You should assume goodwill on the part of the people whose words you are reading. Oftentimes, drama is accidental, where someone says something which leaves the other party feeling attacked, a rescuer comes in, and around and around the drama triangle we go.
As I wrote in that article on the drama triangle:
One of the nifty things about this triangle — and one of the things missing from most popular discussion of it — is how the participants put different labels on the roles they are playing.
For example, a vulnerability researcher may perceive themselves as a rescuer, offering valuable advice to a victim of poor coding practice. Meanwhile, the company sees the researcher as a persecutor, making unreasonable demands of their victim-like self. In their response, the company calls their lawyers and becomes a persecutor, and simultaneously allows the rescuer to shift to the role of victim.
A failure to respond to drama does not make the dramatist right. Sometimes the best move is to walk away, even when the claims are demonstrably false, even when they are hurtful. The internet can be a wretched hive of scum and drama, and it’s hard to stay clean when wrestling a pig.
Understanding the rhetorical style of drama so that you don’t get swept up in it can reduce the impact of drama on others. Which is not to say that the issues for which drama is generated do not deserve attention. But perhaps attention and urgency can be generated in a space of civilized discourse. (I’m grateful to Elissa Shevinsky for having used that phrase recently; it seems to have been far from many minds.)
I’ve repeatedly spoken out against “think like an attacker.”
Now I’m going to argue from authority. In this long article, “The Obama Doctrine,” the President of the United States says “The degree of tribal division in Libya was greater than our analysts had expected.”
So let’s think about that statement and what it means. First, it means that the multi-billion dollar analytic apparatus of the United States made a mistake, a serious one about which the President cares, because it impacted his foreign policy. Second, that mistake was about how people think. Third, that group of people was a society, and one that has interacted with the United States since, oh, I don’t know, someone wrote words like “From the halls of Montezuma to the shores of Tripoli.” (And dig the Marines, kickin’ it old skool with that video.) Fourth, it was not a group that attempted to practice operational security in any way.
So if we consider that the analytical capability of the US can get that wrong, do you really want to try to think like Anonymous, think like Unit 61398, like Unit 8200? Are you going to do this perfectly, or are there chances to make mistakes? Alternately, do you want to require everyone who threat models to know how attackers think? Understanding how other people think and prioritize requires a great deal of work. There are entire fields, like anthropology and sociology, dedicated to doing it well. Should we start our defense by reading books on the motivational structures of the PLA or the IDF?
The simple fact is, you don’t need to. You can start from what people are building or deploying. (I wrote a book on how.) The second simple fact is repeating that phrase upsets people. When I first joined Microsoft, I used that phrase. One day, a developer grabbed me after a meeting, and politely told me that he didn’t understand it. Oh, wait, this was Microsoft in 2006. He told me I was a fucking idiot and I should give useful advice. After a bit more conversation, he also told me that he had no idea how the fuck an attacker thought, and if I thought he had time to read a book to learn about it, I could write the goddamned features customers pay for while he read.
Every time someone tells me to think like an attacker, I think about that conversation. I appreciate the honesty that the fellow showed, if not his manner. But (as Dave Weinstein pointed out) a generalized form of this would be “Stop giving developers completely un-actionable ‘guidance.’” Now, Dave and I worked together at Microsoft, so maybe there’s a similar experience in his past.
Now, this does not mean that we don’t need to pay attention to what real attackers do. It means that we don’t need to walk a mile in their shoes to defend effectively against it.
Previously, “Think Like An Attacker?,” “The Discipline of “think like an attacker”,” and “Think Like An Attacker? Flip that advice!.” [Edited, also previously, at the New School blog: “Modeling Attackers and Their Motives.”]