What do you want to know about SDL Threat Modeling?

Over on my work blog, I asked:

I’m working on a paper about “Experiences Threat Modeling at Microsoft” for an academic workshop on security modeling. I have some content that I think is pretty good, but I realize that I don’t know all the questions that readers might have.

So, what questions should I try to answer in such a paper? What would you like to know about? No promises that I’ll have anything intelligent to say, but I’d love to know the questions you’re asking. So please. Ask away!

Comment here or there.

Call Centers Will Get More Annoying

There’s an article in “destination CRM,” Who’s Really Calling Your Contact Center?

…the identity questions are “based on harder-to-steal information” than public records and credit reports. “This is much closer to the chest than a lot of the public data being used in other authentication systems,” she says, adding that some companies using public data include Acxiom, ChoicePoint, and LexisNexis. Higginson gives the example of asking someone the birth date of an individual who used to share an address with him. “There is no public data source to have a question like that answered,” Higginson says, arguing that it would take multiple documents to try and piece together exactly who the other individual is, where she lives now, verify that she did at one time share an address with the caller — and then still have to verify her birth date.

A couple of comments:

  • This seems tremendously intrusive. I don’t want some random call center drone to think they know “everything” about me, to quiz me about my life, or to tell me that I’m wrong if I disagree with their database.
  • This perpetuates the idea that we are our data shadows. I’m not a line in a database. I am a living, breathing person.
  • Errors in databases, such as those created by ID theft, become more damaging both to the customer and to your relationship with them.
  • The data being used is likely something like ChoicePoint’s Bridger Insight (PDF). Quoting the press release:

    ProID Quiz lets users authenticate customers’ and prospects’ identities with greater certainty. Prior to servicing an account or conducting a transaction, a customer service representative can generate a “quiz” composed of random, multiple-choice questions. The questions are based on “out of wallet” information such as former roommates or one’s previous home builder.

    So access to the ChoicePoint database becomes even more valuable to thieves (a rough sketch of how such a quiz might be built follows this list).
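To make the mechanism concrete, here is a minimal sketch, in Python and with entirely invented field names and data (this is not ChoicePoint’s actual API or schema), of how an “out of wallet” quiz might be assembled from a data-broker record. The point to notice is that anyone holding the record can pass the quiz.

```python
import random

# Hypothetical illustration only: invented fields and decoys, not the
# real Bridger Insight / ProID Quiz system.
record = {
    "former roommate": "J. Doe",
    "previous home builder": "Acme Homes",
    "street you lived on in 1998": "Elm St",
}

decoys = {
    "former roommate": ["P. Smith", "R. Jones", "K. Lee"],
    "previous home builder": ["Sunrise Builders", "Oakwood", "Pioneer"],
    "street you lived on in 1998": ["Main St", "Oak Ave", "Pine Rd"],
}

def make_quiz(record, decoys, num_questions=3, num_choices=4):
    """Pick random fields from the record and build multiple-choice
    questions: the true answer plus randomly chosen decoys, shuffled."""
    quiz = []
    for field in random.sample(list(record), k=min(num_questions, len(record))):
        choices = random.sample(decoys[field], k=num_choices - 1) + [record[field]]
        random.shuffle(choices)
        quiz.append({"question": f"What was your {field}?",
                     "choices": choices,
                     "answer": record[field]})
    return quiz

for q in make_quiz(record, decoys):
    print(q["question"], q["choices"])
```

The “security” of such a quiz rests entirely on the secrecy of the record, so every compromise of the broker’s database converts directly into an authentication bypass.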

A company that deploys this sort of thing will lose me as a customer. As Debix points out, your real customer knows who they are. Involve them via multi-factor or multi-channel communications.

More generally, this seems symptomatic of a company that has lost sight of its customers. Who stops and thinks, “What our customers really want is to be interrogated; that will make them feel better”?

Silver Bullet podcast transcript

silver-bullet-podcast.jpg
I know there are a lot of people who prefer text to audio. You can skim text much faster. But there are also places where paper or screens are a pain (like on a bus, or while driving). So I’m excited that the Silver Bullet Podcast does both. It’s a huge investment in addressing a variety of use cases.

All of which is to say: you can now read the text of Gary McGraw’s interview of me in PDF form: Adam Shostack on Gary McGraw’s Silver Bullet podcast.

If you missed it, the audio is available at the Silver Bullet site. (Fixed link to point to Silver Bullet.)

Congratulations to the PET Award Winners

pet-award-2008.jpg

Congratulations to Arvind Narayanan and Vitaly Shmatikov! Their paper, “Robust De-Anonymization of Large Sparse Datasets,” has been awarded the 2008 Award for Outstanding Research in Privacy Enhancing Technologies. My employer has a press release explaining how they re-identified records that had been stripped of identifiers in the Netflix dataset. In their acceptance remarks, they mentioned the relevance of their work to the Google-Viacom discussions over how much data would be given to Viacom.
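The core scoring idea behind their attack is simple enough to sketch. What follows is a toy illustration in Python with invented data, not the authors’ code: each record in the “anonymized” dataset is scored against a little auxiliary knowledge about the target (say, a few ratings gleaned from public reviews), rare items are weighted more heavily, and the best-scoring record is the likely match.

```python
import math
from collections import defaultdict

# Toy "anonymized" dataset: record id -> {movie: rating}. Invented data.
dataset = {
    "r1": {"MovieA": 5, "MovieB": 3, "ObscureFilm": 4},
    "r2": {"MovieA": 4, "MovieC": 2},
    "r3": {"MovieB": 3, "MovieD": 5},
}

# Auxiliary knowledge about the target, e.g. from public reviews.
aux = {"MovieA": 5, "ObscureFilm": 4}

# Rare movies are more identifying, so weight each by 1 / log(support).
support = defaultdict(int)
for ratings in dataset.values():
    for movie in ratings:
        support[movie] += 1

def weight(movie):
    return 1.0 / math.log(support[movie] + 1.1)   # +1.1 keeps log() > 0

def score(aux, ratings):
    """Sum the weights of auxiliary items that (approximately) match."""
    return sum(weight(m) for m, r in aux.items()
               if m in ratings and abs(ratings[m] - r) <= 1)

scores = {rid: score(aux, ratings) for rid, ratings in dataset.items()}
best = max(scores, key=scores.get)
print(scores, "-> best candidate:", best)
```

In the real attack the dataset holds hundreds of thousands of records and a match is only declared when the top score stands out clearly from the runner-up, but even this toy version shows why a handful of movie ratings can be as identifying as a name.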

Photo: Nikita Borisov. Shown, from left to right, are Michelle Chibba of the Ontario Privacy Commissioner’s Office, presenting the award, Arvind, and Vitaly; Matthew Wright, chair of the award committee, is in the background.

London’s New Transit Card

transport for London.jpg

Transport for London is trying to get as many people as possible to use Oyster Cards. They are cheaper — and theoretically easier to use — than traditional tube / bus tickets. However, using one means that TfL has a record of your journeys on the transport system, which is something that not everybody is comfortable with.

Photo: Voyeur by Jeff VC

Reproducibility, sharing, and data sensitivity

What made this particular work different was that the packets we captured came through a Tor node. Because of this difference, we took extreme caution in managing these traces and have not and will not plan to share them with other researchers.

Response to Tor Study
I won’t get into parsing what “have not and will not plan to share” means, and will simply assume it means “haven’t shared, and will not share”. So, what we have here are data that are not personally identifying, but are sensitive enough that they cannot be shared, ever, with any other researchers.
What is it about the traces that makes them sensitive, then?
Given this policy, how can this work be replicated? How can it be checked for error, if the data are not shared with anyone?

Bonus rant, unrelated to the Tor paper

I am growing increasingly perturbed at the hoarding of data by those who, as scientific researchers, are presumably interested in the free flow of information and the increase of knowledge for the betterment of humanity.
Undoubtedly not everyone keeps this information to themselves out of a base motive, such as cranking out as many papers as possible before giving anyone else (especially anyone who might be gunning for that same tenure-track position) a shot, but some who play this game no doubt do. It’s unseemly and ultimately counterproductive.
It’s funny — the infosec community just went through an episode where a respected researcher said, in effect, “trust me — I found something important, but I can’t give you the information to verify my claim, lest it be misused by others less noble than we”, and various luminaries took it to be a sign of lingering institutional immaturity. Perhaps, as EE/CS becomes increasingly cross-pollinated with the likes of sociology, psychology, law and economics, the same observation will hold. If so, we should see it coming and do the right things. This is one they teach in pre-school: “Sharing is Caring”.
As an example of what could be done, consider this and this.

Ethics, Information Security Research, and Institutional Review Boards

Several weeks ago, in “A Question of Ethics“, I asked EC readers whether it would be ethical “to deliberately seek out files containing PII as made available via P2P networks”. I had recently read an academic research paper that did just that, and was left conflicted. Part of me wondered whether a review board would pass such a research proposal, or whether the research in the paper was even submitted for review. Another part told me that the information was made publicly available, so my hand-wringing was unwarranted. In the back of my mind, I knew that as information security researchers increasingly used the methods of the social sciences and psychology these ethical considerations would trouble me again.
Through Chris Soghoian’s blog post regarding the ethical and legal perils possibly facing the authors of a paper which describes how they monitored Tor traffic, I realized I was not alone. Indeed, in a brief but cogent paper, Simson Garfinkel describes how even seemingly uncontroversial research activities, such as doing a content analysis on the spam one has received, could run afoul of existing human research subject review guidelines.
Garfinkel argues that strict application of rules governing research involving human subjects can provide researchers with incentives to actively work against the desired effect of the reviews. He further suggests that

society would be better served with broader exemptions that could be automatically applied by researchers without going to an IRB [Institutional Review Board].

My concern at the moment is with the other side of this. I just read a paper which examined the risks of using various package managers. An intrinsic element of the research behind this paper was setting up a mirror for popular packages under false pretenses. I don’t know if this paper was reviewed by an IRB, and I certainly don’t have the expertise needed to say whether it should have been allowed to move forward if it was. However, the fact that deception was used made me uneasy. Maybe that’s just me, but maybe there are nuances that such research is beginning to expose and that we as an emergent discipline should strive to stay on top of.
[Update: The researchers whose Tor study was examined by Soghoian have posted a portion of a review conducted by the University of Colorado:

Based on our assessment and understanding of the issues involved in your work, our opinion was that by any reasonable standard, the work in question was not classifiable as human subject research, nor did it involve the collection of personally identifying information. While the underlying issues are certainly interesting and complex, our opinion is that in this case, no rules were violated by your not having subjected your proposed work to prior IRG scrutiny. Our analysis was confined to this IRG (HRC) issue.

This conclusion is in line with Richard Johnson’s comment below, that this research was not on people, but on network traffic.]

New FISA Analysis

Vox Libertas, a blogger at the Daily Kos, has written an analysis of the new US FISA law in his article, “I think I understand the FISA bill. Do I?”

Vox Libertas has taken an approach that I can appreciate. On the one hand, many people are unhappy with the telecom immunity. I’m one of them. But people I respect are also saying that it’s a good compromise, and compromise means you don’t get everything you want.

Vox Libertas goes to the trouble of (shock, horror) reading the primary sources and explaining what’s in the new FISA bill. He also shows his own sources.

No matter what you think, this is worth reading.

Breaches & Human Rights in Finland

The European Court of Human Rights has ordered the Finnish government to pay out €34,000 because it failed to protect a citizen’s personal data. One data protection expert said that the case creates a vital link between data security and human rights.

The Court made its ruling based on Article 8 of the European Convention on Human Rights, which guarantees every citizen the right to a private life. It said that it was uncontested that the confidentiality of medical records is a vital component of a private life.

The Court ruled that public bodies and governments will fall foul of that Convention if they fail to keep data private that should be kept private.

The woman in the case did not have to show a wilful publishing or release of data, it said. A failure to keep it secure was enough to breach the Convention.

“Data blunders can breach human rights, rules ECHR” on the Pinsent Masons Out-Law blog.