Researchers Two-Faced over Facebook Data Release

[Update: Michael Zimmer points out that it wasn’t Facebook, but outside researchers who released the data.]

I wanted to comment quickly on an interesting post by Michael Zimmer, “On the “Anonymity” of the Facebook Dataset.” He discusses how

A group of researchers have released a dataset of Facebook profile information from a group of college students for research purposes, which I know a lot of people will find quite valuable.

and

Of course, this sounds like an AOL-search-data-release-style privacy disaster waiting to happen. Recognizing this, the researchers detail some of the steps they’ve taken to try to protect the privacy of the subjects, including:

  • All identifying information was deleted or encoded immediately after the data were downloaded.
  • The roster of student names and identification numbers is maintained on a secure local server accessible only by the authors of this study. This roster will be destroyed immediately after the last wave of data is processed.

In the comments, Jason Kaufman implies that the data really isn’t that private, asking what could go wrong, and why would someone post it to Facebook expecting it to remain private.

I have just one question on all of this. If the data isn’t private, why did they attempt to anonymize it?

I believe they attempted to anonymize it because it’s fairly obvious that the data is private, and releasing it with names obviously attached would be pretty shocking. As Michael Zimmer says, “we really need to keep working on a new set of Internet research ethics and methodologies.”

Also, don’t miss Michael Zimmer’s followup post, “More on the anonymity of the Facebook dataset: It’s Harvard College.”

2008 Breaches: More or More Reporting?

Dissent has some good coverage of an announcement from the ID Theft Resource Center, “ITRC: Breaches Blast ’07 Record:”

With slightly more than four months left to go for 2008, the Identity Theft Resource Center (ITRC) has sent out a press release saying that it has already compiled 449 breaches – more than its total for all of 2007.

As they note, the 449 is an underestimate of the actual number of reported breaches, due in part to ITRC’s system of reporting breaches that affect multiple businesses as one incident. This year we have seen a number of such incidents, including Administrative Systems, Inc., two BNY Mellon incidents, SunGard Higher Education, Colt Express Outsourcing, Willis, and the missing GE Money backup tape that reportedly affected 230 companies. Linda Foley, ITRC Founder, informs this site that contractor breaches represent 11% of the 449 breaches reported on their site this year.

I don’t have much to add, but I do have a question: are incidents up, or are more organizations deciding that a report is the right thing to do?

[Update: I wanted to point out interesting responses by Rich Mogull and Dissent.]

That’s an address I haven’t used in a very long time.

Well, I got a letter from BNY Mellon, explaining that they lost my data. The most interesting thing about it, I think, is where it was sent: to my mom. (Hi Mom!) I had thought that I’d moved all of my financial statements to an address of my own more than a decade ago. I’ve been meaning to call BNY and ask questions, but haven’t had time.

The letter is dated June 9, regarding a February 27th loss by Archive Systems, Inc. The three-plus month delay annoys me. Archive Systems isn’t named in the letter. I had to look at Data breach at New York bank possibly affecting hundreds of thousands of CT consumers to discover that.

The signup experience for the “Triple Alert Monitoring” from Experian was not awful, but it was pretty poor. It demanded lots of personal information and wasn’t clear about how that information would be used. Experian stuffed its lengthy terms and conditions into a scroll box that shows three lines at a time, clearly indicating that they don’t expect anyone to read it. Their web site silently relied on Javascript, and it wasn’t at all clear how long my enrollment lasts. I have little doubt I’ll start getting renewal notices in three months.

Incidentally, I’ve Been Mugged has a review of Triple Alert.

Cleared Traveler Data Lost

[Photo: finger on print reader]

Verified Identity Pass, Inc., which runs the Clear service, has lost a laptop containing information on 33,000 customers. According to KPIX in “Laptop Discovery May End SFO Security Scare,” the “alleged theft of the unencrypted laptop” exposed information including

names, addresses, birth dates and some applicants’ driver’s license numbers and passport information, but does not include applicants’ credit card information or Social Security numbers, according to the company.

We are also told:

The information is secured by two levels of password protection, the company reported.

Two levels of passwords. Wow. I guess you don’t need to encrypt if you have two levels of passwords.

The TSA suspended enrollment of new customers, but existing customers can still use the service. So if you stole the data and can use it, you’re Clear.

Update: They found the device. Chron article here. “It was not in an obvious location,” said a spokesperson.

Breaches & Human Rights in Finland

The European Court of Human Rights has ordered the Finnish government to pay out €34,000 because it failed to protect a citizen’s personal data. One data protection expert said that the case creates a vital link between data security and human rights.

The Court made its ruling based on Article 8 of the European Convention on Human Rights, which guarantees every citizen the right to a private life. It said that it was uncontested that the confidentiality of medical records is a vital component of a private life.

The Court ruled that public bodies and governments will fall foul of that Convention if they fail to keep data private that should be kept private.

The woman in the case did not have to show a wilful publishing or release of data, it said. A failure to keep it secure was enough to breach the Convention.

“Data blunders can breach human rights, rules ECHR” on the Pinsent Masons Out-Law blog.

Breach notice primary sources

Today on the Dataloss mailing list, a contributor asked whether states in addition to New Hampshire and Maryland make breach notification letters available on-line.
I responded thusly (links added for this blog post):

I know only of NH and MD. NY and NC have been asked to do it, but have no plans to. NJ won’t do it because the reports are held by the state police and not considered public. IN had that provision stripped from their revised law. I saw no evidence that ME has them on-line at the AG’s site. Unless I missed any, those are all the states with central reporting.
I personally have several hundred notices to NY and NC that I am slowly scanning and making available. Unfortunately, my site is off the net for probably a couple weeks.

A later response pointed out that Wisconsin publishes some data as well. Actually, so does New York, but it’s pretty measly.
I forgot to mention in my email that California also considered central reporting — including a web site — as part of an update to its breach law. We blogged about this at the time. I understand these features were cut because of lack of resources.
EC reader Iang made a perspicacious comment at the time:

At some stage we have to think about open governance being run by the people. That is, expect to see some quality control from open institutions, ones that arise for a need. E.g., blogs like this and other aggregators of info.

I am very happy to report that the Open Security Foundation yesterday announced just such a resource. The press release tells the story, but basically it’s crowd-sourcing information on breaches. I am very enthusiastic about getting my primary sources archive back on-line so that I can link with, and otherwise contribute to, this new DataLossDB.

Maryland Breach Notices

Case Number: 153504
Date Received: 06/09/08
Business Name: Argosy University
No. of MD residents:
Total breach size:
Information breached: name, social security number, addresses
How breach occurred: Laptop computer stolen from employee of SunGard Higher Education

Maryland Information Security Breach Notices are put online by the most forward-looking attorney general, Douglas F. Gansler.

I’m glad that they list case IDs on there. What with Attrition.org, the Identity Theft Resource Center, Privacy Rights Clearinghouse, Adam Dodge, Chris Walsh, and probably others I’m forgetting, it’s getting chaotic out there. We need a ‘CBE’ just to help us all cross-correlate.

Via “I’ve Been Mugged.”

Passport-peeking probably pervasive

Back in March, we wrote about unauthorized access to Barack Obama’s passport file.
At the time, a Washington Post article quoted a State Department spokesman:

“The State Department has strict policies and controls on access to passport records by government and contract employees”

The idea was that, while snooping might occur, it would be caught by controls put in place specifically to detect accesses to the records of high-profile people.
Well, as it turns out, the State Department may not be quite so good at detecting such accesses, or at following up (shocking, I know).
In a July 4 article, the Los Angeles Times reports:

A federal investigation of unauthorized snooping into government passport files has found evidence that such breaches may be far more common than previously disclosed, and the State Department inspector general is calling for an overhaul of the program’s management.
In a report issued Thursday, the inspector general found “many control weaknesses” in the department’s administration program, including what investigators said was a lack of sound policies on training staff, accessing electronic records and disciplining workers who break privacy rules.

According to the article, passport files may be viewed by over 20,000 government workers and contractors. In a sample of 150 celebrities chosen for examination by investigators, 85% had been accessed at least once. One was accessed over 100 times (!) in the last six years.
Amusingly, at a press conference held on July 4, State said that half of those who had access in March no longer have it. They were also unable to say whether spot-checks on detected accesses had taken place in the past. Put those together and you have a system where at least twice as many people had access as needed it, and privileged operations are recorded but the folks in charge don’t know if the audit trail is used.
The redacted report is available at the C-SPAN web site but not, as near as I can tell, at the State Department’s. Draw your own conclusions.

In the land of the blind..

From “PCI DSS Position on Patching May Be Unjustified”:

Verizon Business recently posted an excellent article on their blog about security patching. As someone who just read The New School of Information Security (an important book that all information security professionals should read), I thought it was refreshing to see someone take an evidence-based approach to information security controls.

First, thanks Jeff! Second, I was excited by the Verizon report precisely because of what’s now starting to happen. I wrote “Verizon has just catapulted themselves into position as a player who can shape security. That’s because of their willingness to provide data.” Jeff is now using that data to test the PCI standard, and finds that some of its best practices don’t make as much sense as the authors of PCI-DSS might have thought.

That’s the good. Verizon gets credibility because Jeff relies on their numbers to make a point. And in this case, I think that Jeff is spot on.

I did want to address something else relating to patching in the Verizon report. Russ Cooper wrote in “Patching Conundrum” on the Verizon Security Blog:

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates.

The trouble with this is that the assessment of patching is done by

…[interviewing] the key person responsible for internal security (CSO) in just over 300 companies for which we had already established a multi-year data breach and malcode history. We asked the CSO to rate how well each of dozens of countermeasures were actually deployed in his or her enterprise on a 0 to 5 scale. A score of “zero” meant that the countermeasure was not in use. A score of “5” meant that the countermeasure was deployed and managed “the best that the CSO could imagine it being deployed in any similar company in the world.” A score of “3” represented what the CSO considered an average deployment of that particular countermeasure.

So let’s take two CSOs, analytical Alice and boastful Bob. Analytical Alice thinks that her patching program is pretty good. Her organization has strong inventory management, good change control, and rolls out patches well. She listens carefully, and most of her counterparts say similar things. So she gives herself a “3.” Boastful Bob, meanwhile, has exactly the same program in place, but thinks a lot about how hard he’s worked to get those things in place. He can’t imagine anyone having a better process ‘in the real world,’ and so gives himself a 5.
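A toy Monte Carlo sketch (entirely my construction, not Verizon’s methodology, with made-up parameters) illustrates why this matters: even when better patching genuinely reduces incidents, adding Bob-style bias to self-reported scores shrinks the measurable difference between the “5” and “3” companies toward zero.

```python
import random

random.seed(0)

# Toy model: each company has a true patching quality q in [0, 1];
# incident probability falls linearly with q (a real effect). The CSO
# reports a 0-5 score mixing q with personal bias, as in the
# "analytical Alice vs. boastful Bob" example above.
def simulate(bias_sd):
    n = 5000
    incidents_by_score = {}
    for _ in range(n):
        q = random.random()
        incident = random.random() < (0.4 - 0.3 * q)  # better patching, fewer incidents
        score = round(max(0.0, min(5.0, 5 * q + random.gauss(0, bias_sd))))
        incidents_by_score.setdefault(score, []).append(incident)
    rate = lambda s: sum(incidents_by_score[s]) / len(incidents_by_score[s])
    # Incident-rate gap between top-rated ("5") and average ("3") companies.
    return rate(5) - rate(3)

gap_accurate = simulate(bias_sd=0.1)  # honest, well-calibrated ratings
gap_noisy = simulate(bias_sd=2.0)     # heavy self-assessment bias
print(gap_accurate, gap_noisy)
```

With calibrated ratings, the top-rated companies show a clearly lower incident rate; with biased ratings, the gap attenuates toward zero, which could easily read as “patching quality makes no statistically significant difference.”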

[Update 2: I want to clarify that I didn’t mean that Alice and Bob were unaware of their own state, but that they lack data about the state of many other organizations. Without that data, it’s hard for them to place themselves comparatively.]

This phenomenon doesn’t just affect CSOs. There’s a fairly famous line of research entitled “Unskilled and Unaware of It,” or “Why the Unskilled Are Unaware”:

Five studies demonstrated that poor performers lack insight into their shortcomings even in real world settings and when given incentives to be accurate. An additional meta-analysis showed that it was lack of insight into their errors (and not mistaken assessments of their peers) that led to overly optimistic social comparison estimates among poor performers.

Now, the Verizon study could have overcome this by carefully defining what a 1-5 meant for patching. Did it? We don’t actually know. To be perfectly fair, there’s not enough information in the report to make a call on that. I hope that they’ll make that more clear in the future.

Candidly, though, I don’t want to get wrapped around the axle on this question. The Verizon study (as Jeff Lowder points out) gives us enough data to take on questions which have been opaque. That’s a huge step forward, and in the land of the blind, it’s impressive what a one-eyed man can accomplish. I’m hopeful that as they’ve opened up, we’ll have more and more data, and more critiques of that data. It’s how science advances, and despite some misgivings about the report, I’m really excited by what it allows us to see.

Photo: “In the land of the blind, the one eyed are king” by nandOOnline, and thanks to Arthur for finding it.

[Updated: cleaned up the transition between the halves of the post.]

Iowa breach law arrives a bit early

On May 10, Iowa became the 42nd U.S. state (counting D.C. as a state) with a breach notification law. The law itself is not remarkable. If anything, it is notably weaker than many other states’ laws.
When can we expect to see the last stragglers finally pass their laws? Here’s a plot of each state’s date of law passage, expressed in days since the Choicepoint episode became public. The x-axis is logarithmic.
[Plot: each state’s breach-law passage date, in days since the ChoicePoint disclosure, on a logarithmic x-axis (breachlaws.png)]
Looks like a decent fit to me. In fact, a tad under 3% of the variance remains unexplained. Assuming that whatever accounts for this exponential decay remains for a while, the last state should have a law in place October 9, 2011 :^).
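The extrapolation works something like the following sketch. The dates here are invented placeholders, not the actual state data behind the plot: fit a line to the cumulative count of states against log(days since ChoicePoint), check how much variance it explains, and solve for the day the 51st jurisdiction’s law would arrive.

```python
import numpy as np

# Hypothetical illustration: suppose the k-th state passes its law
# days[k] days after the ChoicePoint story broke. A logarithmic x-axis
# looking linear suggests the model count = a*log(days) + b.
days = np.array([30, 45, 60, 90, 130, 200, 300, 450, 700, 1100])  # assumed data
count = np.arange(1, len(days) + 1)  # cumulative number of states with a law

# Least-squares fit of count against log(days).
a, b = np.polyfit(np.log(days), count, 1)

# R^2: fraction of variance in the counts explained by the fit.
pred = a * np.log(days) + b
r2 = 1 - np.sum((count - pred) ** 2) / np.sum((count - count.mean()) ** 2)

# Extrapolate: on what day does the fitted line reach 51 jurisdictions?
day_51 = np.exp((51 - b) / a)
print(round(r2, 3), int(day_51))
```

“3% of the variance unexplained” corresponds to an R² of about 0.97 on a fit like this; the October 2011 date in the post is the real-data analogue of `day_51`.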

Can You Hear Me Now?

Debix, Verizon, the ID Theft Research Center and the Department of Justice have all released really interesting reports in the last few days, and what makes them interesting is their data about what’s going wrong in security.

This is new. We don’t have equivalents of the National Crime Victimization Surveys for cyberspace. We don’t have FBI-compiled crime statistics. What we have is lots of people with lots of opinions, making lots of noise. It can be hard to get your message heard over the noise.

Tufte talks about credibility as one important outcome of good visualization: showing your data effectively can make your case for you. In security, we haven’t shown our work very often. That’s why, in The New School, Andrew and I made gathering and analyzing good data two of our key closing points. Some people have suggested they wanted more specifics, and I’m now glad that we didn’t provide them. This outpouring of data makes this a tremendously exciting time to be in security.

Sharing data gets your voice out there. Verizon has just catapulted themselves into position as a player who can shape security.

That’s because of their willingness to provide data. I was going to say give away, but they’re really not giving the data away. They’re trading it for respect and credibility.

Verizon, we can hear you now. We can also hear Debix, the ITRC and the DoJ. Because they’re buying credibility with their data.


(Disclaimer: I’m a Debix shareholder, and I reviewed a draft of their report.)


[Update: Verizon’s report is getting lots of commentary. Interesting bits from Rich Bejtlich, Chris Wysopal, the Hoff, and Slashdot.]

Department of Justice on breach notice

There’s an important new report out from the Department of Justice, “Data Breaches: What the Underground World of “Carding” Reveals.” It’s an analysis of several cases and the trends in carding and the markets which exist. I want to focus in on one area, which is recommendations around breach notification:

Several bills now before Congress include a national notification standard. In addition to merely requiring notice of a security breach to law enforcement, it is also helpful if such laws require victim companies to notify law enforcement prior to mandatory customer notification. This provides law enforcement with the opportunity to delay customer notification if there is an ongoing criminal investigation and such notification would impede the investigation. Finally, it is also helpful if such laws do not include thresholds for reporting to law enforcement even if certain thresholds – such as the number of customers affected or the likelihood of customer harm – are contained within customer notification requirements. Such thresholds are often premised on the large expense of notifications for the victim entity, the fear of desensitizing customers to breaches, and causing undue alarm in circumstances where customers are unlikely to suffer harm. These reasons have little applicability in the law enforcement setting, however, where notification (to law enforcement) is inexpensive, does not result in reporting fatigue, and allows for criminal investigations even where particular customers were not apparently harmed. (“Data Breaches: What the Underground World of “Carding” Reveals,” Kimberly Kiefer Peretti, U.S. Department of Justice, Forthcoming in Volume 25 of the Santa Clara Computer and High Technology Journal, page 28.)

I think such reports should go not only to law enforcement, but to consumer protection agencies. Of course, this sets aside the question of “are these arguments meaningful,” and potentially costs us an ally in the fight for more and better data, but I’m willing to take small steps forward.

Regardless, it’s great to see that the Department of Justice is looking at this as something more than a flash in the pan. They see it as an opportunity to learn.

Paper Breach

The Missing Docs

The BBC reports in “Secret terror files left on train” that an

… unnamed Cabinet Office employee apparently breached strict security rules when he left the papers on the seat of a train.

A fellow passenger spotted the envelope containing the files and gave it to the BBC, who handed them to the police.

We are also told:

Just seven pages long but classified as “UK Top Secret”, this latest intelligence assessment on al-Qaeda is so sensitive that every document is numbered and marked “for UK/US/Canadian and Australian eyes only”, according to our correspondent.

The person who lost them is

… described as a senior male civil servant, works in the Cabinet Office’s intelligence and security unit, which contributes to the work of the Joint Intelligence Committee.

His work reportedly involves writing and contributing to intelligence and security assessments, and he has the authority to take secret documents out of the Cabinet Office – so long as strict procedures are observed.

Apparently the documents were not encrypted. Cue rimshot.

CSO’s FUD Watch

From “Introducing FUD Watch”:

Most mornings, I start the work day with an inbox full of emails from security vendors or their PR reps about some new malware attack, software flaw or data breach. After some digging, about half turn out to be legitimate issues while the rest – usually the most alarming in tone – turn out to be threats that have little or no impact on the average enterprise.

The big challenge for security writers is to separate the hot air from the legitimate threats. This column aims to do just that.

But for this to work, audience participation is a must.

I’m highly in favor of reducing the FUD. I hope that Bill Brenner’s efforts will help constrain and shame some of the worst of the FUD. However, it won’t go all the way. Bill admits that he’s working from opinion not data. In The New School, we talk about how we need data on how often various problems actually manifest. When we get that data, we won’t need as much audience participation. In the meantime, go mock the FUDsters.

Does the UK need a breach notice law?

Chris Pounder has an article on the subject:

In summary, most of the important features of USA-style, security breach notification law are now embedded into the guiding Principles of the Data Protection Act. Organisations risk being fined if they carelessly loose personal data or fail to encrypt personal data when they should have done. Individuals are protected because they have simple and free access to the Information Commissioner, who has powers to investigate any complaint and fine. Compensation for aggrieved individuals could arise from any significant security lapse.

In other words, all the features of a security breach notification law are now found in existing data protection legislation. (“Why we don’t need a security breach notification law in the UK.”)

It’s an interesting analysis that breaches are already covered, and I think he’s probably right. However, he’s not certainly right. Attorneys are paid (in part) to argue, and I think most decent attorneys could construct an argument that the law is unclear.

I think there are two strong reasons to support a breach disclosure law: clarity and learning.

The argument for clarity is just that: the law may not be clear, and it will save U.K. organizations money to have a simple, clear law on the subject. (It can’t cost more for notifications, because that cost, according to Pounder, is already present. Similarly, there’s no increase in liability, that cost is already present.) But with a clear law, attorneys can’t charge as much for analysis.

The second reason for a law is to charge a public agency with collecting and sharing information about what happened and why.

As organizations go through this pain, we should learn from it. Not learning from it entails going through it again and again.

There’s a third reason, which is that even in the case of clear law, which exists in the US, only 3 of 21 retailers breached had told their customers. (Based on a Gartner survey, n=50.)

[Gartner analyst Avivah] Litan didn’t know whether the retailers had broken state laws by not informing their customers of the breaches, but she said it was a possibility. Some of the breaches may have happened before applicable state laws were in effect. (“Most Retailer Breaches Are Not Disclosed, Gartner Says.”)

Update: A friend in the UK pointed out privately that I could have been clearer about the evolution of common law, and how decisions establish law. The UK has not yet had many official rulings, and so both the law and practice are evolving rapidly. Their courts and regulators may look to other countries for guidance, and find that prompt notification is essential, both under many US laws and under evolving Canadian jurisprudence. For example, the “[British Columbia Office of Information Privacy Commissioner] says 41 days too long for breach notification.”