On the TJX Breach

So there’s been a stack of news stories on TJX and the issues with their database. I want to comment on an aspect of the story that isn’t getting a lot of coverage. In the Cincinnati Enquirer story, “Fifth Third has role in TJX hole,” Mike Cook is quoted as saying “If you are a consumer and you’re part of the TJX breach, you are hoping it’s 10 million people because the chance of your name being misused goes down considerably depending on the size of the data breach.”

I don’t buy it. What we’re doing is telling criminals they need to scale up their exploit techniques and networks. We did that with spamming and phishing. Bad idea.

Some other news tidbits I found interesting:

It’s my understanding that the shopping bags in the photo aren’t full of clothes. (Photo from here, original context unclear.)

[Update: by ‘these things’ I was intending to imply not only credit card issues, but the gamut of information security issues that might arise. If you think we do have economic advice to give, consider submitting a paper to the workshop on the economics of information security: they explicitly ask for papers on ‘optimal security investment’]

11 thoughts on “On the TJX Breach”

  1. “No cost-effective ways to prevent…”
    Yes there are. It’s called PCI DSS, the merchant agreement they signed a while ago, and a few other documents. All of them state that they MUST NOT save track 2 data, and MUST NOT save CVV for ANY reason. That’s extremely cheap – you have no right to save the data, problem solved.
    Except that the people who sign the contracts are rarely the people who implement these systems. Until someone has the balls to chop off a major retailer’s access to credit card facilities, this type of problem will occur again and again.
    If you see a shop double-swipe you, ask them to void the transaction and report them to your bank. This is a merchant agreement breach, and they CAN have their CC facilities terminated for it. Unlikely, but it’s important that as consumers we do not allow our cards to be read by insecure devices. There are a lot of controls in place to ensure that PINs and track 2 data are not stored in an authorized device, and there’s a good reason for it.
    Plus, at OWASP, we’ve had advice for 2+ years in the OWASP Guide that they shouldn’t be doing this stuff, amongst other things (under the heading “Handling CC numbers securely”). It’s free and easy to get. PCI DSS is also free and easy to get. There is NO excuse for this kindergarten-level attack, particularly from larger corps who can afford to be reviewed and, more importantly, can afford to remediate the issues.
    Andrew
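The storage rule Andrew cites is concrete enough to sketch. A minimal, hypothetical example of what “don’t store it” looks like in practice – the field names and record shape here are illustrative, not from any real payment API:

```python
# Hypothetical sketch of the PCI DSS storage rule described above:
# after authorization, track data and the CVV must never be persisted;
# at most a masked PAN (first 6 / last 4 digits) may be kept.

def mask_pan(pan: str) -> str:
    """Keep the first 6 and last 4 digits; mask the rest."""
    digits = pan.replace(" ", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

def sanitize_for_storage(auth_response: dict) -> dict:
    """Drop fields that must never be stored; mask the PAN."""
    forbidden = {"track1", "track2", "cvv", "cvv2", "pin_block"}
    record = {k: v for k, v in auth_response.items() if k not in forbidden}
    if "pan" in record:
        record["pan"] = mask_pan(record["pan"])
    return record

stored = sanitize_for_storage({
    "pan": "4111111111111111",
    "track2": ";4111111111111111=25121015432112345678?",
    "cvv2": "123",
    "amount": "59.99",
})
# `stored` now holds only the masked PAN and the amount.
```

The merchant keeps what it needs for reconciliation (amount, masked PAN) and nothing an attacker could replay.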

  2. Andrew, you have indeed proven the original point:

    No cost-effective ways to prevent…

    What you wrote doesn’t work, by your own admission:

    Except that the people who sign the contracts are rarely the people who implement these systems.

    So it is too expensive to put in place; it isn’t a prevention mechanism. Your proposed solutions won’t work either because they are too expensive.
    Adam is correct: we have no good advice to give, so we cannot (or should not) pass laws to allocate liability according to our “good advice.” And this won’t change until the market finds some better solutions.

  3. Adam – you say you “don’t buy it” as if what Cook said is false, but then provide a reason that doesn’t match, i.e. that you think this is a bad thing to tell the bad guys.
    So, do you “not buy it” because you believe Cook was wrong, or because you don’t want the bad guys tipped off about something they already know? And if you believe Cook was wrong, do you have a couple of scenarios to work through that have to do with, say, the number of likely perpetrators, the distribution of the cc info, the available time to exploit the cc, and the like? Or do you have evidence that abuse of cc’s in these cases is severely under-reported, to the tune of (presumably) thousands and/or millions not realizing it?
    Pete

  4. Ian,
    At design time, it is as cheap to do something securely as it is to do it insecurely. If you add the costs associated with a breach like TJX’s, it is, in fact, a lot cheaper to do it securely. If a business takes our advice (and that of PCI’s stuff, for example), then it’s an opportunity cost NOT to be secure.
    If you are plumbing a house, you need to know what your local standards and code are. This stops the house from being destroyed in a preventable flood.
    There is no difference in my mind between an unlicensed plumber who does not know (or care) about standards and “code” and the unqualified businesses, architects, designers, and devs who know how to process a transaction but not how to obey the simplest of instructions in our “code” – the PCI DSS. It’s not hidden, it’s not hard to do. It’s not expensive to comply, particularly if you take the simplest path and do NOT store credit card details, as it asks. In fact, PCI DSS compliance if you don’t store customer / cc details is extremely cheap – cheaper than doing it the wrong way.
    I personally think that they are the right folks to own the blame and thus the costs and be liable for their stupidity in not asking two basic questions:
    a) What are the minimum standards I have to comply with?
    b) Do I know how to do this?
    If the answer to (a) or (b) is “I don’t know”, it’s pure negligence not to get someone in who does know what to do.
    Andrew

  5. Pete,
    We have little good information about the size of the networks, or how stolen data is exploited. So what I don’t buy is Mike’s claim that we should want breaches to be larger because the odds of any one of us being exploited are lower. I don’t buy it on a current basis, and I don’t buy it on a “careful what you wish for” basis.
    Andrew,
    I don’t agree that “it is as cheap to do something securely as it is to do it insecurely.” I agree that it’s cheaper to do it right the first time, but doing things securely requires experts, and giving those experts time to be involved in the decisions so they make the right ones. If the answer to (b), “Do I know how to do this?”, is “go hire an expert,” then you need to invest in expertise. It may be cheaper to do so than not, but making that call itself requires knowing how to evaluate an expert – should I just go hire a CISSP, or IBM?
    Adam

  6. Adam,
    CISSPs are a waste of time and money. In credit card space, PCI Security Standards has defined what they consider an expert, a Qualified Security Assessor. I have problems with this program as it allows non-programmers to audit something outside their area of expertise, but as they set the standards, that is the gold standard … today.
    https://www.pcisecuritystandards.org/resources/qualified_security_assessors.htm
    I am working with … a bunch of nice folks … on a major forthcoming project which will address the lack of qualified web app auditors. Once this is publicly announced, I will point to it on my blog.
    Adam, we’re going to have to disagree with costing of
    a) architecting a secure solution from scratch with associated verification
    vs
    b) writing an insecure app, waiting for a problem, paying auditor costs, paying PR spin doctors, paying legal fees, paying class action outcomes (such as will happen with CardSystems), pay … pay … pay.
    (a) is far, far cheaper. On a typical card processing app of say 100,000 to 250,000 lines of code, you’re looking at say two weeks to a month of security architecture, 1 to 2 weeks of secure architecture review, 2 to 4 weeks of code review (depending on depth), and 1 week of penetration testing.
    So call it a quarter of a man-year in security architecture / verification, costing approximately $50-100k depending on how you fund it. That 100,000 to 250,000 line processing app will take 10-50 programmers at least a year to write. These systems are not simple; they are very specialized. Often they process billions of dollars of transactions a year, more if it’s a common cc gateway product. They can afford to get it right and, more to the point, they must get it right.
    In no other field can unqualified folks do their “thing”, get away with substandard, non-compliant rubbish, and face nil consequences stemming from their negligence and lack of skill. We as software engineers must enforce this, or we will ALWAYS lose. It’s time to make a stand.
    All of you doubters need to meet someone who has to check their identity and credit history every day, fax a list of fraudulent transactions to their financial institution(s), deal with new loans and attempted fraud, carry notes when they travel to say that their ID has been compromised and they are the REAL John K. Smith, and endure the hassle of getting credit when they want it. Often, being the victim of identity theft means a lifelong fight against bureaucracy. Multiply this by hundreds of thousands to millions of individuals every year. It is no way to live. These folks should not have to bear the cost of someone else’s negligence and live in fear of sheriffs coming to the door to repossess cars and items for debts that do not exist.
    Such negligence should be illegal, and the cost should be borne by those who demonstrated negligence by not taking the necessary care doing what they are paid to do properly. It is an overhead of writing this type of software.
    The costs in comparison are miniscule. If you’re a mom and pop shop, you should buy your transaction processing through your bank. It’s cheap and secure. If you a TJX, you can afford qualified security architects and QSAs.
    Andrew
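Andrew’s week counts above can be totted up as a quick back-of-envelope check (his numbers; only the arithmetic is mine):

```python
# Back-of-envelope version of the estimate above: security work on a
# 100k-250k line card-processing app, in weeks (low, high).
effort_weeks = {
    "security architecture": (2, 4),
    "secure architecture review": (1, 2),
    "code review": (2, 4),
    "penetration testing": (1, 1),
}
low_weeks = sum(lo for lo, _ in effort_weeks.values())
high_weeks = sum(hi for _, hi in effort_weeks.values())
print(f"{low_weeks}-{high_weeks} weeks of security work")
# Roughly a quarter of a work-year at the top end, against the
# 10-50 programmer-years he estimates for building the app itself.
```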

  7. Andrew,
    Earlier you claimed “At design time, it is as cheap to do something securely as it is do it insecurely.”
    I disagreed. Now you’ve moved to a TCO argument, which I somewhat buy. But that cost-effectiveness isn’t obvious.
    Further, put yourself in the shoes of a CEO. He searches on CISSP and finds your comment. As to the value of a PCI QSA, see Ed Moyle’s comments on Security Curve. How does he evaluate security claims? It all gets very expensive very quickly. The payoff may not be there (the externalities are).
    I’m not saying this isn’t worth doing – I do think that it’s hard to make the economic case, and I think that a new law passed today would likely be another Sarbox: expensive and difficult to comply with, and of questionable security value.
    I have no idea why you think I’m unsympathetic to those who have been victimized by identity fraud. But deep identity fraud is orthogonal to PCI and how CC#s are stored.

  8. The optimal investment, given a comatose PCI watchdog (regardless of whether it has teeth) and most damage being external to the retailer, is darn close to zero, it seems.
    Why not force the costs to be internalized? Because retailers are insufficiently informed to figure out how to cut costs? A little market discipline can be mighty educational.

  9. @Andrew –
    Your two choices, a and b, are not mutually exclusive, right? In fact, with so much hindsight bias in this area, there is more a reverse cause/effect phenomenon – if b happens, then you must not have done a.
    Oh, and don’t forget to discount b by the probability that you will be compromised, which is actually very low from what I can tell. Remember, this is decision-making for future possibilities, not past ones.
    (Btw, loved the whole heartstrings portion of your comment, especially since it isn’t directly related, as Adam points out.)
    Pete
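Pete’s discounting point is just an expected-value comparison. A hedged sketch, with entirely made-up probabilities and dollar figures:

```python
# Compare (a) paying for security up front against (b) the *expected*
# cost of a breach, discounted by the probability of being compromised.
# All numbers here are invented for illustration.

def expected_cost(p_breach: float, breach_cost: float, upfront: float = 0.0) -> float:
    """Up-front spend plus probability-weighted breach cost."""
    return upfront + p_breach * breach_cost

secure = expected_cost(p_breach=0.002, breach_cost=5_000_000, upfront=100_000)
insecure = expected_cost(p_breach=0.02, breach_cost=5_000_000)
# With these assumed numbers the insecure path comes out cheaper in
# expectation, which is the point: a low base rate of compromise can
# make under-investment look "rational" to the retailer.
```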

  10. Andrew:
    I disagree. But I have been where you are … and it took me a long time to work out why I disagree. Basically, we tried that and it didn’t work. For many reasons; but let’s pick some.

    1. your theory only works if, when you build the system, you know how the attacker will attack it. Basic problem of time-folding here.
    2. Plumbing: last I checked, water has followed the laws of gravity going back to the time “plumbum” was coined. Fraud, however, flows uphill, sideways, round in little spirals, and up one’s nose … as needed.
    3. How on earth are you going to convince the world that *your standard* will solve the problem when the last 100 have not? You might be right … but …
    4. Are you saying that the authors of the PCI DSS are ready to take on the liability for any unfortunate consequences? If not, what is your basis for saying that others take on this liability?
    5. If you are saying it is pure negligence to not get the right guys in to fix this, does that mean you’ll support us in suing the authors of various security standards back into the stone age?

    Sorry, rushed reply, I never realised anyone was interested 🙂

Comments are closed.