Quantum Progress


What is it about the word “quantum” that sucks the brains out of otherwise reasonable people? There has to be some sort of Heisenberg-Schrödinger Credulity Principle that makes all the ideons in their brains go spin-up at the same time, and I’m quite sure that the Many Worlds Interpretation of it has the most merit. (In case you’re a QM n00b, the ideon is the quantum unit of belief.) Fortunately, there seems to be some sanity coming to reporting about quantum computing.

Just about every quantum computing article has a part in it that notes that there are quantum algorithms to break public crypto. The articles breathlessly explain that this means that SSL will be broken and the entire financial world will be in ruins, followed by the collapse of civilization as we know it. Otherwise sensible people focus on this because there’s very little to sink your teeth into in quantum computing otherwise. Even certified experts know that they don’t know what they don’t know.

Scott Aaronson has a good article in Scientific American called “The Limits of Quantum Computers” (only the preview is free, sorry) that gives a good description of what quantum computers can’t do. I’m pleased to see this. SciAm has been a HSCP-induced quantum cheerleader over the last few years.

I have been doing some research on the claims of quantum computing. I decided to pick the specific factoring ability of quantum computers, and produce some actual numbers about how we might expect quantum computing to develop. In other words, I’m going to be a party pooper.

The crypto-obviating algorithms in question are Shor’s algorithm for factoring and an algorithm he developed for discrete logs. I was surprised to learn that Shor’s algorithm requires 72k³ quantum gates to be able to factor a number k bits long. Cubed is a somewhat high power. So I decided to look at a 4096-bit RSA key, which is the largest that most current software supports — the crypto experts all say that if you want something stronger, you should shift to elliptic curve, and the US government is pushing this, too, with their “Suite B” algorithms.

To factor a 4096-bit number, you need 72 × 4096³, or 4,947,802,324,992, quantum gates. Let’s just round that up to an even 5 trillion. Five trillion is a big number. We’re only now getting to the point that we can put about that many normal bits on a disk drive. The first thing this tells me is that we aren’t going to wake up one day and find out that someone’s put that many q-gates on something you can buy at Fry’s or from a white-box Taiwanese special.
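
If you want to check that arithmetic yourself, here is a minimal sketch, assuming only the 72k³ figure quoted above (the constant and exponent come from the article’s own numbers, not from any particular paper):

```python
# Back-of-the-envelope check of the gate count quoted above.
# Assumes the 72 * k**3 figure for Shor's algorithm cited in the text.

def shor_gate_estimate(key_bits: int) -> int:
    """Rough quantum-gate count to factor a key_bits-long number."""
    return 72 * key_bits ** 3

if __name__ == "__main__":
    k = 4096
    print(f"{k}-bit key: {shor_gate_estimate(k):,} gates")
    # 4096-bit key: 4,947,802,324,992 gates  (call it 5 trillion)
```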

A complication in my calculations is the relationship between quantum gates and quantum bits. For small numbers of qubits, you get about 200 qugates per qubit. But qubits are rum beasts. There are several major technologies that people are trying to tease qubits out of. There are the adiabatic technologies that D-Wave is trying. There are photonic systems, quantum dots, and who knows how many semiconductor-based methods.

It isn’t clear that any of these have any legs. Read Scott Aaronson’s harrumphing at D-Wave, his more pointed yet sympathetic faint praise, and these educated doubts on photonics. Interestingly, Aaronson says that adiabatic quantum computers like D-Wave’s need k¹¹ gates rather than k³ gates, which pretty much knocks them out of viability altogether, if that’s so.

But let’s just assume that they all work as advertised, today. My next observation is that we’re probably looking at billions of qubits to be able to get trillions of q-gates. My questions to people who know about the relationship between quantum gates and quantum bits yielded that the real experts don’t have a good answer, but that the 200:1 ratio is more likely to go down than up. Intel’s two-billion-transistor “Tukwila” chip comes out this year. Five trillion is a big number. We are as likely to need 25 billion qubits to factor that number as any other good guess. Wow.
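
A quick sketch of where that 25 billion comes from, assuming the rough 200-gates-per-qubit ratio mentioned above actually holds (the experts I asked treat it as a guess that is more likely to shrink than grow):

```python
# Rough qubit estimate under the ~200 gates-per-qubit ratio quoted above.

GATES_NEEDED = 72 * 4096 ** 3   # ~5 trillion, from the earlier estimate
GATES_PER_QUBIT = 200           # the small-system ratio cited in the text

qubits_needed = GATES_NEEDED / GATES_PER_QUBIT
print(f"{qubits_needed:,.0f} qubits")   # ~24,739,011,625 -- call it 25 billion
```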

The factoring that has been done on today’s quantum computers is of a four-bit number, 15. If you pay attention to quantum computing articles, you’ll note they always factor 15. There’s a reason for this. It’s of the form (2ⁿ−1) × (2ⁿ+1). In binary, 2ⁿ−1 is a string of all 1 bits. A number of the form 2ⁿ+1 is a 1 bit, followed by a string of 0s, and then a 1 again. Numbers of this special form are easy to factor, and they are not going to occur in a real-world public key.

This is not a criticism, it’s an observation. You have to walk before you can run, and you have to factor special forms before you can factor the general case. Having observed that, we’ll just ignore it and assume we can factor any four-bit number today.
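
For the curious, here is a minimal check of that special form, just restating the observation above in code:

```python
# Why 15 keeps showing up: it is (2**n - 1) * (2**n + 1) with n = 2,
# i.e. 3 * 5, and both factors have trivially simple binary patterns.

n = 2
low, high = 2 ** n - 1, 2 ** n + 1
print(low, bin(low))    # 3  0b11   (all 1 bits)
print(high, bin(high))  # 5  0b101  (a 1, zeros, then a 1)
print(low * high)       # 15
```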

Let’s presume that quantum computers advance in some exponential curve that resembles Moore’s Law. That is to say that there is going to be a doubling of quantum gates periodically, and we’ll call that period a “generation.” Moore’s specific observation about transistors had a generation every eighteen months.

The difference between factoring four bits and factoring 4096 bits is 30 generations. In other words, 72 × 4³ × 2³⁰ = 72 × 4096³. If we look at a generation of eighteen months, then quantum computers will be able to factor a 4096-bit number in 45 years, or on the Ides of March, 2053.
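
Here is the generation arithmetic spelled out, a sketch that assumes 18-month doublings and takes 2008 as the starting year (my assumption, consistent with the “this year” references above):

```python
# Going from a 4-bit to a 4096-bit target multiplies the gate count by
# (4096 / 4) ** 3 = 2 ** 30, i.e. 30 doublings ("generations").

import math

gates_4bit = 72 * 4 ** 3
gates_4096bit = 72 * 4096 ** 3

generations = math.log2(gates_4096bit / gates_4bit)
years = generations * 1.5              # 18 months per generation

print(generations)                     # 30.0
print(years)                           # 45.0 years
print(2008 + years)                    # 2053.0
```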

This means to me that my copy of PGP is still going to be safe to use for a while yet. Maybe I oughta get rid of the key I’ve been using for the last few years, but I knew that. I’m not stupid, merely lazy.

I went over to a site that will tell you how long a key you need to use, http://www.keylength.com/. Keylength.com uses estimates made by serious cryptographers for the life of keys. They make some reasonable assumptions and perhaps one slightly-unreasonable assumption: that Moore’s Law will continue indefinitely. If we check there for how long a 4096-bit key will be good for, the conservative estimate is (drum roll, please) — the year 2060.

I’m still struck by how close those dates are. It suggests to me that if quantum computers continue at a rate that semiconductors do, they’ll do little more than continue the pace of technological advancement we’ve seen for the past handful of decades. That’s no mean feat — in 2053, I doubt we’re going to see Intel trumpeting its 45 picometer process (which is what we should see after 30 generations).

I spoke to one of my cryptographer friends and outlined this argument to him. He said that he thinks the pace of advancement will pick up and be faster than a generation every eighteen months. Sure. I understand that, myself. The pace of advancement in storage has been a generation every year, and in flash memory it’s closer to every nine months. It’s perfectly conceivable that quantum computing will see horrible progress for the next decade and then whoosh off with a generation every six months. That would compress my 45 years into 25, which is a huge improvement but still no reason to go begging ECRYPT for more conferences.
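
A quick sketch of that compressed timeline, under my reading of the scenario (roughly a decade of little progress, then the remaining 30 doublings at six months apiece):

```python
# The "whoosh" scenario: a slow decade, then fast six-month generations.

slow_years = 10          # assumed stall with little real progress
fast_generations = 30    # doublings still needed after the stall

print(slow_years + fast_generations * 0.5)   # 25.0 years, versus 45 at 18 months
```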

On the other hand, it’s just as conceivable that quantum computing will end up on the Island of Misfit Technologies, along with flying cars, personal jetpacks, Moon colonies, artificial intelligence, and identity management.

But I also talked to a bigwig in Quantum Information Theory (that’s quantum computing and more) and gave him a sketch of my argument. I heard him speak about Quantum Information and he gave the usual Oooooo Scary Quantum Computers Are Going to Factor Numbers Which Will Cause The Collapse of All Financial Markets And Then We Will All DIEEEEE — So That’s Why We Need More Research Money boosterism.

He wouldn’t let me attribute anything to him, which I understand completely. We live in a world in which partisanship is necessary, and if he were seen putting down the pompoms, he’d be fired. Telling middle-aged technocrats that the math says their grandkids are going to see quantum computers shortly before they retire will cause the research money to dry up, and if that happens then — well, the world won’t end. And then where would we be?

Nonetheless, he said to me sotto voce, “There’s nothing wrong with your math.”

Photo is a detail from “Shop Full” by garryw16.

Are We Measuring the Right Things?


One of the reasons that airline passengers sit on the tarmac for hours before takeoff is how the Department of Transportation measures “on-time departures.” An on-time departure is measured by push-back from the gate, not by wheels leaving the tarmac. (Airlines argue that the former is in their control.) If you measure the wrong things, you create incentives for bizarre behavior.

Which is why I was fascinated to read the new GAO report, “Information Security: Although Progress Reported, Federal Agencies Need to Resolve Significant Deficiencies.”

While progress may be reported, PogoWasRight calls out:

The number of security breaches on government computers has quadrupled in the last 2 years – from just over 3,500 in fiscal 2005 to just over 13,000 in fiscal 2007.

If that’s progress, maybe we need some regression?

More seriously, I think it’s great progress that we are talking about the failure rates. Now we need to start to question the things being measured that allow the GAO to summarize that state of affairs as progress.

I wonder, where else are we measuring the wrong things?

[Update: I was measuring the wrong agency.]

Measuring the Wrong Stuff

There’s a great deal of discussion out there about security metrics. There’s a belief that better measurement will improve things. And while I don’t disagree, there are substantial risks from measuring the wrong things:

Because the grades are based largely on improvement, not simply meeting state standards, some high-performing schools received low grades. The Clove Valley School in Staten Island, for instance, received an F, although 86.5 percent of the students at the school met state standards in reading on the 2007 tests.

On the opposite end of the spectrum, some schools that had a small number of students reaching state standards on tests received grades that any child would be thrilled to take home. At the East Village Community School, for example, 60 percent of the students met state standards in reading, but the school received an A, largely because of the improvement it showed over 2006, when 46.3 percent of its students met state standards. (The New York Times, “50 Public Schools Fail Under New Rating System”)

Get that? The school that flunked has more students meeting state standards than the school that got an A.

There are two important takeaways. First, if you’re reading “scorecards” from somewhere, make sure you understand the nitty-gritty details. Second, if you’re designing metrics, consider what perverse incentives and results you may be creating. For example, if I were a school principal today, every other year I’d forbid teachers from mentioning the test. That year’s students would do awfully, and then I’d have an easy time improving the next year.
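
Here is a toy sketch of that failure mode. The percentages are the ones from the Times story quoted above; the grading rule and thresholds are made up for illustration and stand in for the city’s actual, more complicated formula:

```python
# Hypothetical grading rule that rewards year-over-year improvement only,
# ignoring absolute performance. Thresholds are invented for illustration.

def improvement_grade(last_year_pct: float, this_year_pct: float) -> str:
    gain = this_year_pct - last_year_pct
    if gain >= 10: return "A"
    if gain >= 5:  return "B"
    if gain >= 1:  return "C"
    return "F"

# High-performing but flat school (Clove Valley-style numbers):
print(improvement_grade(86.5, 86.5))   # "F" despite 86.5% meeting standards
# Lower-performing but improving school (East Village-style numbers):
print(improvement_grade(46.3, 60.0))   # "A" with only 60% meeting standards
```

Under a rule like this, the tank-one-year-then-rebound strategy above is the obvious way to game the metric.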

Defending Metrics

Yesterday, I attacked metrics, claiming that the way they are being used today, they are useless to upper management and don’t relate the value of the InfoSec team to the business. While I stand behind that claim, I also believe that a lot of the metrics being gathered today are very useful to technical management, especially those with operational responsibilities. With that in mind, I’d like to point our readers to a newish blog, Security Retentive, by Andy Steingruebl. Andy and I worked together way back when, and I can’t say enough nice things about him. On Sunday, Andy talked about building effective metrics. In this case, he talked about vulnerability management, though he promises to cover anti-virus software and software security in later posts. I for one will be on the lookout for the follow-ups. Andy covers a good strategy for launching and measuring a vulnerability management program. I don’t want to steal his thunder, so go read what he has to say.

Attacking Metrics

Last week I had the pleasure of having lunch with Alex Hutton from RMI, and we got to talking about metrics. Specifically, we talked about how most metrics that we security folks come up with are, well, boring and effectively useless to upper management. At best they are focused on technical management such as the CIO and CSO. Like much of the rest of our industry, we metrics folks have again failed to relate our services to the business at large. Yesterday, Alex posted a great article on the sad state of metrics in our industry. I claim no credit whatsoever for any of Alex’s content (his thoughts here go far deeper than anything we covered over bowls of pho), but I heartily encourage you all to read what he has to say, as he covers far more ground than what I’ve hinted at above.