The Case Against Quantum Computing
June 17, 2019 11:22 AM

 
The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future.”

Stop me if you’ve heard this one...
posted by TedW at 11:39 AM on June 17, 2019 [1 favorite]






On the bright side, by the time quantum computing works the enormous amount of energy needed for the cryogenics will be cheaply and cleanly supplied by fusion reactors.

I've watched quantum computing develop since the turn of the century, and I don't expect anything exciting to happen for around the same period again - if then.
posted by Devonian at 11:56 AM on June 17, 2019 [9 favorites]


On the one hand we have fusion energy, which appears to still be 20 years out and holding, on the other we have AI which has finally taken off. So where are we here? No idea, but on that very page there was an ad for a "free quantum cloud service," which, besides being a perfect draw in buzzword bingo, might signal that progress has been made. Whether the D-Wave processor it employs exhibits any quantum computing effects remains controversial.
posted by sjswitzer at 12:00 PM on June 17, 2019 [2 favorites]


on the other we have AI which has finally taken off

Has it, though? What's taken off is Machine Learning applied to relatively narrow and well-constrained problems. Which is very useful, and very cool, but I haven't seen any indication that this field is building towards a general AI.
posted by tobascodagama at 12:14 PM on June 17, 2019 [15 favorites]


sjswitzer: I'm still skeptical about how much AI has "taken off". We're good at training computers to evaluate and make decisions on data sets, but all too often in a way that just reaffirms the biases of the people creating the training data sets, the people creating the algorithms, or both. I'm with Maciej Ceglowski when he says "Machine learning is like money laundering for bias". The problem is that in these limited domains, AI isn't as good as real humans; it's just faster at making the same mistakes, and it sometimes creates new ones. For example, image recognition AIs are easily fooled by unexpected input, by simple image rotation, or even by a sticker.
posted by SansPoint at 12:15 PM on June 17, 2019 [17 favorites]


Quantum computing will arrive when some marketing team decides to gussy up something as "quantum computing", the same way we got hoverboards and AI.
posted by oulipian at 12:15 PM on June 17, 2019 [6 favorites]


Sidenote - The second illustration (direct link) is a reference to the Flammarion engraving.
posted by azlondon at 12:16 PM on June 17, 2019 [6 favorites]


From Zeek321's link - Scott Aaronson's response

"You can see my 2013 responses to Dyakonov here. Is there any new argument in this latest piece, which would require a new response?

He’s right that the applications of QC have often been grotesquely overhyped in the press (but any reader of this blog surely knew that). He’s right that building a scalable QC is unbelievably hard. He’s wrong in almost everything he says about fault-tolerance, with the most egregious howler being that no one has any idea how to account for imprecision in preparing the initial states and applying the gates. He apparently still doesn’t understand that the fault-tolerance theorem handles such imprecisions in exactly the same way it handles environmental decoherence—it all gets folded into the same error budget—and the implication that no one in QC would’ve noticed that gates are imperfect is laughable.

(Also, he’s badly out of date when he says the best current QC experiments are with 5 qubits: has he not heard that IBM has a 20-qubit device currently available for public use? That Google, IonQ, and others have likewise gotten good results with 20+ qubit experiments? What about the 50+ qubit experiments of Misha Lukin and others with optical lattices?)

Truthfully, though, in the years before quantum computational supremacy first gets convincingly demonstrated, I’m happy to have as many Dyakonovs as possible confidently predicting that it’s impossible. What I worry about more right now, is all the people who will say it was completely obvious all along that quantum supremacy was possible, it’s just 1926 QM, so what was the point of doing it? 🙂"


Edit: Scott replies several more times in the blog comments if you want a deeper read.
posted by matrixclown at 12:27 PM on June 17, 2019 [7 favorites]


Well anyway, AI has ruined chess and go for mere humans.
posted by sjswitzer at 12:28 PM on June 17, 2019 [1 favorite]


It will be funny when they achieve "quantum supremacy" right as we run out of helium.
posted by sjswitzer at 12:33 PM on June 17, 2019 [3 favorites]


On the one hand we have fusion energy, which appears to still be 20 years out and holding, on the other we have AI which has finally taken off.

I'm not sure I'd agree with this entirely. I think AI has "taken off" in the sense that the term has been redefined to include a lot of stuff that wasn't really considered AI the last time around. Nobody is really trying to do AI the way people were trying to do it in the 80s, shooting for general AI via expert systems and stuff. That's all considered pretty naive and hopelessly optimistic now. There was a sort of pivot / rebranding / scope decrease, which the popular press seemingly didn't notice or care about in its excitement.

Quantum computing is a more specific term, such that I think it might be harder to rebrand, but it's possible that it'll suddenly "take off" at some point when some cunning manager somewhere decides to call their new tunneling-transistor-based processor (any transistor below a certain size has to account for and use quantum tunneling, so this is bound to occur and is being worked towards) a "quantum computer". Is it quantum? Yes. Is it a computer? Yes. Is it a "quantum computer", in the sense that we currently imagine them? No, but I CAN'T HEAR YOU, just like the people who are still tilting at windmills that most of the AI stuff going on today really isn't that new and NOBODY CARES BECAUSE SHUT UP AND TAKE MY VC MONEY. That's how it goes.

My guess? We'll go through a few more hype cycles and then someone will figure out how to slam the term onto whatever technology exists, everyone will claim it's a huge breakthrough, a bunch of rich people will get inordinately richer, and progress will continue to grind on slowly and steadily, as it tends to do.
posted by Kadin2048 at 12:53 PM on June 17, 2019 [10 favorites]


Well anyway, AI has ruined chess and go for mere humans.

A lot of chess players are very excited by the prospect of learning from the games of AlphaZero; the takeaway seems to be, so far, that exciting and creative methods of play that grandmasters have neglected are possible.
posted by thelonius at 1:02 PM on June 17, 2019 [8 favorites]


I thought to add that caveat, thelonius; you have put it better than I would have. Other caveats also noted! I mentioned it only to point out that a once-fallow research field eventually became fruitful and continues to be so (for instance, in natural language translation... but also not perfect!).

Anyway, the AI thing threatens to be a derail. :(
posted by sjswitzer at 1:15 PM on June 17, 2019


I dunno, I think AI is a relevant comparison in some ways. People I know who work in the "AI" field frequently make jokes along the lines of "as soon as it works it's not AI anymore." Which says a couple things - one, that hard problems look easy once they've been solved, and two, that the broader cultural idea of "AI" is something bigger and more abstract than the specific problem-solving most AI-as-a-domain-of-CS researchers are directly concerned with.

I don't think the situation with quantum computing is exactly the same - the phrase probably means a lot less to the true layperson, other than "some kind of complex and powerful computer," and as far as I know it's not like we're seeing quantum computing knocking down impressive real-world problems like machine learning has - but I could see someone like Aaronson having a similar feeling.
posted by atoxyl at 1:34 PM on June 17, 2019 [2 favorites]


I may be misunderstanding something, but the argument about how many parameters are in play seems disingenuous. It's true that 2^N continuous parameters, as opposed to the values of N classical transistors, make for a much more complicated notion of the state of the system, but if we approached classical computing as a matter of directly constructing valid transitions between states drawn from the N-dimensional space of transistor values, then it, too, would be practically intractable. Luckily, we abstract transistor state behind higher-level functions that both save us from having to consider that state directly and rule out vast swaths of the configuration space from being reachable in a transition from a known-valid state. I don't see why that wouldn't be true for quantum computing, too.
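(For concreteness, the number being leaned on is just the dimension of the state space: describing N qubits takes 2^N complex amplitudes where N bits suffice for a classical register. In the usual notation,

    |\psi\rangle = \sum_{x \in \{0,1\}^N} \alpha_x |x\rangle, \qquad \alpha_x \in \mathbb{C}, \qquad \sum_x |\alpha_x|^2 = 1,

and the question is whether abstraction layers can tame that space the way they tame the classical configuration space, rather than anyone enumerating the amplitudes directly.)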
posted by invitapriore at 1:42 PM on June 17, 2019 [5 favorites]


The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables

So what? I mean, I don't have much faith that QC will be especially useful in the near future, but the above objection is an objection to everything anyone has ever done.
posted by thatwhichfalls at 2:11 PM on June 17, 2019 [2 favorites]


Quantum computing is simultaneously imminent and impossible until you try to measure it.
posted by It's Raining Florence Henderson at 2:21 PM on June 17, 2019 [3 favorites]


As far as I understand it, the whole point of quantum computing is that you have to consider the entire set of qubits as a single multivariate space, otherwise it doesn't work. Classical computing builds up nicely through many levels of abstraction, from the individual switching device through gates to higher-level functional blocks to the complete von Neumann (or vN-esque, if you want to quibble) architecture. Along the way, people like Shannon showed how Boole and de Morgan could provide the tools, and Turing et al. said the journey was worthwhile.
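(As a toy sketch of that classical ladder, purely for illustration: everything below is built from a single NAND primitive, and each layer only needs to know about the one directly beneath it. The names are just hypothetical Python, not any real library.)

def NAND(a, b):
    # the lone switching primitive everything else is built on
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    # a functional block composed purely of gates: returns (sum, carry)
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # -> (0, 1)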

Among many other implications of this, it meant that as soon as integrated circuits became possible, the combination of abstraction layer isolation and Moore's Law meant that useful and profitable advances into commercial computation could be made piecemeal. The question around the 4004 wasn't "how far does this get us to the billion-transistor Xeon?" but "Can we sell enough of these things into calculators and traffic lights to make it worthwhile?" (a question that had no clear answer until they tried it). But by then, other approaches to highly integrated digital computation were advancing - Intel stumbled into the right niche at the right time; it wasn't the lightning flash of genius that ignited a whole industry by itself.

QC has nothing like this. It has the potential to do a small but very interesting set of tasks far better than classical computing can or will, but it has enormous engineering challenges, many of which have no clear solution. There is not, nor do I think there can be, a QC equivalent of the 4004, let alone the Z80 or 8088. It's as if we had to engineer a (where are we now?) 28-core Xeon from the starting point of six germanium transistors hand-wired on a tag strip as a flip-flop, with no useful intermediate steps - and when we get there, it won't even run Tetris.

It could happen. Lots of people and lots of dollars want it to happen. I don't see the pathway to making it happen, though, not without some fundamental discoveries in process or protocol.
posted by Devonian at 2:28 PM on June 17, 2019 [7 favorites]


On the one hand: "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits" - as other posters have said, all the problems of noise are factored into a single "characteristic physical gate error rate" and are well understood by researchers.

On the other, suppose we do finally hit a sustained Moore's Law style growth in quantum computing gate counts, where the number of gates doubles every 18 months. We're at ~100 (a little less, I think), so around 18 doublings are needed before we reach the required 20 million qubits. That would mean that, *literally 27 years from now*, that quantum computer becomes available, and then we can break your bank website's encryption in another 8 hours after that. This is actually fairly comforting, because it means that even in the optimistic case we have a good few years to settle on "post-quantum crypto" schemes to keep our online life safe.
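(Back-of-the-envelope, with the numbers assumed above: ~100 qubits now, 20 million needed per the linked paper, and an assumed 18-month doubling time.)

import math

qubits_now = 100            # rough current count, as assumed above
qubits_needed = 20_000_000  # the estimate from the linked paper
doubling_years = 1.5        # assumed Moore's-Law-style doubling time

doublings = math.ceil(math.log2(qubits_needed / qubits_now))
print(doublings, doublings * doubling_years)  # -> 18 doublings, 27.0 years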

Is there a way to make a time-space trade-off, so that e.g. you only need 10 million quantum gates but take a day instead of 8 hours? You can often do this with classical computing, which is why only non-CS types balk at the idea that your Turing machine's tape is infinite. It feels like this is a big NO for QC: you have to be able to fit the whole problem into the quantum gates at one time; otherwise, we'd use our 50-gate quantum computers to serially simulate the 10-million-gate ones, and similarly we'd use our digital computers to simulate 50-gate quantum computers that serially simulate the huge ones, and get the speed-up of quantum search using the devices we already have. If someone has a reference that spells this out, I'd love to read it.
posted by the antecedent of that pronoun at 2:31 PM on June 17, 2019 [3 favorites]


The second illustration (direct link) is a reference to the Flammarion engraving.

Wow! Nice catch, and what a brilliant reimagining of it! What are the things the viewer's looking at, though - hydrogen molecules with the protons depicted as triplets of quarks?
posted by Joe in Australia at 6:31 PM on June 17, 2019 [1 favorite]


They look like electron orbitals.
posted by thatwhichfalls at 7:52 PM on June 17, 2019 [2 favorites]


This is a weird article, something I'd expect to see in the personal blog of a retired petroleum engineer, not in IEEE Spectrum. I suppose they ran it due to Dyakonov's reputation and to "generate discussion"? Which it achieved, to judge from the editor's note that he had to change "you didn't consider this" to "I'm not satisfied you considered it enough" after many people wrote in to say "here's all the places where we considered that".

It's been several years since I was especially clued-in on the state of quantum computing, but even from the little I know there's quite a bit in this article that jumps out as being contrarian in the most negative sense of the word. It does seem like he's trying to scare everyone with large numbers, especially that bit about how extraordinarily large the number of continuous variables describing the state would be—which from my understanding is completely irrelevant.

QC has nothing like this.

That's not really true; if anything, the hype around QC has led people to dive happily into the world of designing interesting abstractions without concern for the reality of exactly how many qubits we can physically use in the present day. So we have quantum gates (which often do represent something physically realizable) that only interact with one, two, or a few qubits and can be assembled modularly into quantum circuits, which can themselves be assembled modularly into larger quantum circuits. There's no end of quantum programming languages that abstract that further into something more familiar as a programming language, even if they mostly only get run against emulators that do have to keep track of all those zillions of states. The programming languages usually have a mixed classical/quantum semantics, oriented toward the predicted future where quantum computers are some kind of coprocessor linked with a conventional computer architecture and used to handle the steps of a computation where they're suited.
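(To make that concrete without leaning on any particular vendor's toolkit, here's a tiny hand-rolled state-vector emulation in Python: gates are small unitaries, a circuit is just their product, and the emulator has to carry all 2^n amplitudes, which is exactly the blow-up being argued about upthread.)

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # two-qubit gate, control = qubit 0
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)                            # 2 qubits -> 2**2 amplitudes
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I2) @ state                 # Hadamard on qubit 0
state = CNOT @ state                           # entangle into a Bell state
print(np.abs(state) ** 2)                      # -> [0.5 0.  0.  0.5]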

So yes, people absolutely are planning on there being a QC equivalent of the 4004, only it'd be a QC equivalent of the 8231.

That model of mixed quantum/classical computation is also one way of getting that time-space tradeoff, the antecedent of that pronoun: if there is a way to design an algorithm that can usefully partition the data, divide-and-conquer style, you can use the classical CPU in a big for loop to run 1000 iterations through the 1000-qubit coprocessor and get something like the equivalent of a 10,000-qubit computation. (The caveat is that "something like" is pulling a lot of weight, as is that "if there is a way".) Although I think your analogy with classical Turing machines is rotated 90°: the number of qubits is more like the number of cells on the tape, while the program is made up of the pattern of which gates to apply to which qubits, which is limited more by the physical constraints of the interferometer or microwave signal generator or whatever the gates are implemented with.
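(A purely hypothetical sketch of that loop, with stub function names that don't correspond to any real quantum SDK; the "coprocessor" here just echoes its input so the sketch actually runs.)

def run_on_quantum_coprocessor(chunk):
    # Stand-in for compiling a circuit for a fixed-width (say 1000-qubit)
    # device, submitting it, and reading back measurement results.
    return chunk

def solve(problem, width=1000):
    # Classical outer loop: chop the problem into coprocessor-sized pieces,
    # hand each piece to the quantum side, then stitch the partial results
    # back together with ordinary classical post-processing.
    chunks = [problem[i:i + width] for i in range(0, len(problem), width)]
    partials = [run_on_quantum_coprocessor(c) for c in chunks]
    return [result for p in partials for result in p]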
posted by traveler_ at 10:05 PM on June 17, 2019 [2 favorites]


At least we still have the Cannae drive.
posted by um at 10:48 PM on June 17, 2019


I think, Traveler_, that our posts are not mutually exclusive.

I'm reminded of the somewhat ironically parallel case of FPGA supercomputing. There is an unanswerable argument (in classical computing) that, while anything you can do in software you can do in hardware and vice versa, the hardware option always has the potential to be the faster of the two.

Thus FPGAs, which are vast seas of hardware that can be reconfigured at will, would seem to be the best of both worlds - you compose your problem as a hardware design, send it like software into the guts of your FPGA supercomputer, and win big. FPGAs don't offer the sort of mind-warping speed-ups that QCs do, but they have the advantage of actually existing in a form you can order online. The engineering has been done.

And... it hasn't happened. It turns out that defining your problem in hardware to get that speed-up is highly contingent both on your problem being amenable to that approach and on your being an extremely clever FPGA whisperer. There aren't enough of either to make the idea fly, at least not at significant scale. As soon as you start to build software tools to massage your problem automatically into an FPGA format, you lose out to high-performance, general-purpose, standard-architecture CPUs. So guess where the money and brains go when they need a problem sorted out?

It could be that a classical/QC mix co-processor will actually do something useful: in fact, if QC is to do anything useful in any sane timeframe, it'll have to be something like that. But are there concrete proposals with concrete applications and a non-handwavey roadmap? There's no market today for general-purpose maths co-pro lineal descendants of the 8231 - the 4004's spawn has eaten them all - unless you count GPUs. They're proof that if there is a worthwhile niche, then hardware that does that niche very well, at the expense of being generally applicable, will fill it quickly. I don't know what niches exist that QC, on any predictable 3-to-5-year horizon, will find sustainable.

It's instructive to see how QC is used in nature - fabulous one-off niche applications, but never ever in pattern processing.
posted by Devonian at 3:57 AM on June 18, 2019 [1 favorite]


Optical lattice/ion trap quantum computers have a really good error budget and are really scalable. I'm convinced that in ten years, those, at least, will be "good enough" for a few small applications. IonQ/Chris Monroe have a really intelligent business pitch to give about this.

Plus, who knows when a condensed-matter-theory breakthrough will finally make solid-state quantum computing less error-prone than it currently is.

Easy breaking of RSA is probably very far off. It just needs too many bits. But quantum simulation and some optimisation algorithms seem to be already practical, or close to practical, with order 100 qubits. And that is very close to present day.

The author complains that an effectively infinite coherence time seems super hard to achieve. And that's okay: even a coherence time of 10 seconds or so still gives plenty of room for interesting, practical computation.
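(Rough sense of scale, assuming, purely for illustration, a microsecond-scale gate time:)

coherence_time_s = 10.0                 # the figure mentioned above
gate_time_s = 1e-6                      # assumed gate time, illustration only
print(coherence_time_s / gate_time_s)   # ~1e7 sequential gates within coherence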
posted by sidek at 4:58 AM on June 18, 2019


I think you're right, Devonian: time is going to have to tell whether any kind of actual benefit ever emerges enough for QC to have a sustainable role in the computing landscape, and the analogy with FPGA devices is apt (I do remember all the hype around dynamically configurable circuits there for a while). But I guess when I was talking about quantum coprocessing I was thinking of a broader class of possibilities than just the 8231 analogy. It could be that QC finds its niche in just a few instructions on an otherwise-conventional-seeming CPU. Or maybe it remains a large exotic cabinet in the corner of a datacenter, representing 0.01% of the total computing power in the building and getting 0.01% of the tasks, but those cloud customers really like having it to run their tasks. I do wonder, if they ever do start seeing practical use, whether people will end up arguing over whether what they're used for counts as "computation" at all.
posted by traveler_ at 9:33 PM on June 18, 2019 [1 favorite]




This thread has been archived and is closed to new comments