7

Leonid Levin said, "Exponential summations used in QC require hundreds if not millions of decimal places accuracy. I wonder who would expect any physical theory to make sense in this realm." See https://groups.google.com/forum/m/#!msg/sci.physics.research/GE5cz3xefCc/e0eh34MZGdwJ

Given that no machine has ever been designed to be sensitive to physical quantities to hundreds of digits of accuracy, how will quantum computing ever be possible in the real world?

To explain what I mean: in the QC model, the state vector has a dimension that is exponential in the number of qubits, and the squared magnitudes of its entries must sum to one. So if all of the entries are equal on a 100-qubit machine, each one is tiny; rounded to, say, one part in a billion, they all become zero, contradicting the fact that their squared magnitudes must sum to one. This is a big problem.
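Here is a minimal Python sketch of the arithmetic I have in mind (the register size and the rounding precision are just illustrative assumptions):

```python
# Illustration of the worry stated above: on an n-qubit register the uniform
# superposition has 2**n equal amplitudes, each of magnitude 2**(-n/2).
# If each amplitude were rounded to one part in a billion, every entry would
# become 0 and the squared magnitudes could no longer sum to 1.

n = 100                         # number of qubits (assumed for illustration)
amplitude = 2.0 ** (-n / 2)     # each entry of the uniform state, ~8.9e-16
rounded = round(amplitude, 9)   # keep only 9 decimal places

print(amplitude)                # ~8.88e-16
print(rounded)                  # 0.0
print(2 ** n * rounded ** 2)    # 0.0 instead of the required 1.0
```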

EDIT: I think the question is this: How can we possibly perform sensible measurements on quantum computers, given their extreme sensitivity?

Kyle Kanos
  • 28,229

3 Answers

14

If you believe the fault-tolerant threshold theorem for quantum computers, you do not require hundreds of digits of accuracy.

Levin does not believe this theorem. More precisely, he believes that the hypotheses required for the theorem to work do not apply to the actual universe.

I believe his mental model of quantum mechanics resembles the idea that the physics of the universe is being simulated on a classical machine which has floating point errors. I don't believe this is true.
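As a rough sketch of what the threshold theorem buys you (the error rates below are illustrative assumptions, not measured values):

```python
# Rough sketch of the threshold theorem's payoff.  With a concatenated code,
# if the physical error rate p per gate is below the threshold p_th, the
# logical error rate after k levels of concatenation scales roughly as
# p_th * (p / p_th) ** (2 ** k) -- doubly exponential suppression in k.
# The values of p and p_th below are illustrative assumptions.

p = 1e-4      # assumed physical error rate per gate
p_th = 1e-2   # assumed threshold error rate

for k in range(5):
    p_logical = p_th * (p / p_th) ** (2 ** k)
    print(f"level {k}: logical error rate ~ {p_logical:.1e}")

# A handful of concatenation levels pushes the logical error rate far below
# what any realistic computation needs, starting from only ~4 digits of
# physical accuracy rather than hundreds.
```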

Peter Shor
  • 11,274
  • Ok, now I see the point of disagreement. – Craig Feinstein Mar 30 '14 at 15:07
  • But what about the fact that QM is not exactly correct and has to be corrected by QED, which is only known to be valid to 10 or so decimals? – Craig Feinstein Mar 30 '14 at 18:12
  • The real question is whether the rules of the universe are exact unitary evolution or something else. If they're exact unitary evolution and you have locality of action (quantum field theories, including QED, satisfy these), then the fault-tolerant threshold theorem holds. If the universe has extra levels of weirdness under the quantum field theory, then it's not clear the hypotheses are satisfied. – Peter Shor Mar 30 '14 at 19:09
  • Where is the evidence that the unitary evolution is exactly unitary? The experiments that have been done to confirm QM could be interpreted as being consistent with approximately unitary evolution (to, say, 10 decimal places) just as well as with exactly unitary evolution. – Craig Feinstein Mar 30 '14 at 20:22
  • Beyond a certain number of decimal places, we don't know whether the evolution is unitary or not. If it's not, the hypotheses of our fault-tolerant threshold theorems are certainly not satisfied. But this doesn't mean the fault-tolerance protocols wouldn't work. I don't believe anybody has a concrete proposal for a kind of non-unitarity which would have to cause quantum computing to fail, and not cause any other observable changes in physics. – Peter Shor Mar 31 '14 at 21:18
  • But if QM is unitary only up to, say, 10 digits, how could QC work with your factoring algorithm, which requires computing the $2^n$-th root of unity in the Fourier transform part, where the state vector has dimension on the order of $2^n$? Ten digits of accuracy is too coarse for your algorithm to work. – Craig Feinstein Apr 01 '14 at 13:18
  • The factoring algorithm doesn't require the $2^n$-th root of unity. If you leave all the tiny phases out in the Fourier transform, the resulting transform is close enough that the factoring algorithm works fine. If you have 10 digits of accuracy in the unitary transformations for my algorithm, it will work fine, as long as you take fewer than $10^{10}$ steps. On the other hand, if you drop the unitarity condition on physics, it's not at all clear what happens. Presumably probabilities still add up to exactly 1; this is usually ensured by unitarity, so you need something to take its place. – Peter Shor Apr 01 '14 at 15:30
  • But what if you run your algorithm to 10 digits of accuracy on a 200-qubit machine where all of the coefficients in the state-vector are initialized to $1/2^{100}$? Then you will get zero for the state-vector in the end. This is what I was talking about in my question. This seems to me to be a big problem. – Craig Feinstein Apr 01 '14 at 16:19
  • You're operating under the same assumption as Leonid Levin: that the universe acts like a simulation of a quantum system on a classical computer which uses floating-point arithmetic. If this were true, probabilities wouldn't be guaranteed to add up to 1. I think we would have noticed. Physics shouldn't depend on which coordinates you pick to represent the quantum state vector. – Peter Shor Apr 01 '14 at 16:24
  • There are two possibilities: 1) physics depends on the coordinates. 2) Physics doesn't depend on the coordinates. Certainly, all of the experiments of modern physics show that possibility 2) holds. But at the same time, all of the experiments of modern physics have only been shown to be true to around 10 decimal places. So it seems to me that assuming that possibility 2) is true for more than 10 decimal places is a hasty generalization. – Craig Feinstein Apr 01 '14 at 16:39
  • If physics depends on the coordinates of the quantum state vector, it's not well-defined unless it also has some recipe for choosing which coordinates you should work in. – Peter Shor Apr 02 '14 at 14:35
  • So the heart of this debate seems to be whether our universe is discrete or continuous. – Craig Feinstein Apr 02 '14 at 15:17
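To make concrete the point in the comments above about dropping the tiny phases in the Fourier transform, here is a numerical sketch (the qubit count n and the number of kept phase bits m are illustrative assumptions, and the phases are truncated at the matrix level rather than gate by gate):

```python
import numpy as np

# Sketch: compare the exact QFT matrix on n qubits with an approximate one
# in which each phase fraction j*k/N is truncated to m binary digits.
# n and m below are illustrative assumptions.

n, m = 10, 8
N = 2 ** n
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

exact = np.exp(2j * np.pi * (j * k % N) / N) / np.sqrt(N)

# Keep only the first m binary digits of each phase fraction.
truncated = np.floor((j * k % N) * 2 ** m / N) / 2 ** m
approx = np.exp(2j * np.pi * truncated) / np.sqrt(N)

# Operator-norm distance between the two transforms: comparable to the size
# of the dropped phases (~2*pi*2**(-m)), i.e. small even though each phase
# keeps only about 2-3 decimal digits of accuracy.
print(np.linalg.norm(exact - approx, 2))
```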
4

Craig, I think you're confusing two important things. First, your original question was something along these lines: given the fact that we can expand a state ket $|\psi\rangle$ in terms of basis kets $$ |\psi\rangle = \sum_n c_n |\psi_n \rangle $$ and that there can be infinitely many $n$, let's consider a state which has equal probability of being in any of these (infinitely many) basis states. The question is: what happens when you measure this state? In QM, when you measure, you're supposed to get one of the basis states as the outcome of your measurement.

Now here comes your point: you presume that, if the probability of being in any given basis state is small enough, some sort of rounding error (?) will cause you never to measure the system in that basis state. You then reason that this happens for each and every basis state, so that you can never measure $|\psi\rangle$ in any basis state at all, which contradicts the original statement that $|\psi\rangle$ could be expanded as we did.

Fortunately for quantum mechanics, this reasoning is flawed. If you measure a state, you will always get some outcome. There is no such thing as a rounding error in that sense in nature; that would contradict all sorts of continuity theorems and the like.
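As a small sketch of this point (the number of basis states below is an illustrative assumption): sampling a measurement outcome from the Born-rule distribution always returns some basis state, no matter how tiny each individual probability is, and the probabilities sum to one by construction.

```python
import numpy as np

# Sketch: "measure" a uniform superposition over 2**20 basis states by
# sampling from the Born-rule distribution.  Each outcome has probability
# ~1e-6, yet a measurement always returns some basis state, and nothing
# rounds away the total probability.

n_states = 2 ** 20
probabilities = np.full(n_states, 1.0 / n_states)

print(probabilities.sum())                             # 1.0
outcome = np.random.choice(n_states, p=probabilities)  # always some outcome
print(outcome)
```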

Now, there is a more interesting and relevant question hidden in here. The fact is that quantum computers are extremely sensitive, and making controlled measurements without disturbing the system enough to ruin the computation is a legitimate problem in quantum computation. I added this to your question in an edit, because I think it's a question worth posing. I hope this clears up the confusion for everyone involved.

Danu
  • 16,302
  • In your edit, you got what I was saying correctly. My point was that QC is extremely sensitive, so how will it work in the real world? – Craig Feinstein Mar 30 '14 at 15:04
-6

Actually, it's possible, thanks to more powerful processors, but not common processors: that kind of technology requires extreme cold, only a few nanodegrees above 0 kelvin. In that cold the particles are far less agitated, so the electrons can flow easily through the computer. Look for a video called NOVA: Making Stuff Colder. Size matters too: if it's smaller, it's faster. See also NOVA: Making Stuff Smaller.

Danu
  • 16,302