
Since we know that there are oracle problems which can be solved on a quantum computer but not on an NP machine with the same oracle, the idea of a nondeterministic (i.e. infinitely parallel) machine is not sufficient to describe what is going on in quantum mechanics.

The question then arises--- what is? What is the natural classical machine which can simulate a quantum computer efficiently in polynomial time? What is the complexity class of this natural machine?

2 Answers


The smallest 'simple' complexity class which is known to contain BQP (and suspected to do so strictly) is the class PP. As PP is contained in PSPACE, this yields a potentially tighter bound than the usual PSPACE simulation in your hypothetical machine model.

Translating from a more traditional description of PP in terms of nondeterministic Turing machines: a generic computation for solving a PP problem (these are 'yes/no' problems, like those in P and in NP) looks like a branching program of the sort you're interested in, where each of the 'threads' submits a vote for whether the answer is 'yes' or 'no'. If a majority (fifty percent plus one) vote 'yes', then the machine produces the answer 'yes'; otherwise it produces the answer 'no'. It is straightforward to show that PP contains NP; and PP was proven to contain BQP by

however, I find that a simpler approach to the proof is presented by

which, like the traditional proof that BQP is contained in PSPACE, uses an approach in terms of a sum-over-paths; but unlike that approach restricts itself to paths with weights $\pm 2^{-n/2}$.
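As a toy illustration of that sum-over-paths idea (a minimal sketch of my own, not the cited proof itself): for a one-qubit circuit consisting of g Hadamard gates, every Feynman path contributes a weight of exactly $\pm 2^{-g/2}$, and summing over all paths recovers the amplitude.

```python
import itertools
import math

def h_amp(b, bp):
    # single Hadamard transition amplitude <b'|H|b> = (-1)^(b*b') / sqrt(2)
    return (-1) ** (b * bp) / math.sqrt(2)

def path_sum_amplitude(x, y, g):
    """Amplitude <y| H^g |x> computed as a sum over Feynman paths:
    every path through g Hadamard gates carries weight +/- 2^(-g/2)."""
    total = 0.0
    # a path fixes the intermediate bit after each of the first g-1 gates
    for mid in itertools.product((0, 1), repeat=g - 1):
        path = (x,) + mid + (y,)
        weight = 1.0
        for a, b in zip(path, path[1:]):
            weight *= h_amp(a, b)   # accumulates to +/- 2^(-g/2)
        total += weight
    return total

print(path_sum_amplitude(0, 0, 2))  # H*H = I, so the paths sum to ~1.0
```

Each individual path weight is computable in polynomial time; the hard part, of course, is that there are exponentially many paths to sum.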

  • Yes, this is the best answer. I still like SHM-P, because it's a nice complexity class that nobody studies because of their codeless upbringing. – Ron Maimon Sep 14 '12 at 00:41

First let us note that if you extend C to infinite memory, and consider running UNIX on the Turing machine, then an NP machine is one which is allowed to use the UNIX fork instruction to produce two independent processes with duplicated copies of memory, at no time cost; the program terminates when any one of the processes terminates with an output.

That this is true is easy to prove: given any nondeterministic automaton, fork at each step according to the number of outcomes; when any fork halts, you kill all the other processes. This simulates a nondeterministic machine with "fork". To go the other way, simulate UNIX on your nondeterministic machine, and take a nondeterministic step at each "fork". They are equivalent concepts.
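The forking picture above can be sketched in a few lines. This is a minimal illustration, with two assumptions of my own: the exponential branching is replayed sequentially with a work queue (on the hypothetical machine each fork costs no time), and SAT is encoded by my own convention of signed integers as literals.

```python
from collections import deque

def np_machine_sat(clauses, n_vars):
    """Emulate the fork-based NP machine on SAT: each 'process' forks on the
    next unset variable; a process executes halt only when it succeeds.
    Literals are signed integers: +i means variable i true, -i means false."""
    queue = deque([()])              # each entry: a partial assignment
    while queue:
        partial = queue.popleft()
        if len(partial) == n_vars:
            # leaf process: check the formula, halt only on success
            if all(any(partial[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                return partial       # this process halts; all others are killed
            continue                 # a failing process never executes halt
        # fork: two child processes with duplicated copies of memory
        queue.append(partial + (False,))
        queue.append(partial + (True,))
    return None                      # the polynomial-time watchdog reports "fail"

# (x1 or not-x2) and (x2 or x3)
print(np_machine_sat([[1, -2], [2, 3]], 3))
```

The two comment-thread caveats below (a process halts only on success; a global watchdog counts polynomial time and reports "fail" otherwise) are reflected in the `continue` branch and the final `return None`.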

The natural generalization of this is to use the UNIX threading instruction to produce parallel threads rather than parallel processes. In this case the processes can share memory with each other, but one has to be careful: exponentially many processes will be using exponentially much memory, so no single process can search all of it. With less risk of mistake, you can allow the processes to send fixed-length messages to another process whose process label they already know. This is equivalent to allowing any pair to share memory, since syncing all the memory used up to time t only takes time polynomial in t.

Observation: A probabilistic version of this machine can simulate any quantum process.

Given a finite-size exponentiated-Hamiltonian matrix U on N states, you want to compute the quantum evolution to time T, then reduce the state according to a measurement, then compute the quantum evolution again. To do this, you fork a machine to simulate each path in the path integral, keeping track of its U-matrix weight, and you record the final state of each forked process.

Then you congeal the processes: each process sends a message to the nearest process with the same final state, the two amplitudes are added, and the process with the smaller number shuts down. This congeals your state to half the states. Then you congeal again, and in log(T) steps you know the amplitude for every state. This also allows you to rotate by a unitary you can construct before making a measurement.
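A minimal sequential sketch of these two steps (one forked process per path, then congealing equal final states), using the Hadamard matrix as a toy U. One simplification of my own: the pairwise log-round messaging is collapsed into a single dictionary sum, which produces the same totals.

```python
import math
from itertools import product

def path_processes(U, x0, T):
    """One forked 'process' per Feynman path of length T: each records its
    final state and the product of U-matrix elements along the path."""
    N = len(U)
    procs = []
    for path in product(range(N), repeat=T):
        amp, prev = 1.0, x0
        for s in path:
            amp *= U[s][prev]
            prev = s
        procs.append((path[-1], amp))
    return procs

def congeal(procs):
    """Processes with the same final state add their amplitudes.  (The machine
    does this pairwise in logarithmically many messaging rounds; summing into
    a dictionary computes the same totals.)"""
    out = {}
    for state, amp in procs:
        out[state] = out.get(state, 0.0) + amp
    return out

# toy U: the Hadamard matrix, evolved for T = 3 steps from state 0
s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]
amps = congeal(path_processes(U, 0, 3))
# since H*H = I, three steps of H equal one step of H
```

The congealed dictionary agrees with the column of U cubed, i.e. with direct matrix evolution, which is the check that the path bookkeeping is right.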

Then each process squares its amplitude, pairs up with another process, and one of the two is picked at random with probability proportional to the squared amplitudes. Again, after logarithmically many steps, you have picked a single process according to the squared-amplitude distribution.
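A sketch of this measurement step, with one detail made explicit that the correctness of the pairwise scheme requires: the surviving process must carry the combined weight of the pair forward. Under that assumption the final survivor is distributed exactly by the squared amplitudes (the Born rule).

```python
import random

def measure(amplitudes, rng=random):
    """Simulate the measurement step: each surviving process holds a state and
    a weight |amplitude|^2.  Pairs merge, keeping one state with probability
    proportional to weight and carrying the combined weight forward, so the
    survivor after logarithmically many rounds follows the Born rule."""
    procs = [(state, abs(amp) ** 2) for state, amp in amplitudes.items()]
    while len(procs) > 1:
        nxt = []
        for i in range(0, len(procs) - 1, 2):
            (s1, w1), (s2, w2) = procs[i], procs[i + 1]
            keep = s1 if rng.random() < w1 / (w1 + w2) else s2
            nxt.append((keep, w1 + w2))   # winner carries the combined weight
        if len(procs) % 2:                # odd process sits out this round
            nxt.append(procs[-1])
        procs = nxt
    return procs[0][0]
```

A quick induction shows why the carried weight matters: at every round, each state's probability of surviving its pair is its weight divided by the pair's total, so the final state s is chosen with probability w_s over the sum of all weights.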

This means that BQP is inside SHM-P. SHM-P includes NP, so it is not reasonable to expect that it equals BQP. It shouldn't be PSPACE either, since you are still limited to polynomial-time computation on any one of the threads.

  • This appears to be the classic sum-over-paths argument for why BQP is contained in PSPACE (a result which Scott Aaronson has more than once described as the grounds for Feynman's Nobel Prize). Whether this same algorithm suffices to show a stronger containment is not clear. – Niel de Beaudrap Sep 13 '12 at 18:34
  • As an aside, regarding your machine model: how do you ensure that the first process to 'halt' is one which yields an outcome of 'success' (e.g. finding a satisfying assignment to SAT)? Or, if the failing threads do not halt, how do you treat the case where none of the branches will succeed (as for an unsatisfiable formula)? – Niel de Beaudrap Sep 13 '12 at 18:35
  • @NieldeBeaudrap: 1. You execute the halt instruction on a process only when you succeed 2. you have a global process that just counts polynomial time and halts with "fail" if nothing else halts first. Regarding the Feynman sum-over-paths, this is also obvious from matrix quantum mechanics too, or old-fashioned time-dependent perturbation theory; Feynman's contributions are deeper--- the relation to imaginary time, the relativistic particle formalism, and the first relativistically invariant regulator. The same argument shows containment in SHM-P, as I showed above, and SHM-P is not PSPACE. – Ron Maimon Sep 13 '12 at 18:39
  • Fair remarks for the machine model. Regarding Scott's remarks, naturally Feynman showed more: I should have noted that his statements of that sort are tongue-in-cheek. --- Do I take it that you consider this a proof that P is strictly contained in PSPACE (given that you describe something which can simulate any algorithm in P, and which you believe cannot solve PSPACE complete problems)? – Niel de Beaudrap Sep 13 '12 at 18:52
  • @NieldeBeaudrap: regarding P<PSPACE, of course this doesn't prove anything of the sort! What kind of nonsense is this--- it is not a proof of anything except that SHM-P simulates BQP in polynomial time. There is no progress in this on the main open problems. – Ron Maimon Sep 13 '12 at 19:03
  • Sorry; I'm only clarifying as it's clear that SHM-P contains P, while you seem to assert that SHM-P is not PSPACE (while it seems as though it should be contained by PSPACE). It appears I don't really know how to evaluate what claims you would like to make, so I'll leave alone. – Niel de Beaudrap Sep 13 '12 at 19:13
  • "With less risk of mistake"... you may have to distinguish between average-case complexity and worst-case complexity. – Mitchell Porter Sep 13 '12 at 23:00
  • "kill all processes" has to happen instantaneously in this branching picture of NP, there can be exponentially many branches. The better, and more logical positivist, description of NP is as deterministic computation on input plus some advice bits. – zyx Sep 13 '12 at 23:51
  • @zyx: yes, kill all processes is instantaneous (output and unplug machine). The reason this description is preferable is because fork is actually used in the real world, and the generalization SHM is obvious in this picture, and not in others. – Ron Maimon Sep 14 '12 at 00:35
  • @MitchellPorter: I don't have to distinguish anything like that. SHM-P is a perfectly normal complexity class nobody talks about. One has to be careful to make sure that the communication is polynomial time, so you can't just use other processes' memory willy-nilly; you need to pick a process and start a dialog, and this takes steps of computation. – Ron Maimon Sep 14 '12 at 00:36
  • What is the definition of SHM-P? – zyx Sep 14 '12 at 00:47
  • @zyx: It's something I made up above: it's the decision problems you can solve in polynomial time using a forking machine (nondeterministic machine with an integer label for each branch) with communication between processes, so that process number i can pick process number j, at stage n, and set up a communication channel, and transmit data back and forth (with fixed time cost for each bit transmitted and received). This might be equivalent to PP, I don't know, since you can do polynomial polling to get PP outcome, and conversely, perhaps you can guess the communication and check. – Ron Maimon Sep 14 '12 at 01:10
  • I don't see how addressed messaging (by fixed deterministic programs) would help; any computational power from messaging to nearby paths in the Hamming metric can probably be done without communication, by modifying the program, and knowing which faraway paths to communicate with is the same as asking for NP or PRAM/parallel power from a P device. So as you say, maybe it is PP. The paper by Aaronson on PostBQP = PP sounds relevant but that's my superficial impression from hearing about it on Wikipedia a few minutes ago. – zyx Sep 14 '12 at 01:35
  • @zyx: addressed messaging gets past NP, because it allows you to simulate BQP. To know which distant machine you are supposed to poll for simulating BQP (and many other problems) does not require exponential resources--- you just arrange the process labels in such a way that you know the Feynman path endpoint from the label without requiring a search. PP is equivalent to the most trivial communication between processes--- namely polling. Allowing arbitrary communication, it's win-win: if SHM-P == PP, then SHM-P is the obvious way to prove BQP is in PP; if not equivalent, SHM-P is more natural IMO. – Ron Maimon Sep 14 '12 at 02:18