17

If you have a classical computer with infinite memory and an infinite number of processors, and you can fork arbitrarily many threads to solve a problem, you have what is called a "nondeterministic" machine. This name is misleading, since the machine isn't probabilistic or quantum but simply parallel; unfortunately the name is standard in complexity theory circles. I prefer to call it "parallel", which is closer to everyday usage.

Anyway, can a parallel computer simulate a quantum computer? I thought the answer was yes, since you can fork out as many processes as you need to simulate the different branches, but this is not a proof, because you might be able to recohere the branches to solve a PSPACE problem that is not solvable on a parallel machine.
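
To make the worry concrete, here is a minimal sketch (my own illustration, not part of the original question; the function names and the Hadamard-only circuit are just stand-ins) of why forking one process per branch is not automatically a simulation: the branches carry signed amplitudes that must be recombined. Summing over all branch histories needs only polynomial memory, which is roughly the standard argument that BQP sits inside PSPACE, but the sum has exponentially many terms.

```python
import itertools
from math import sqrt

# single Hadamard gate matrix element: <b|H|a> = (-1)**(a*b) / sqrt(2)
def h(a, b):
    return (-1) ** (a * b) / sqrt(2)

def amplitude(n_gates, start, end):
    """Amplitude to go from |start> to |end> through n_gates Hadamards,
    computed as a sum over all intermediate branch histories.
    Memory is polynomial in n_gates; time is ~2**(n_gates - 1)."""
    if n_gates == 0:
        return 1.0 if start == end else 0.0
    total = 0.0
    for path in itertools.product([0, 1], repeat=n_gates - 1):
        states = (start,) + path + (end,)
        branch = 1.0
        for a, b in zip(states, states[1:]):
            branch *= h(a, b)
        total += branch        # branches interfere: the signs matter
    return total

# Two Hadamards in a row act as the identity: each |0> -> |1> branch exists
# individually, but the branches cancel when recombined.
print(amplitude(2, 0, 1))      # ~0.0 (destructive interference)
print(amplitude(2, 0, 0))      # ~1.0
```

The two |0> -> |1> branches exist separately yet cancel when added, which is exactly the "recohering the branches" that the question worries a purely parallel machine might not reproduce cheaply.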

Is there a problem strictly in PSPACE, not in NP, which is in BQP? Can a quantum computer solve a problem which cannot be solved by a parallel machine?

Jargon gloss

  1. BQP: (Bounded error Quantum Polynomial-time) the class of problems solvable by a quantum computer in a number of steps polynomial in the input length.
  2. NP: (Nondeterministic Polynomial-time) the class of problems solvable by a potentially infinitely parallel ("nondeterministic") machine in polynomial time.
  3. P: (Polynomial-time) the class of problems solvable by a single-processor computer in polynomial time.
  4. PSPACE: The class of problems which can be solved using a polynomial amount of memory, but unlimited running time.
  • I think a more interesting question is can a quantum computer that we could build simulate "infinite memory infinite processor number classical computer"? – Yrogirg Aug 20 '12 at 07:26
  • @Yrogirg: That's widely conjectured to be false--- that's the statement that BQP includes NP, and it's not taken seriously. It would require a quantum algorithm for an NP-complete problem. Of course, proving that it is false is hopeless, since it would prove P!=NP automatically. – Ron Maimon Aug 20 '12 at 08:12
  • I thought "infinite memory infinite processor number classical computer" should be capable of certain super-Turing computations, like testing every integer number. I was wondering whether some quantum computer could do it. – Yrogirg Aug 20 '12 at 08:39
  • @Yrogirg: Oh, I see--- that isn't true, because you need to initialize the processor state, to tell each of the processors what to do. You can simulate arbitrarily many processors on a single one, at a cost of memory and slowdown. If these processors don't share memory, and if all the processes are branched by a fork instruction as in Unix, you get a "nondeterministic" machine, and it is an open problem whether it is even asymptotically faster than a regular single-processor machine (although it is obviously true). This is P!=NP. (A single-processor sketch of this branch-by-branch simulation appears just after this comment thread.) – Ron Maimon Aug 20 '12 at 17:22
  • Suggestion to question(v1): For the benefit of new readers, it is a good idea to spell out abbreviations. – Qmechanic Aug 20 '12 at 19:00
  • @Qmechanic: I did it, but I am not sure it is necessary, as I was careful to state everything twice, first without jargon, then with. – Ron Maimon Aug 20 '12 at 23:52
  • @Yrogirg: Ron Maimon's statement that a quantum computer probably couldn't simulate an infinite-memory, infinite-processor computer is not only correct, it is an enormous understatement. Though Ron disagrees with me on this point, it is generally accepted in the domain of quantum information theory that quantum states don't contain even exponentially more information than classical states. – Niel de Beaudrap Aug 21 '12 at 00:22
  • @NieldeBeaudrap: I don't know what you mean by "disagree", I only disagree with the claim that polynomial resources suffice to simulate a quantum computer classically, and this is also well accepted. I think this is just terminology. – Ron Maimon Aug 21 '12 at 00:40
  • @RonMaimon: that's precisely what I mean that we disagree upon. By the techniques that we currently possess, polynomial resources are not enough. That is not to say that it cannot be done with only polynomial resources, admitting that those resources can be prepared randomly as with coin flips. – Niel de Beaudrap Aug 21 '12 at 00:55
  • @NieldeBeaudrap: So, you think that you can factor numbers in polynomial time by flipping coins? This is why one can be certain--- you can estimate the number of coin flips you need using a heuristic argument for the hardness of factoring. To my mind, it is certain (in the scientific sense) that you can't naively factor with polynomial resources (meaning without knowing more about factoring than what you learn in Shor's algorithm), and I also believe factoring is truly nonpolynomial, from the same heuristic argument (which explains why P!=NP) so there is no polynomial quantum state. – Ron Maimon Aug 21 '12 at 01:57
  • For others: Niel is misleading and wrong about the quantum state not "having" exponential information. What he notices is that you can't encode and extract more than the classical amount of information in a quantum state, but that's not saying anything--- the representation of a quantum state at intermediate times is not given by the amount of information you can get out of it through some measurement. This is the terminology difference, and it is essential: he means "what can you get out", and I mean "what do you need to say to simulate it". – Ron Maimon Aug 21 '12 at 02:33
  • @RonMaimon: This is comical. You're basically playing the role of Richard Feynman, inasmuch as he apparently couldn't be convinced that P vs NP was an open problem; only with you it's P vs BQP. I'm only saying that it hasn't been proven either way yet. That isn't to say that I believe that we can factorize with coin flips; quite the opposite, I do think that superpolynomial time is probably necessary to simulate quantum computers. But much of what is exponential in quantum states is also exponential in probability distributions, and we have no solid proofs. Does this bother you? – Niel de Beaudrap Aug 21 '12 at 12:25
  • I know what you are saying, but this representation problem of quantum states is what people have been banging their heads on for 80 years: probability distributions have a clear computational reduction, namely Monte Carlo, and quantum systems don't have one. I don't want to leave the impression that there is a clever reduction out there, because it creates uncertainty about the 't Hooft threads that are the main inspiration for this question. It's ok to speculate given that we don't know how to prove anything, but I consider Landauer (?) reversible computation to give good heuristics. – Ron Maimon Aug 21 '12 at 16:17
  • @NieldeBeaudrap: Stop it! I put thought into when I go jargon-busting, and this is a case where it is needed. I will never say "nondeterministic machine" in my life without prefacing it with "parallel machine", since I will never participate in erecting jargon-walls to keep outsiders out. This sort of stuff keeps the field of complexity permanently stuck in the dark ages. – Ron Maimon Aug 23 '12 at 05:06
  • @RonMaimon: I edited it because it's polemical. There is nothing misleading in calling it "nondeterministic": a computation with unboundedly many processors is only one way to describe the model, and is no more physical than one making guesses nondeterministically. You are deliberately discouraging people from acquiring the tools people would require to assess complexity literature on their own by reverting my edit which provided explanations and links to the existing concepts. If you think complexity is in the dark ages, whatever is motivating you to care about NP, anyway? – Niel de Beaudrap Aug 23 '12 at 09:48
  • @NieldeBeaudrap: What do you mean "making guesses nondeterministically"? This is the type of obfuscatory garbage people in this field write. There is no way to describe a "nondeterministic machine" as anything "nondeterministic". The reason for the name is that a forking automaton has 2 outputs for a given input, and therefore has "nondeterministic" evolution. This is a stupid convention, and I'm busting it. Explaining the thing clearly does not discourage anything, it only shows up the incompetence of the people. I am explaining stuff simply that you folks incompetently make opaque. – Ron Maimon Aug 23 '12 at 14:03
  • @RonMaimon: A "nondeterministic" machine in the CS sense is one in which there is one processor, but no specification of which of the permitted transitions it may explore, not even probabilistically; it is simply not determined, hence the name. It's non-physical, but then that concept was defined by logicians who didn't put a priority on realism of physical evolution. If you prefer a different idiom, that's fine. But that doesn't make the standard terminology "non-standard". As for our competence or obfuscation: once you've managed to surpass the state of the art, do please let us know. – Niel de Beaudrap Sep 11 '12 at 18:00
  • @NieldeBeaudrap: You are totally annoyingly wrong. The nondeterministic machine makes all the transitions at once, so if it can go from state A to "B and C" it goes from state A to state B and state C both at once. If any of the successor states halt, it is said to halt. This is why it was called "non-deterministic"; it is a stupid name, and it is called "parallel" by everyone else. There is no state of the art to surpass, nobody in this field has any real results. – Ron Maimon Sep 11 '12 at 18:49
  • @Ron: leaving aside what grounds you have to make authoritative statements about how models of computation are described, and how they were consequently named -- if the state of the art is in fact trivial, surpassing it ought also to be trivial. So it's heartening to hear you say so. Godspeed. – Niel de Beaudrap Sep 12 '12 at 12:29
  • @NieldeBeaudrap: The "grounds" are that I understand what a nondeterministic machine is, from reading the definition. It's a parallel machine, that's what it is, there's no debate possible. Making progress requires working on it, and it's not my favorite thing to think about (although I think it's very important). I thought a little bit this week, spurred by your challenge, but I didn't get anywhere. I am not disheartened, because this is pretty much the same as all other folks in this field. My line of thought is reversible computation and waste bits, I think this is key. – Ron Maimon Sep 12 '12 at 18:25
  • @RonMaimon: indeed it shouldn't be discouraging, but it shows that maybe "incompetence of the majority of people working on this topic" is not an experimentally justified theory for stagnation in this field. As to definitions, the ones I know make no reference to computing in parallel either; they refer to the existence of computational branches without remarking on how they should be found (by lucky guesses or by brute force computation). Anything else is a semantic gloss. As such machines don't actually exist, and neither description is more useful, there's no basis for refutation. – Niel de Beaudrap Sep 12 '12 at 19:08
  • @NieldeBeaudrap: Parallel machines do exist, calling them "imaginary" is silly. You can do a non-slowing-down fork instruction on a machine with multiple processors, people do it all the time now, and you can imagine a machine with a large number of independent processors. This is what a nondeterministic machine is, and the tripe about it being "unphysical" or "mysterious" is annoying. I didn't say the people working in this field are incompetent, I said they are obscurantists. There's a huge difference. Logicians are competent obscurantists for example. – Ron Maimon Sep 13 '12 at 01:40
  • @Ron: there do not exist any computers which can potentially double the number of processors working on a problem at any point in time. If you're satisfied with solving problems like SAT on 30 bits, then yes, a server farm of just over a billion networked simple processors suffices, and initialising them won't take too long with a network topology in 3+1D. But it simply doesn't scale; the structure of spacetime itself works against getting the needed resources. If you make do with a fixed # of processors, you then can no longer complete in poly-time, unless e.g. P = NP. – Niel de Beaudrap Sep 13 '12 at 09:40
  • @NieldeBeaudrap: I see what you mean--- the rate of processor allocation is too large to be physical. But it's not "unphysical" like a halting problem is. I don't like it when people call it unphysical, it's just "infinitely parallel". – Ron Maimon Sep 13 '12 at 16:16
  • @RonMaimon: would you consider infinite energy to be unphysical? If not, why not the energy required for infinite parallelism? What if we took all of that energy and put it into a single processor to make it compute infinitely quickly (as in a 'Zeno' processor), to actually obtain answers to the halting problem? To me, these questions of infinity (or exponentials) are not identical, but they are certainly equivalent in that they are resources which we could never hope to use to obtain locally an answer to a difficult computational problem. So they are all unphysical as far as I'm concerned. – Niel de Beaudrap Sep 13 '12 at 16:22
  • @NieldeBeaudrap: Ok, ok, we agree on this, it's just a question of "potential infinity" and how you explain things. I don't like explaining things that are simple so that they sound mysterious--- this is obscurantism--- and if a student asks you "what is a nondeterministic machine?" You can say "A machine with so many processors that UNIX's 'fork' instruction is cost free, no matter how many times you use it." – Ron Maimon Sep 13 '12 at 16:59
  • @NieldeBeaudrap: By the way, this does lead to a simple thing I don't see anywhere in the literature--- if you allow the machines to keep a label identifying the other processes, and trade their results with other machines, compare notes as they are running, I think you get a bigger class intermediate between NP and PSPACE. Let me call this hypothetical class "SHM-P" (for unix shm--- shared memory). This is the natural polynomial thing that strictly includes BQP and that isn't PSPACE (at least not obviously). – Ron Maimon Sep 13 '12 at 17:05
  • @RonMaimon: we're converging on an approximate agreement; though your characterization of a nondeterministic machine would be like me describing an electron as a tiny hard ball of electrical charge which spins on its axis, but which is so sensitive to magnetized measurement devices that it jitters and swings to align with or against any large magnetic field it encounters -- it's a coarse description which misses much. As for labelled processes: the problem is to define how the "processors" would exchange results in a way which agrees with tensor product structure / accounts for entanglement. – Niel de Beaudrap Sep 13 '12 at 17:10
  • @NieldeBeaudrap: We are not converging on anything! I have not changed my mind in any way in this discussion--- you are wrong to say my characterization is false, stop it; it is not a coarse description, it's the friggin definition! It's a fine characterization of what a "nondeterministic machine" is. NP is a trivial, obvious concept. You are wrong about entanglement--- you can simulate exact BQP in SHM-P, it is easy to do iterated matrix multiplication on this machine. You asked for something which surpasses the state of the art, SHM-P is it. – Ron Maimon Sep 13 '12 at 17:47
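
The "free fork" picture debated in the comments above can be pinned down with a small sketch (my own toy example; the function names and the 3-variable formula are not from the thread): a nondeterministic acceptance condition, simulated on one processor by walking the branches one at a time, which is exactly the exponential slowdown the comments refer to.

```python
from itertools import product

def nondet_accepts(verify, n_choices):
    """Single-processor simulation of a 'free fork' machine: instead of
    forking one process per guess, walk the 2**n_choices branches in turn."""
    return any(verify(bits) for bits in product([0, 1], repeat=n_choices))

# toy NP-style instance: is (x or y) and (not x or z) satisfiable?
def verify(bits):
    x, y, z = bits
    return bool((x or y) and ((not x) or z))

print(nondet_accepts(verify, 3))   # True, found by brute-force branch search
```

Whether such a single-processor simulation can ever avoid the exponential blow-up is, of course, the P vs. NP question itself.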

3 Answers

22

This has been a major open problem in quantum complexity theory for 20 years. Here's what we know:

(1) Suppose you insist on talking about decision problems only ("total" ones, which have to be defined for every input), as people traditionally do when defining complexity classes like P, NP, and BQP. Then we have proven separations between BQP and NP in the "black-box model" (i.e., the model where both the BQP machine and the NP machine get access to an oracle), as mmc alluded to. On the other hand, while it's very plausible that those would extend to oracle separations between BQP and PH (the entire polynomial hierarchy), right now, we don't even know how to prove an oracle separation between BQP and AM (a probabilistic generalization of NP slightly higher than MA). Roughly the best we can do is to separate BQP from MA.

And to reiterate, all of these separations are in the black-box model only. It remains completely unclear, even at a conjectural level, whether or not these translate into separations in the "real" world (i.e., the world without oracles). We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP. After years thinking about the problem, I still don't have a strong intuition either that BQP should be contained in NP in the "real" world or that it shouldn't be.

(Note added: If you allow "promise problems," computer scientists' term for problems whose answers can be undefined for some inputs, then I'd guess that there probably is indeed a separation between PromiseBQP and PromiseNP. But my example that I'd guess witnesses the separation is just the tautological one! I.e., "given as input a quantum circuit, does this circuit output YES with at least 90% probability or with at most 10% probability, promised that one of those is the case?")
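
As a minimal sketch of that tautological promise problem for a toy single-qubit circuit (my own illustration; the simulator, the gate list, and the convention that "YES" means measuring |1> are stand-ins, not from the answer), one can compute the acceptance probability exactly and report which side of the promise it falls on, with promise-violating inputs left undefined:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # NOT gate

def accept_probability(gates):
    """Brute-force state-vector simulation of a single-qubit circuit;
    'accept' means measuring |1> at the end."""
    state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
    for gate in gates:
        state = gate @ state
    return abs(state[1]) ** 2

def promise_answer(gates):
    p = accept_probability(gates)
    if p >= 0.9:
        return "YES"
    if p <= 0.1:
        return "NO"
    return "promise violated"    # the problem's answer is undefined here

print(promise_answer([X]))   # YES (accepts with probability 1)
print(promise_answer([H]))   # promise violated (probability is 0.5)
```

The brute-force simulation takes exponential resources in the number of qubits; a quantum computer answers the same promise question trivially by running the circuit.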

For more, check out my paper BQP and the Polynomial Hierarchy.

(2) On the other hand, if you're willing to generalize your notion of a "computational problem" beyond just decision problems -- for example, to problems of sampling from a specified probability distribution -- then the situation becomes much clearer. First, as Niel de Beaudrap said, Alex Arkhipov and I (and independently, Bremner, Jozsa, and Shepherd) showed there are sampling problems in BQP (OK, technically, "SampBQP") that can't be in NP, or indeed anywhere in the polynomial hierarchy, without the hierarchy collapsing. Second, in my BQP vs. PH paper linked to above, I proved unconditionally that relative to a random oracle, there are sampling and search problems in BQP that aren't anywhere in PH, let alone in NP. And unlike the "weird, special" oracles needed for the separations in point (1), random oracles can be "physically instantiated" -- for example, using any old cryptographic pseudorandom function -- in which case these separations would very plausibly carry over to the "real," non-oracle world.
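
As a rough sketch of what "physically instantiating" a random oracle with a cryptographic pseudorandom function could look like (my own illustration; the class name and interface are hypothetical and the keyed hash is just one possible choice of pseudorandom function), each oracle query can be answered with one bit of a keyed hash:

```python
import hashlib
import hmac
import os

class PseudorandomOracle:
    """Stand-in for a 'random oracle': each query is answered with one bit
    of a keyed hash (a cryptographic pseudorandom function) rather than a
    truly random table."""
    def __init__(self, key=None):
        self.key = key if key is not None else os.urandom(32)

    def query(self, x: bytes) -> int:
        digest = hmac.new(self.key, x, hashlib.sha256).digest()
        return digest[0] & 1     # one pseudorandom bit per query

oracle = PseudorandomOracle()
print([oracle.query(i.to_bytes(4, "big")) for i in range(8)])
```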

  • "We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP", I accepted mmc's answer because I thought "recursive Fourier sampling" is an example of this. Regarding oracles and the real world, NP oracles are not uncomputable, they are just slow to compute, so you can realize them in real world. – Ron Maimon Aug 21 '12 at 15:53
  • Recursive Fourier Sampling is an oracle problem; we don't know how to realize it in the non-oracle setting. (Also, it only gives an n vs. n^(log n) separation; if you want an n vs. exp(n) oracle separation check out my BQP vs. PH paper.) And yes, most of the oracles we talk about are computable, but if they're exponentially slow, then simulating them might negate the complexity separation that was our original goal. – Scott Aaronson Aug 21 '12 at 15:59
13

There is no definitive answer, since no problem is known to be inside PSPACE and outside P. But recursive Fourier sampling is conjectured to be outside MA (the probabilistic generalization of NP) and has an efficient quantum algorithm. Check page 3 of this survey by Vazirani for more details.

mmc
13

To add to mmc's response, it is currently generally suspected that NP and BQP are incomparable: that is, that neither is contained in the other. As usual for complexity theory, the evidence is circumstantial; and the suspicion here is orders of magnitude less intense (if we pretend that strength of suspicion is measurable) than for the general hypothesis that P ≠ NP.

Specifically: as Aaronson and Arkhipov showed somewhat recently, there are problems in BQP which, if they were contained in NP, would imply that the polynomial hierarchy collapses to the third level. Restricting myself to conveying the significance of this complexity theorist jargon: any time they talk about the "polynomial hierarchy collapsing" to any level, they mean something which they would regard as (a) quite implausible, and consequently (b) disastrous to their understanding of complexity, on the level of the transition from Newtonian mechanics to quantum mechanics, i.e. a revolution of comprehension to be informally anticipated perhaps no more frequently than once every century or so. (The ultimate crisis, a total "collapse" of this "hierarchy", to the zeroth level, would be precisely the result P = NP.)

  • +1: The paper you link is great, thanks. BTW: saying how implausible people find something isn't evidence without an argument: one should just make up a simple nonrigorous argument to explain why stuff is hard. It's easy for NP and factoring, but for the higher levels of the polynomial hierarchy, I never tried. – Ron Maimon Aug 21 '12 at 02:30
  • Ron, if you do try, I predict you'll be able to find a "simple nonrigorous argument" by which to convince yourself that the polynomial hierarchy should indeed be infinite! (To calibrate, I'm much more confident of that than I am that factoring is classically hard.) Just take whatever intuition you've already used to convince yourself that P!=NP, and try extending it to convince yourself that NP!=coNP. Then try convincing yourself that P^NP != NP^NP. Then conclude, by "physicist induction", that ALL these classes should be distinct! :-) – Scott Aaronson Aug 21 '12 at 08:46
  • @RonMaimon: I agree with you on the implausibility front. I prefer actual proofs, personally. Of course, the proofs (if they exist) are nevertheless expected to be difficult to find, because no-one's succeeded yet. I'm really just representing the sociopolitical import of those claims. – Niel de Beaudrap Aug 21 '12 at 12:30
  • @ScottAaronson: The intuition I use (which conceivably might be made rigorous in some way) is based on the minimum number of waste-bits during a reversible computation, and when you introduce an NP oracle, as in higher levels of the hierarchy, you have to compute the oracle value and those waste bits are not counted in a simple way, so I can't do the heuristic immediately. It's probably simple to fix. This heuristic is better than the heuristics I see in the literature; I tried to make it a proof once, but it's hard to prove minimality of waste-bits in a reversible implementation. – Ron Maimon Aug 21 '12 at 16:04
  • ... to convince yourself factoring is hard, consider the minimum number of bits in a reversible computation implementation of multiply. Then a full search over these bits is required for sure, and there are enough waste-bits to tell you that the problem is hard. – Ron Maimon Aug 21 '12 at 16:06
  • Ron, I don't quite understand your argument for why factoring should be hard, but it seems like it can't possibly work. For how does your argument deal with the existence of algorithms like the Number Field Sieve, which classically factors an n-bit integer using ~exp(n^{1/3}) steps, still exponential but much much faster than a "full search"? Note that, like any algorithm, the Number Field Sieve can be implemented reversibly with only a constant-factor slowdown. – Scott Aaronson Aug 21 '12 at 17:35
  • @RonMaimon: can you outline how your waste-bits analysis would proceed with an efficiently solvable problem, such as 2-CNF-SAT? – Niel de Beaudrap Aug 21 '12 at 18:16
  • @ScottAaronson: The number of waste bits is not that large for factoring, it doesn't scale linearly with the number of bits, you need to know the information loss in multiplication. I forget what the right scaling is, I did this years ago, but I remember that it comes out hard, but not fully exponentially hard. I could reproduce it in a bit, but I haven't thought about it in a while. – Ron Maimon Aug 21 '12 at 20:16
  • @NieldeBeaudrap: The idea is that the forward computation of 2-sat doesn't need many waste bits, you can implement it with a number of waste bits only scaling as the log, and you can see this from the solution of the backward problem. I honestly don't remember the details, I did it a long time ago. – Ron Maimon Aug 21 '12 at 20:20
  • This sounds either way too good to be true -- like your "method of counting waste bits" will revolutionize theoretical computer science, by giving us at least heuristic answers to all the great unsolved problems -- or else like you simply have some way to map the best known conventional algorithms into this framework. So yes, details please! (Since it's a bit off-topic, go ahead and post them somewhere else, or email me and Niel.) – Scott Aaronson Aug 21 '12 at 21:42
  • @RonMaimon: ditto each sentence of Scott's previous comment. – Niel de Beaudrap Aug 21 '12 at 22:40
  • @Ron: Since I've learned a lot from reading your physics.SE posts, I find it sad that you'd react angrily to what, at least on my part (and I imagine on Niel's), was a genuine request for explanation about something that frankly sounds incredible to people who work in this field. (The history of CS is rife with wrong claims about which algorithms were "obviously unimprovable" on heuristic grounds!) Since you wanted questions, here are a few: does your heuristic method tell you what the true complexity of the graph isomorphism problem should be? How about matrix multiplication? – Scott Aaronson Aug 22 '12 at 01:09
  • @RonMaimon: as to junk arguments, I'm just telling you how (other) people talk about things: I'm not active nor expert in the polynomial hierarchy. I'm puzzled that you should be so angry at other people's intuitions (not yours, nor incidentally mine), when you clearly place such store by your own means of generating them that you dismiss my counterpoints as misrepresentation. Is there any mode that we can communicate where I can avoid brushing you the wrong way, in those instances where we both have something to say where "knowledge" gives way to "ideation"? – Niel de Beaudrap Aug 22 '12 at 02:18
  • @ScottAaronson: I erased my ridiculous comment, and I am truly sorry. I had a knee jerk borderline psychotic paranoid reaction. I might be full of crap on this, I don't remember the argument very well, and I never applied it to specific problems other than factoring. The idea is to look at the number of waste-bits in a reversible implementation of the only NP problem I cared about--- figure out the initial memory state of a universal computer given the instruction and the final memory state. This obviously is the granddaddy of all other NP-complete problems. – Ron Maimon Aug 22 '12 at 02:44
  • @NieldeBeaudrap: I apologize to you too, my reaction was unacceptable, and I deleted it. (continued) "figure out a starting state from end-state" is obviously NP-complete on an irreversible computer. It is trivial on a reversible computer: you just run the computer in reverse. So the idea was that you keep track of the junk bits that are thrown away for the irreversible computation, and arrange to minimize these. This gives you the entropy of the problem going forward. The intuition I had was that the optimal reversing entropy is the log of the size of the search space when going backwards. – Ron Maimon Aug 22 '12 at 02:52
  • ... I supposed this was everyone's intuition in the field regarding why NP complete problems were hard. I can't say anything about decision problems directly, because I was thinking about taking an initial state to a final state, not to a bit. There is a major issue with making the argument in that you might need to consider many copies of the problem computed reversibly in parallel all at once, and sharing waste bits, so as to minimize the entropy production of the computation, the same way Shannon's entropy is only found on copies. It's really half-baked, hence my defensive psychotic reply. – Ron Maimon Aug 22 '12 at 02:56
  • Thanks; apology gratefully accepted! I've been there, man: when I was an undergrad, I also thought I should ignore whatever had been written about P vs. NP, and figure out the "right" way to think about complexity theory from scratch. But repeated experiences following stupid dead ends, reinventing the wheel, etc. finally drove home that the people who'd thought about these things before were not all fools. Incidentally, the funny thing about NP-complete problems is of course that, by definition, every NP-complete problem gets to be the "granddaddy" of all the other ones! – Scott Aaronson Aug 22 '12 at 03:04
  • @ScottAaronson: Unless you had the exact same idea that I had regarding reversible waste-bits, you haven't been there, you are making an analogy. Reinvention is important, but this is not an example. It is not in the spirit of the idea that "every NP complete problem gets to be the granddaddy", because they are only so because Cook showed they are equivalent to the problem I gave. I don't think the people studying this are fools, it's just completely clear that they have absolutely not the slightest hint of an idea about how to prove anything, not even a heuristic argument. So I made one up. – Ron Maimon Aug 22 '12 at 03:09
  • @ScottAaronson: Also, the field essentially has one real idea--- relativize to an oracle--- and this is how you make money and get positions in the field. This means that most papers are buried in mind-numbing brain-destroying jargon, and impenetrable rigor, and avoid simple CS-style algorithm descriptions. Most papers I see are rewrites of one approach in a thousand slightly different ways (there are exceptions of course). I find it hard to read this literature because of this; it's like reading logic, which also has a paucity of ideas compared to papers. – Ron Maimon Aug 22 '12 at 03:14
  • First: I'm not addressing your idea because I don't understand it. Why should we expect that bounding the number of waste bits in a reversible computation (rather than some other quantity, like number of gates, or number of memory bits in an irreversible computation) would give any particular insight? Write up something more detailed, and I'll be happy to read it and form an opinion. – Scott Aaronson Aug 22 '12 at 03:16
  • @ScottAaronson: Because you can reverse the computation starting with any waste bits by running the computer backwards, and only if you guessed the right final value of the waste-bits, you end up zeroing them all out when you are done reversing. If you try to use as few waste bits as possible, I felt you should be maximally compressing them as you compute, then they end up effectively random (otherwise you could compress them further), so the remaining waste-bit number is the minimal hard-to-guess part of the forward computation. – Ron Maimon Aug 22 '12 at 03:23
  • Second: I'd say your assessment of the state of complexity theory is fairly accurate circa 1975. Today we do have non-relativizing results, from IP=PSPACE and the PCP Theorem to Williams' NEXP vs. ACC breakthrough last year to Mulmuley's GCT program. Are you familiar with these? They're all still a hell of a long way from P!=NP, but that's the wrong metric: they pretty obviously use nontrivial new ideas to answer hard complexity questions that people couldn't answer before. Dismissing them is like dismissing string theory because it hasn't brought us closer to explaining the muon mass. – Scott Aaronson Aug 22 '12 at 03:28
  • (In a similar vein, I find HEP papers full of mind-numbing brain-destroying jargon and impenetrable handwaving! Must not be too much worth understanding there...) – Scott Aaronson Aug 22 '12 at 03:30
  • @ScottAaronson: I don't know these, thank you for pointing them out! I am happy to learn, and I hope that the ideas are good. The analogy with HEP is not really right, the physicists generally bend over backwards to eliminate unnecessary jargon, and speak as much as possible in homey metaphors. There is a political cabal in physics that will smack you if you don't say things as clearly as you possibly can, it's very nice, and I wish other fields had it. – Ron Maimon Aug 22 '12 at 03:31
  • Let me see if I understand your idea: first you fix a "universal" reversible circuit C? (Otherwise, how do you know which reversible computation to run backwards, once you've guessed the final values of the waste bits?) Then given (say) a composite integer N and its prime factors p,q, you search for a final value b of the waste bits, such that C^{-1}(p,q,b)=(N,a) for some other string a (which you can think of as the "instructions" to C)? Of course, even after you've found such an a, there's no guarantee it will work for some other composite integer N'. – Scott Aaronson Aug 22 '12 at 03:40
  • It's amazing how strongly people's perception of "jargon-ness" can just reflect their background. I'd heard about how important AdS/CFT was, so I went through the papers on it, hoping to learn the new conceptual insights about spacetime in quantum gravity ... and instead found technical constructions involving stacks of D3-branes. Which CS theory papers are you reading? Whichever they are, I can probably point you to better ones. In the proceedings of the major CS theory conferences (STOC, FOCS, CCC, ...), the first few pages of every paper are just history, motivation, high-level ideas... – Scott Aaronson Aug 22 '12 at 03:48
  • @ScottAaronson: Not exactly--- you start with a reversible computer, and input (I,p,q,0) where 0 is a string of zeroed-out bits, I are the instructions for multiply (which you don't touch), and p and q are the inputs. Then you run it forward, and you get (I,pq,J) where J is junk. You now ignore J and you get the output of the regular machine: (I,pq). If you just append guessed bits to the answer and reverse the computation, you get (I,p',q',J'); if you guessed right, J' is 0, as you initialized the machine originally. If you make the least wasteful algorithm (perhaps on copies) I felt you need to search J. (A toy sketch of this forward/backward bookkeeping is given at the end of this thread.) – Ron Maimon Aug 22 '12 at 05:44
  • @ScottAaronson: Regarding jargon and physics, the jargon-free holography insights are better found in 't Hooft's papers in Nuclear Physics B in the period 1985-1990, but they contain a few inaccuracies. The Susskind papers from '90-96 also contain these insights. The Maldacena paper builds somewhat on previous string and supergravity papers, and is less accessible. – Ron Maimon Aug 22 '12 at 05:49
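
To make the forward/backward bookkeeping in the last few comments concrete, here is a toy sketch (my own illustration, and a drastic simplification: the XOR-into-an-ancilla map stands in for a real reversible multiplier, and the "junk" is literally just the factor pair, so nothing here supports or refutes the waste-bit heuristic itself). The sketch runs the reversible step forward from a zeroed ancilla, publishes only the product, and then recovers the input by guessing the discarded junk and accepting only the guess that returns the ancilla to zero.

```python
from itertools import product

# Reversible "compute into a zeroed register": (p, q, a) -> (p, q, a ^ (p*q)).
# The map is its own inverse, so running it twice restores the input exactly.
def step(p, q, a):
    return p, q, a ^ (p * q)

def forward(p, q):
    # start with a zeroed ancilla, as in the comment's (I, p, q, 0)
    return step(p, q, 0)

def invert_by_guessing_junk(n, bits=4):
    """Given only the published product n, guess the discarded junk (here
    literally the factor pair), run the step backwards, and keep only the
    guess that returns the ancilla to the all-zero state it started in."""
    for p, q in product(range(2, 2 ** bits), repeat=2):
        _, _, ancilla = step(p, q, n)     # backward run with guessed junk
        if ancilla == 0:                  # only the right guess zeroes it
            return p, q
    return None

print(forward(3, 5))                 # (3, 5, 15)
print(invert_by_guessing_junk(15))   # (3, 5)
```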