32

Alan Turing proposed a model for a machine (the Turing Machine, TM) which computes (numbers, functions, etc.) and proved the Halting Theorem.

A TM is an abstract concept of a machine (or engine if you like). The Halting Theorem is an impossibility result. A Carnot Engine (CE) is an abstract concept of a heat engine and Carnot proved the Carnot Theorem, another impossibility result related to thermodynamic entropy.

Given that a TM is physically realizable (at least as much as a CE, or maybe not?), is there a mapping, representation, or "isomorphism" between a TM and a CE that would allow one to unify these results and, in addition, connect them to entropy?

There are of course formulations of the TM and the Halting Theorem in terms of algorithmic information theory (e.g. Chaitin, Kolmogorov, etc.) and entropy in that context. This question asks about the more physical concept of entropy (if algorithmic entropy arises in the course of a potential answer, that is fine, but it is not exactly what the question asks).

One can also check another question on physics.SE which relates quantum uncertainty to the 2nd law of thermodynamics. See also: an algebraic characterization of entropy, an algorithmic characterization of entropy, and a review of and connections between various formulations of entropy.

Nikos M.
  • there is one sense in which the concepts delineated are exactly opposite: the laws of thermodynamics about the rise of entropy rule out a perpetual motion machine, and a non-halting machine is a perpetual motion machine. – vzn May 25 '14 at 21:17
  • yep i see, re-casting the no-halting condition as a perpetuum mobile (of the 2nd kind?) is exactly in the spirit of the question, but is this what the halting theorem says? It states that we cannot, in general, decide whether a machine halts or not, due to "circularity". nice – Nikos M. May 26 '14 at 14:17
  • A proposal to add "thermodynamics" and/or "thermodynamics-computation" as new tags on CS.SE? i am not sure if i can do it by myself (probably), but let's hear other opinions – Nikos M. May 26 '14 at 15:04

7 Answers

13

I am not at all an expert in this area, but I believe you will be interested in reversible computing. This involves, among other things, the study of the relationship between processes that are physically reversible and processes that are logically reversible. I think it would be fair to say that the "founders" of the field were/are Rolf Landauer and Charles H. Bennett (both of IBM Research, I think).

It touches on quantum computing and quantum information theory, but also examines questions like "what are the limits of computation in terms of time, space and energy?" It is known (if I remember correctly) that you can make the energy required to perform a reversible calculation arbitrarily small by making it take an arbitrarily long time. That is, the energy $\times$ time (= action) required to perform a reversible computation can be made a constant. This is not the case for non-reversible computations.
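
As a rough numerical sketch of the scale involved (assuming only Landauer's principle itself), the minimum heat that must be dissipated to erase one bit irreversibly is $k_B T \ln 2$; reversible operations are, in principle, not subject to this per-bit cost:

```python
# Landauer's bound: minimum heat dissipated by irreversibly erasing one bit.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

landauer_limit = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T} K: {landauer_limit:.2e} J")
# ~2.87e-21 J per bit; reversible logic avoids paying this cost per operation.
```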

Many of the people studying this area are also working on quantum computing and digital physics (the idea that the universe is a big quantum cellular automaton). The researchers' names that come to mind are Ed Fredkin, Tommaso Toffoli and Norm Margolus.

These questions are absolutely on topic for computer science, not just for the theory (which includes cool math as well as cool physics) but for engineers who want to know the ultimate limits of computation. Is there a minimum volume or energy required to store a bit of information? The action required to perform a reversible computation may be constant, but are there limits on what that constant is? This is critical knowledge for engineers trying to push the boundaries of what is possible.

Wandering Logic
  • Yes, there is a relation to the Thermodynamics of Computation (Bennett, Landauer et al.), but i am asking more in relation to the Halting Theorem, and/or a mapping between TM and CE (as in the question), but nice answer – Nikos M. May 24 '14 at 14:32
  • Ah, you're right. I'm downvoting my answer. The comments under your question saying that it was off topic made me see red, and I was mainly responding to that. In answer to your real question: look at the Church-Turing Thesis. Assuming you believe that, and also that mathematics can model anything in nature, then the Halting Problem is a physical impossibility theorem. – Wandering Logic May 24 '14 at 19:25
  • i think the Church-Turing thesis, that physical computation is effective computation, might indeed be necessary; take a look at this paper also – Nikos M. May 25 '14 at 00:40
6

I'm not familiar with Carnot's Theorem, except what I've just read on Wikipedia, but even from that cursory introduction there is a connection in the structure of the proofs, and that may be interesting to you, as it's a proof technique that is applicable in many domains.

They're both proofs by contradiction in which, to show that no thing in a given class has some property, you suppose that some instance actually does have that property, and then show that a contradiction follows.

The Halting Problem is interesting in that the contradiction arises from a self-interaction involving the particular instance (which is a machine M that can determine whether an arbitrary machine will halt on a given input). In particular, you construct a new machine that includes M as a component, and then feed the new machine to M.
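
A minimal sketch of that diagonal construction (in Python-flavoured pseudocode; `halts` is the decider M that is assumed, for contradiction, to exist):

```python
def halts(program, program_input):
    """The assumed total decider M for the halting problem.
    It cannot actually exist; it is only posited for the contradiction."""
    raise NotImplementedError

def diagonal(program):
    """The new machine built from M: it loops forever exactly when
    `program` is predicted to halt on its own description."""
    if halts(program, program):
        while True:   # loop forever
            pass
    return            # otherwise, halt immediately

# Feeding the new machine to M (via itself) gives the contradiction:
# diagonal(diagonal) halts  iff  halts(diagonal, diagonal) returns False,
# diagonal(diagonal) loops  iff  halts(diagonal, diagonal) returns True.
```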

Someone with more knowledge about Carnot's Theorem could elaborate on it (which I'm not qualified to do), but it appears that the contradiction arises from the type of heat engine that you could build if you had an instance with the property at hand.

So both cases involve a construction of the following form:

  • Suppose some X has property P.
    • From X, build related Y.
    • The relationships between X and Y are contradictory.
  • Therefore, no X has property P.

There does appear to be a difference, though, in that the contradiction in the Halting Theorem case is a pure logical contradiction, and would be contradictory in any setting of classical logic. The Carnot Theorem, as I understand it, is only contradictory with respect to the second law of thermodynamics. From a logical perspective, that's an axiom, so if you took a different axiomatization in which the second law of thermodynamics didn't hold, Carnot's Theorem wouldn't be a theorem, because the contradiction wouldn't exist. (What a formalization of thermodynamics would look like without the second law is the sort of question that led geometers to non-Euclidean geometry.)

  • this paper provides much in the direction you mention, imo. Also, what i think is very relevant is the circularity (or diagonalization) of the arguments. There are research directions which connect irreversible logical transforms with irreversible thermodynamic processes (eg Landauer's Principle, and objections thereto). There are objections to some statements of the 2nd Law, but one can find formulations that still hold (eg Prigogine's work) – Nikos M. May 25 '14 at 00:49
  • For how this connection might come about see also comments on previous answer (only for plausibility purposes) – Nikos M. May 25 '14 at 00:50
  • Regarding other formulations of the 2nd Law (even more general, and for non-equilibrium processes), you can check Caratheodory's statement in terms of phase space and geometry, Prigogine's work on non-equilibrium systems, and the Hatsopoulos-Gyftopoulos-Beretta formulation (with further connections to quantum mechanics) – Nikos M. May 25 '14 at 01:16
  • In a sense there are as many facets of entropy as there are facets of Goedel's theorem(s) (as in Turing's halting theorem, Tarski's undefinability theorem, Rosser's theorem, Chaitin's incompleteness theorem); there is even a category-theoretic proof of a "general Goedel Theorem" encompassing all the previous ones, which is based on fixed points – Nikos M. May 25 '14 at 01:39
  • Even if a connection between the halting problem and thermodynamic entropy is achieved only in the form "if and when the 2nd Law holds, then ...", it is still good as far as this question goes (this relates to the objection that the 2nd Law might be like the 5th postulate on parallels in Euclidean geometry) – Nikos M. May 26 '14 at 13:50
4

IANAPhysicist but I don't see any connection. Turing machines are objects of pure mathematics and the undecidability of the halting problem is independent of any physical realization of anything.

David Richerby
  • 2nd-Law impossibility results have much in common with (mathematical) logic problems and circularities; maybe there is a connection there? – Nikos M. May 24 '14 at 14:28
  • You'd have to give more detail: as I said, I'm not a physicist. But I don't see how physical laws can have any impact on a construct that exists independently of physical reality. – David Richerby May 24 '14 at 14:54
  • you have a point there, i can give many epistemological reasons why this is very plausible (eg the mathematics we do depends on the world we live in, à la Einstein), but i want something beyond that; if i had a ready answer i would probably publish a paper :) – Nikos M. May 24 '14 at 15:05
  • additionally i do have some (vague at this point) ideas about how this connection may come about, eg maybe having a cascade of a TM with a CE, where one would violate the Halting Theorem and the other would violate the 2nd Law of TD, or other variations thereof, but of course someone might have even better ones – Nikos M. May 24 '14 at 15:11
  • maybe this link will provide further understanding of the concepts, history and interplay between information/computation/mathematics/physics etc.. – Nikos M. May 24 '14 at 16:13
  • +1, it's a fair pov that many in CS would agree with. however note a TM is a machine and has semi-physical concepts associated with it such as time and space etcetera... – vzn May 24 '14 at 21:15
  • @vzn We use the word "time" for the number of steps the machine has executed and "space" for the number of tape cells it has used, but those words were chosen to appeal to our physical intuition as physical beings. But "time" is just an index into a sequence of configurations and space is just an index into a sequence of symbols. For example, consider a Turing machine where the head just whizzes off to the right. It uses infinite "time" and infinite "space" but you can figure that out in a finite amount of real time and real space – David Richerby May 24 '14 at 23:23
  • The extremely general nature of both constructs, TM and CE (eg a TM can simulate and compute every computable function/process and a CE can be used as a model of very many physical systems and processes), may in fact imply that this connection (maybe by a clever arrangement) is easier than initially thought; on the other hand it may just be wrong, although i don't think so – Nikos M. May 25 '14 at 01:07
  • Sure, but the fact that we consider Turing machines to be interesting objects may have something to do with physics. – Gilles 'SO- stop being evil' May 25 '14 at 19:19
2

this diverse multiple-topic question unfortunately does not have a simple/easy answer and touches on active areas of TCS research. however, it is a rare question asking about a link between physics & TCS that has interested me over the years. there are a few different directions to go on this. the basic answer is that it's an "open question", but with some active/modern research touching on it and hinting at connections.

  • there are some surprising/deep undecidable problems from advanced physics, for example from dynamical systems. however, i have not seen this connected to entropy per se, but entropy is associated with all physical systems (eg one can see this in chemistry), so there must at least be an indirect link.

  • entropy indeed shows up in CS, but more in the form of information theory and coding theory. the birth of coding theory involved Shannon's definition/analysis of the entropy associated with communication codes (a tiny numerical sketch of that definition appears after this list). try this great online ref, Entropy & Information Theory by Gray

  • entropy is also sometimes associated with measuring the randomness of PRNGs. there is a connection of complexity-class separations (eg P=?NP) to PRNGs in the famous "Natural Proofs" paper by Razborov/Rudich. there is continuing research on this subject.

  • you mention thermodynamics and its connection to TCS. there is a deep connection between magnetization in spin glasses in physics and NP-complete problems studied at the SAT phase transition. there (again) the physical system has an entropy associated with it, but it has probably been studied more in a physics context than a TCS context.
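
As referenced in the second bullet above, a tiny numerical sketch of Shannon's entropy, $H(X) = -\sum_i p_i \log_2 p_i$, the quantity behind source-coding bounds:

```python
# Shannon entropy of a discrete distribution, in bits.
import math

def shannon_entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(shannon_entropy([0.9, 0.1]))   # biased coin: ~0.469 bits
print(shannon_entropy([0.25] * 4))   # uniform over 4 symbols: 2.0 bits
```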

vzn
  • can expand on some of this at length in [chat] – vzn May 24 '14 at 20:59
  • see also CS defn of entropy stackoverflow – vzn May 24 '14 at 21:12
  • it is interesting to be able to think "out of the box" (at least sometimes). have you looked into Bennett's work on the Thermodynamics of Computation? The motivation behind the question is to show whether the halting theorem can be seen as a consequence of thermodynamics (with some appropriate model or representation, at least for some cases). i think it would be really interesting if this could be settled either way – Nikos M. May 25 '14 at 00:59
  • maybe you have already seen these papers, http://philsci-archive.pitt.edu/313/1/engtot.pdf and http://www.cc.gatech.edu/computing/nano/documents/Bennett%20-%20The%20Thermodynamics%20Of%20Computation.pdf – Nikos M. May 25 '14 at 01:02
  • Most concepts of "entropy" as used in computer science relate either to Shannon's information theory or to Kolmogorov/Chaitin/Solomonoff algorithmic information theory; this is already mentioned in the question and it is very important. The only connection to thermodynamic entropy that i am aware of (which can be related to information entropy) is the thermodynamics of computation. The question is related to the thermodynamics of computation, but in another way – Nikos M. May 25 '14 at 01:49
  • There is criticism of using specific physical models to connect to the thermodynamics of computation (eg by Norton et al), however there are formulations which use a more general setting; see here and objections – Nikos M. May 25 '14 at 01:58
  • which physical undecidable problems do you refer to? – Nikos M. May 25 '14 at 02:48
  • @Nikos there do not seem to be surveys of undecidable problems in physics, i haven't seen one yet; the problems exist but are scattered. for some leads try this tcs.se question on computing with dynamical systems – vzn May 25 '14 at 21:10
  • The paper of Bennett on the thermodynamics of computation gives many dynamical systems which are effectively equivalent to TMs (one can simulate the other), eg billiard systems, Brownian computers, etc. So since the Halting Theorem holds for TMs, a variation (with an appropriate interpretation) holds for these dynamical systems as well – Nikos M. May 26 '14 at 13:58
  • Regarding the link on dynamical systems as computers, an interesting connection is the IFS (iterated function systems) and RFS (recursive function systems) of Barnsley, which (can) compute discrete problems embedded into a continuous domain (by continuously varying a parameter, a fractal or a grammar can be implemented), also known as the Collage Theorem – Nikos M. May 26 '14 at 14:02
  • There are problems in Hamiltonian systems, eg whether a Hamiltonian system is integrable (first posed by Poincaré, if i am not mistaken), which are still open, although KAM theory (and its extensions) gives useful results – Nikos M. May 26 '14 at 14:07
1

There is a simple thought problem that is sometimes used as an introduction to non-conventional computing paradigms:

You have two light bulbs and their respective on-off switches. Someone turns both lights on and then off, one after the other. How do you determine which one was turned off first and which one was turned off last? Determine the minimal number of times you will need to turn the lights on to decide this problem.

Most computer scientists usually try to find some Boolean-logic-based solution. The answer (or at least one answer) is: touch the light bulbs and see which one is hotter.

Heat-based paradigms exist in computer science: simulated annealing is a well-known algorithm (D-Wave's quantum computer is built on the quantum counterpart of that algorithm, quantum annealing).

Now is there a relation with the Halting problem?

The classic work of Chaitin and Calude on the Halting problem, via the concept of Omega numbers, can be linked to the probabilistic formulation of the Halting problem. It is the most recent treatment of the problem that I can think of... and it has no clear relation to (thermodynamic) entropy. Now, if information entropy (in the sense of Shannon) is good enough for you, the Omega number encodes the Halting problem in the most succinct way, in the sense of a Shannon bound.

In short, an Omega number is the probability that a random program halts. Knowing the constant would allow the enumeration of all valid mathematical statements (truths, axioms, etc.), and it is uncomputable. Calude computed a version of Omega by replacing the uniform probability measure with a measure inversely proportional to a random program's length and by using prefix-free encodings. So we could speak of Chaitin's Omega and Calude's Omega.
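
For reference, the standard definition behind this (for a prefix-free universal machine $U$) is

$$\Omega_U \;=\; \sum_{p\,:\,U(p)\downarrow} 2^{-|p|},$$

where the sum ranges over all programs $p$ on which $U$ halts and $|p|$ is the length of $p$ in bits; the prefix-free encoding guarantees, via Kraft's inequality, that the sum converges to a real number strictly between 0 and 1.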

user13675
  • Nice answer; the part about the heat of the light bulbs is often used as the link between information entropy and thermodynamic entropy (in a sense contrary to Jaynes' view of entropy as subjective uncertainty). my own line of thought would be to base the reasoning on the circularity of both constructs and, by a (clever?) cascade of one with the other, create an implication (at least in one direction) – Nikos M. Jun 08 '14 at 11:11
  • A similar reasoning is used with batteries (instead of light bulbs) to determine which batteries are discharged... – Nikos M. Jun 08 '14 at 11:13
0

Yes! Strangely enough, I have thought about this. Here is the idea:

First Step

Model Maxwell's Demon as a computer program. Then ask: how does the Demon come to know a particle's speed and position before opening the door for the selection?

Suppose that the demon can't measure the speed at which particles hit the door. Why? Because that would change the particles' speeds, so the demon has to figure it out before opening the door, without looking and without measuring. To be fair, we will let the demon know the rules of the game in advance, i.e. feed the demon the laws of motion, the interactions of the particles, and the initial conditions: enough for a physical/dynamical model.

Second Step

Now model the gas of particles as a computer program as well, one that runs, for every particle, the same code given to the demon. The gas is thus computing a result from its initial conditions, and the Demon does not know that result until the computation halts (if it ever does): namely, "a particle with the right speed is at the door". The yes/no question we are asking the system is "does some particle have the right position and enough speed?" If so, the door could be opened and the fast particle can pass into the high-temperature side of the room, setting new initial conditions (will those successive problems have an answer, or will they run forever?).

There will be a time when there is no particle with enough speed to cross the boundary, so there will be a time when the code runs forever (does not halt), for almost any given threshold.

The Demon wants to know the result computed by the gas, but that result is, in a sense, only potentially contained in the source code of the particles' laws plus the initial conditions; of course, we need to run that program to know it. If the Demon runs the same program, waiting for the right speed to appear at the output, the program could halt or it could run forever (but we also suppose that the demon has no more computational power than the gas, so it will not be able to decide the door opening in time).

The Demon could try to figure out the program's output (or whether it will halt) by inspecting the source and the inputs without running it, but that is like trying to solve the Halting Problem. Why? Because the Demon does not know in advance which laws and initial conditions it will be fed, so it must be prepared to solve the question for any set of laws and initial conditions, and we know that is not possible in general; it would need an oracle. If it could do it, that would be enough to build a demon that generates energy from nothing (even knowing the laws and the initial conditions, both of which are already hard enough to know).
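
To make the shape of this argument concrete, here is a minimal sketch (all names are illustrative, and this is an analogy rather than a formal reduction): the demon would need a total decision procedure for whether an arbitrary simulation of the gas ever reaches the "fast particle at the door" event, which is exactly the form of a halting decider.

```python
def simulate_gas(step, initial_state, halting_event, max_steps=None):
    """Run the gas 'program': apply the (arbitrary) dynamics step by step
    until the halting event occurs -- a fast enough particle at the door.
    With max_steps=None this may loop forever, like a non-halting TM."""
    state = initial_state
    steps = 0
    while max_steps is None or steps < max_steps:
        if halting_event(state):
            return state          # the 'output': the door may now be opened
        state = step(state)       # one step of the given laws of motion
        steps += 1
    return None                   # gave up after a finite budget

def demon_decides(step, initial_state, halting_event):
    """What the demon would need: a TOTAL procedure that, for arbitrary
    dynamics and initial conditions, answers whether the halting event
    ever occurs. No such general procedure exists, by the undecidability
    of the halting problem."""
    raise NotImplementedError("no general decider exists")
```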

This thought experiment suggests how a reduction in entropy, by means of computers, could in some way be bounded by the Halting Problem, viewed as the problem of anticipating outcomes in general.

(Sometimes all limits seem to be the same limit...)

More about Particle Laws

The particles' laws are not the main issue of this thought experiment; those laws could be quantum or classical. But we must take into account the complexity of the laws and of the initial conditions: the complexity of the arrangement of particles is not bounded, and it could hide a lot of added complexity (as an extreme example of initial conditions, you could even insert a whole computer firing particles according to an internal source code, and give that code to the demon).

Hernan_eche
  • I don't understand the link to the halting problem. First, you seem to have redefined what it means for a machine to halt. Second, you only seem to have one program (the gas particle simulator). It's perfectly possible to prove that one fixed program does or does not halt, without violating the undecidability of the general halting problem. – David Richerby Jun 18 '14 at 16:36
  • About halting: I didn't redefine halting. Here the program halts, as always, when it finishes computing and you get an output; the output is defined as the exact moment that a particle with the right speed hits the door, and you could build a door that detects it, so it marks when the program halts (then the program runs again from these initial conditions for another output). The Demon wants to know when it will halt, but it can't know even whether it will halt. – Hernan_eche Jun 18 '14 at 17:46
  • Turing machines cannot decide the halting problem for Turing machines. It seems that you've redefined the halting problem as, "Does one of these gas molecules ever do X?", which is a completely different problem from "Does this Turing machine halt when started with this input?" Turing's proof of the undecidability of the Turing machine halting problem says nothing about whether a Turing machine could compute whether some gas molecule will ever do X. – David Richerby Jun 18 '14 at 17:54
  • David's comment is correct; as it stands, this is not related directly to the halting problem. However it is an argument that follows the spirit of the question – Nikos M. Jun 18 '14 at 18:00
  • Forget about molecules: if you start saying "these molecules and their laws are an implementation of a TM", then just say a TM, so the initial conditions are inputs. Then you get the same question, "Does this TM halt when started with this input?", and the demon cannot analyse the laws algorithmically to answer it! For the second part, which you put in the first comment: yes, you would need the general halting problem, so please don't suppose you already know the particle laws! I am talking about giving those particle laws as input to a TM (the demon), asking it to decide whether it will halt or not. – Hernan_eche Jun 18 '14 at 18:00
  • If one considers a particle as a TM, then the halting problem gets another answer: it never halts, since (by Heisenberg uncertainty or thermal fluctuations) particles are in constant motion. Then the halting problem is not undecidable. Also, a particle by itself does not account for every interaction in the kinetics (since other particles take part as well). – Nikos M. Jun 18 '14 at 18:11
  • The proposed TM is not a single particle but the whole system of particles plus the door. I don't mean 'halt' as 'stop moving': the computer is the gas and the door, and instead of giving an answer on a monitor it ends by hitting a particle against the intermediate micro-door at a high enough speed, and that is the program's output. So the demon, even knowing the rules of this TM and the initial conditions, can't decide whether it will halt. Why? Because we also suppose that the demon is a TM too. So a reduction in entropy (by means of computers) could in some way be bounded by the Halting Problem – Hernan_eche Jun 18 '14 at 18:39
  • In what sense are the bouncing gas molecules a computer? What is a computation? What is an input to that computation? What is the initial state of the machine? How is that reproducible? How are you going to produce a universal machine for this model of computation? – David Richerby Jun 19 '14 at 18:51
  • @DavidRicherby One question at a time! Well, "bouncing molecules" is just a single option of initial conditions (and laws) among infinitely many others; "bouncing molecules" is just a particular state of matter. I say gas in general, but you can mix in solids too; which law or initial condition is used is not central to the thought experiment. Those initial conditions you mention are only an example input to feed the demon; in that case, the demon would have to realize (from the source) that the molecules will keep bouncing and whether a fast one will hit the door. – Hernan_eche Jun 19 '14 at 19:26
  • To see what I'm trying to say, please focus on one algorithm (the demon) trying to figure out the result of another algorithm (the matter on both sides, be it gas or whatever, in thermal equilibrium). We don't know how the matter is distributed there (it could even be the same distribution on both sides); we only know that there is a thermal equilibrium, and to take energy from there we need to know about microstates, i.e. individual particles' speeds and positions. Please focus on the part with two algorithms: one running, the other trying to predict the result knowing no more than the source and the input. – Hernan_eche Jun 19 '14 at 19:30
  • @Hernan_eche You can't focus on just one algorithm because it's perfectly possible to prove that a single algorithm terminates (example: merge sort provably terminates within $kn\log n$ steps for any input of length $n$). To link your box of gas molecules to the halting problem, you need first to demonstrate how it constitutes an algorithm at all. Then, you need to demonstrate that it constitutes an algorithm in a class of algorithms rich enough to be unable to decide its own halting problem. – David Richerby Jun 19 '14 at 19:37
  • Ok, for the first part, I think you are confusing "an already known algorithm" like "merge sort" with an "unknown algorithm", which is what the demon will receive as input. In simple words, merge sort was analysed by humans; now we know how to prove that it will terminate within that many steps, but there is no general algorithm where you input a source and it decides whether an arbitrary unknown algorithm will end! Of course, if the demon had a list of all possible algorithms that halt, that would do, but that list can't exist, exactly because of the Halting Problem; we can never be sure about a new algorithm. – Hernan_eche Jun 19 '14 at 19:42
  • Second part, about how the box of gas molecules constitutes an algorithm: by definition, because it is a thought experiment. You can think of it this way: even if there were a known algorithmic law for the particles, and you gave it as input to the demon, the demon could not solve it in general because of the Halting Problem; that is why I think this links any deterministic model of entropy with the halting problem (by the way, thanks for the comments) – Hernan_eche Jun 19 '14 at 19:46
  • Something doesn't become an algorithm just because you say "I define it to be an algorithm." – David Richerby Jun 19 '14 at 19:48
  • I don't get that; perhaps you are thinking of using the particles to do the computation itself, as if they were logic gates or memory, but that's not the model. The algorithm of the particles is about their motion and initial state. To say that a particle system can be modelled by an algorithm is nothing new: there are already a lot of computational models of particles, and those are indeed algorithms. Here I say it can be any of these models, and even more: it can be those or new ones. – Hernan_eche Jun 19 '14 at 20:04
  • As a general idea to try to link the Halting problem with a thermodynamic system, it is interesting (maybe even a path to an answer). Still, David's comments are correct; as it stands, it is not clear how this is connected to a TM or to the halting problem itself (in terms of TMs, in a way that approaches Turing's formulation) – Nikos M. Jun 21 '14 at 12:13
  • Turing's formulation of the halting problem is related to the fact that the set of halting TMs is not recursive (there are equivalent formulations), and this is based on a circular argument (which resembles the thermodynamic argument?) – Nikos M. Jun 21 '14 at 12:15
  • Presumably using a coarse-graining argument (akin to thermodynamics/statistical mechanics) may produce a connection – Nikos M. Jun 21 '14 at 12:18
  • One way to address David's comment is to show that the TM(s) used in the gedanken experiment can run an arbitrary algorithm (in a general sense), or equivalently that such a TM would have to solve the general halting problem (and not a specific one) – Nikos M. Jun 21 '14 at 12:41
  • Finally, the Chinese room and Turing test concepts can be related to this question (imo, as it relates to this specific answer) – Nikos M. Jun 21 '14 at 12:47
  • Symbolic dynamics (possibly along with coarse graining) defines a way to map a smooth/continuous space into symbolic/string sequences (suitable e.g. for TMs) – Nikos M. Jun 21 '14 at 12:52
  • One argument could go like this: given the symbolic dynamics of an arbitrary thermodynamic gas in a chamber, use these as representing or encoding both the inputs and the program of a TM; then this may more easily be related to Chaitin incompleteness than to the halting of a TM (à la Turing), although they are equivalent (interestingly, this would provide a link between algorithmic randomness and stochastic randomness!) – Nikos M. Jun 21 '14 at 13:01
  • Moderator note: comments aren't really appropriate for an extended conversation; they may be deleted at any time. I suggest that you continue your conversation (if you so wish) in chat (you can use the Computer Science room). – Gilles 'SO- stop being evil' Jun 21 '14 at 18:00
  • @Gilles, thanks for noting that, i agree with it; if needed, a chat will be created. i would prefer if these comments were left nevertheless, since they relate both to the question and to this specific answer (as it evolved) – Nikos M. Jun 22 '14 at 11:11
0

Very captivating question indeed, and we will see that your thinking IS correct.

First let's see what the second principle of thermodynamics says.

The entropy function is used in the 2nd law of thermodynamics. It stems from Carnot's theorem, which states that the processes taking place in steam engines have an efficiency lower than, or at best equal to, that of the corresponding "reversible" machine (a concept which, by the way, has seemed rather unstable over the 150 years of thermodynamics). Carnot did not coin the entropy function himself, but together with Clausius, this is roughly what they say:

As there is no perpetuum mobile, we can build a function S, called entropy, which constrains the macroscopic thermodynamic measures through a certain equation, namely S(V, T, P, etc.) = 0.

Note that this equation is nothing but the equation of a hyper-surface in the space of thermodynamic measures.
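
For reference, the quantitative content of Carnot's theorem mentioned above, in modern notation (with $T_h$ and $T_c$ the absolute temperatures of the hot and cold reservoirs, $W$ the work extracted and $Q_h$ the heat drawn from the hot reservoir):

$$\eta \;=\; \frac{W}{Q_h} \;\le\; \eta_{\text{rev}} \;=\; 1 - \frac{T_c}{T_h}.$$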

Enter Carathéodory.

Carathéodory was a Greek mathematician who worked in Germany, and like all mathematicians he wants to extract from Carnot's and Clausius's reasoning some axioms which would allow him to clarify what the second law is really about. Put bluntly, he wants to purify thermodynamics in order to know exactly what entropy is.

After listing a certain number of axioms, he manages to formulate HIS second law, which says (more or less):

In every neighbourhood of any state there are states that cannot be reached from it by an adiabatic process alone. Or, more prosaically: if you want to return, sometimes work alone is not enough; you need a bit of heat.

Now that seems VERY different from the formulation of Clausius! But in fact it is not. All Carathéodory did was change the order of the words, a bit like mathematicians played with Euclid's 5th axiom for 2,000 years and produced many different wordings of that axiom. And if you take a step back you should not be too surprised by Carathéodory's statement of the second law. In fact, Carathéodory's formulation leads to the exact same entropy function and hyper-surface equation S(V, T, P, etc.) = 0.

Think hard about Carnot's theorem. As a mathematician, you should not be too satisfied with the way Carnot simply admits that perpetuum mobiles do not exist. In fact, as a mathematician you would rather see something like this:

There is an entropy function S which constrains the macroscopic measures IF AND ONLY IF there is no perpetuum mobile.

NOW you have a theorem. And what does it say? That as long as there is no isolated mechanical system which produces an infinite amount of energy, and hence could lead you to any state you want, you will find an entropy function. An isolated mechanical system is an adiabatic process; hence Carathéodory's formulation: adiabatic processes alone cannot lead you everywhere. Sometimes you will need some heat.

So not only are we sure that Carathéodory's formulation is correct, but it is also pretty simple.

Now where do you get the impression that the second law à la Carathéodory is similar to the halting problem?

Take a step back from Carathéodory's statement. All it says is that once you have an isolated mechanical system which you stop interfering with, you cannot reach every state you might want.

Doesn't that sound PRECISELY like the halting problem? I.e. once you have written all axioms of your theory and laid down all possible transitions, there will be problems which you cannot solve. Sometimes, you will need to add more axioms.

In fact if you want to go really deep and encode Carathéodory's formulation, this will result in the same code as the halting problem with adiabatic processes instead of Turing machines, and states instead of problems.

What do you think?

NOTE: I edited my answer almost entirely so comments below won't be in line with what it contains now.

Jerome
  • "Rice states that no Turing machines can indefinitely produce a non-trivial property." That's not a paraphrase of Rice that I recognise. What do you mean? – David Richerby Aug 04 '16 at 19:24
  • What do you mean by "infinitely produce a non-trivial property"? – David Richerby Aug 05 '16 at 07:47
  • A bit twisted. Rice says that it cannot be proven that a TM implements a given function. Now, if a TM A indefinitely produces a non-trivial property (N-TP), it means it produces an N-TP for ANY entry. How can that be true in practice? Well, it seems the only way for that to be true is to consider an undefined entry e and show that A(e) has an N-TP. In turn, that would mean that we would manage to PROVE that the machine produces an N-TP. And we know that's not possible. So in effect I postulate that it is equivalent to say "A indefinitely produces an N-TP" and "I CAN SHOW that A produces an N-TP" – Jerome Aug 05 '16 at 07:55
  • "Infinitely produce a non-trivial property" means that you can throw an infinite number of distinct entries at the TM, and all the outputs will have the N-TP – Jerome Aug 05 '16 at 07:56
  • What do you mean by "throwing an entry to the TM"? Giving it some string as an input? All Turing machines can receive infinitely many distinct inputs: the input is, by definition, a string of any finite length. – David Richerby Aug 05 '16 at 08:05
  • Yes, that's the theoretical TM. And that's why no TM can produce an N-TP, and this is what I mean by "no TM can INFINITELY produce an N-TP"; it's just an emphasis. But that emphasis is really paramount in this context, as entropy imposes that we discuss REAL physical machines, hence TMs reduced to finite computers. So if you start from steam engines you really are starting from finite-memory TMs. Entropy issues then arise when you extend to infinite numbers of entries, to match Carnot's perpetuum mobiles. – Jerome Aug 05 '16 at 08:43
  • OK. I think your answer would be much clearer if you just used standard terms, instead of inventing things like "infinitely produce a non-trivial property" to mean "be able to process an infinite number of inputs." It would also help to explain what aspect of your "real" Turing machine is unable to process an infinite number of inputs. Is it that the tape is finite, for example? – David Richerby Aug 05 '16 at 09:20
  • Yes, I guess. I have never studied TMs in detail, so I am not quite sure which words can be used or not. I know, for example, that you can solve the halting problem for finite-memory TMs, so I thought that was quite a common concept. That's the TM I am referring to when I start the reasoning about TMs. – Jerome Aug 05 '16 at 09:38