
I was reading this Phys.SE answer written by user346. At the end of point 3, they say they have only made a change of canonical variables from the ADM formalism to get the Ashtekar formalism. Point 4 is then about applying the standard Dirac quantisation to this theory, and we end up with a Hilbert space of spin networks. The discretization of spacetime is obtained as a consequence, so it is not an assumption. Point 5 is about spin foams, which correspond to histories of spin networks; this seems to be just the usual path-integral formalism applied to the Hilbert space of spin networks.

My question is: did this theory quantise general relativity without any additional assumptions? It merely performed a change of variables from the metric to the connection. Why, then, is the naive path-integral quantisation of gravity non-renormalizable, while this approach suddenly works after a mere change of canonical variables?

Ryder Rude
  • @Andrew Thanks. It says that the classical limit of a QFT may be wrong because of some analogue of phase transitions. But I think Ehrenfest's theorem is bound to hold, given that we've canonically quantised? Also, phase transitions depend on a parameter (like the inverse temperature), so LQG could give the wrong classical limit for some values of the parameters but reproduce GR for others, right? – Ryder Rude Feb 07 '23 at 15:30
  • As I understand it, this isn't a problem that can be handwaved away. The quantization is done in such a way that it's not obvious how to take the classical limit, and there are no papers that show rigorously, in a completely convincing way, how to recover GR in some limit. – Andrew Feb 07 '23 at 15:32
  • @Andrew I think if they've canonically quantised, they could use Ehrenfest's theorem to recover the classical equations in terms of Ashtekar variables, and then switch back from Ashtekar variables to the metric to recover GR. This is how I understood the classical limit of quantum theories; I may be missing something. (The argument is sketched just after these comments.) – Ryder Rude Feb 07 '23 at 15:35
  • Like I said, I'm not an expert, but I don't think it is as simple as what you are saying. Section 1 of the third paper I linked above might be of interest: https://arxiv.org/abs/hep-th/0501114 – Andrew Feb 07 '23 at 15:37
  • To put it another way: if you could write up the details of your Ehrenfest-theorem argument and show that LQG reproduces GR in an appropriate limit, then, as I understand it, that would be a very important paper in the field. – Andrew Feb 07 '23 at 16:12
  • @Andrew I don't know what you're implying; it must be way beyond my level. But I'm getting the gist of it: they've constructed a Hilbert-space-ish thing in some unusual way, which makes things like Ehrenfest's theorem and minimum-uncertainty Gaussian states non-trivial to obtain. So the classical limit is much more complicated than in usual QFT. – Ryder Rude Feb 07 '23 at 16:19
  • I'm not implying anything -- I'm just saying that to date the literature doesn't have an unambiguous demonstration that LQG reproduces GR and it seems to be an open question. So if you have a way of answering this question, then it would be a major advance. But, if you think you do have a way to answer it, it's worth making sure you understand why previous attempts have failed and what is new in your approach. – Andrew Feb 07 '23 at 17:19
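
The Ehrenfest-theorem argument at issue in these comments is, schematically (notation mine, not from the thread): for a state $\psi$ and an operator $\hat{O}$ with no explicit time dependence,

$$\frac{d}{dt}\langle\hat{O}\rangle_\psi = \frac{i}{\hbar}\,\langle[\hat{H},\hat{O}]\rangle_\psi .$$

This reproduces the classical equation $\dot{O} = \{O, H\}$ only for states sharply peaked enough that $\langle f(\hat{O})\rangle_\psi \approx f(\langle\hat{O}\rangle_\psi)$ for the relevant composite operators. The gap the comments point to is that constructing such peaked semiclassical states in the spin-network Hilbert space is itself the hard, open problem.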

2 Answers


The issue is a bit more subtle:

  • Canonical quantization generally gives different results when you use different sets of classical canonical coordinates. For example, a classical-mechanical system will often produce inequivalent quantum systems when the canonical positions and momenta are non-trivial functions of the original positions and momenta. So picking a set of variables for canonical quantization is a non-trivial matter (a standard example is sketched just after this list).

  • Ashtekar variables are simply variables in which the quantization is achievable at all: the Hamiltonian constraint becomes polynomial in them (see the schematic contrast after this list). However, they achieve this by essentially allowing for complex metrics. For example, if you solve the classical equations in Ashtekar variables, there is no simple way to see whether the result corresponds to a real physical metric. Note that this is not the same as having a complex wavefunction in particle quantum mechanics; it is like allowing the particle to sit at complex points in space, which enlarges the allowed dynamics considerably. Ultimately, this causes issues in LQG as well, so I would not call this quantization entirely "unadulterated".

  • The non-renormalizability of quantum gravity refers to the perturbative expansion of the metric $g_{\mu\nu} = g_{(0)\mu\nu} + h_{\mu\nu}$, where the fundamental quantized field is $h_{\mu\nu}$, the deviation from the background metric $g_{(0)\mu\nu}$ (which is not quantized). This allows for the perturbative computation of the effective action using a path-integral approach or similar. In this procedure, non-renormalizable terms arise at the two-loop level: the renormalization would require a counter-term that scales as the Weyl curvature cubed (reproduced explicitly after this list). This remarkable, background-covariant result was obtained by van de Ven in 1992.

  • There is no simple counterpart to this in LQG (and related approaches). It is not clear how the semi-classical limit of LQG works (or whether it works at all); currently, we do not even know how to compute two-point functions on semi-classical backgrounds. The "naive" UV-divergent two-loop expansion around a classical background described by van de Ven should have a counterpart in LQG, and this counterpart should provide a clear explanation of where exactly the effective counter-term curing the UV divergence appears in the workings of LQG. Unfortunately, as far as I know, such an explanation is simply not available at the moment.
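
A standard example of the coordinate dependence in the first point: for the harmonic oscillator, canonical quantization in Cartesian variables $(x, p)$ gives $E_n = \hbar\omega\,(n + \tfrac{1}{2})$, while naive canonical quantization in action-angle variables $(\theta, J)$, where classically $H = \omega J$, gives $E_n = n\hbar\omega$ and misses the zero-point energy, even though $(x, p) \to (\theta, J)$ is a perfectly good canonical transformation.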
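
For the second point, the contrast is, schematically and with constants suppressed (my notation, not taken from the original answer): the ADM Hamiltonian constraint is non-polynomial in the canonical pair $(q_{ab}, \pi^{ab})$,

$$\mathcal{H}_{\rm ADM} = \frac{1}{\sqrt{q}}\left(\pi^{ab}\pi_{ab} - \tfrac{1}{2}\pi^{2}\right) - \sqrt{q}\,R(q),$$

while in the complex, self-dual Ashtekar pair $(A^i_a, \tilde{E}^a_i)$ it becomes polynomial,

$$\tilde{\mathcal{H}} \sim \epsilon^{ij}{}_{k}\,\tilde{E}^a_i\,\tilde{E}^b_j\,F^k_{ab},$$

with $F^k_{ab}$ the curvature of $A^i_a$. The price is that $A^i_a$ is complex, and reality conditions must be imposed by hand to recover real Lorentzian metrics.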
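
And for the third point, the two-loop counter-term in question is, up to scheme conventions,

$$\Gamma^{(2)}_{\rm div} = \frac{1}{\varepsilon}\,\frac{209}{2880}\,\frac{1}{(16\pi^{2})^{2}}\int d^{4}x\,\sqrt{-g}\;C_{\mu\nu}{}^{\rho\sigma}\,C_{\rho\sigma}{}^{\lambda\tau}\,C_{\lambda\tau}{}^{\mu\nu},$$

with $C$ the Weyl tensor and $\varepsilon$ the dimensional-regularization parameter. This is the divergence first found by Goroff and Sagnotti in 1986, which van de Ven's 1992 computation confirmed.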

Other critical discussions of finer points of LQG were given by Nicolai et al. in 2005 (the arXiv paper linked in the comments above). Even though there surely are new developments, I do not believe that the question of the emergence of renormalizability in LQG has ever been unambiguously settled.

Void
  • Thanks for the answer. 1. I thought canonical transformations in the classical theory implied unitary transformations in the quantum theory, so the theories were equivalent. But I think maybe subtleties related to field theories like Haag's theorem are ruining this? 2. If we don't know how to do two-point functions in LQG, then has most of the progress of LQG been its Hilbert space construction? Can we say that this is not much progress over the other theory that quantises $h_{\mu \nu}$, as that theory also has a Hilbert space but it can't calculate predictions? – Ryder Rude Feb 07 '23 at 17:17
  • @RyderRude 1. Yes, there are corresponding unitary transformations, but they will not necessarily be unique. This relates to the fact that the translation of polynomials $\sim x^m p^n$ into the quantum realm is non-unique, due to unclear operator ordering (a concrete illustration follows these comments). Haag's theorem is not an issue for LQG, since it does not start from a free theory. – Void Feb 08 '23 at 11:04
  • 2. I would say that the most watertight progress of LQG lies exactly in the kinematics, and in understanding the spectra of the volume operators etc. Another application of LQG is models with a small number of degrees of freedom (reduced by symmetries or by Ansatz), which allows more computational control; this leads e.g. to loop quantum cosmology. However, such models make many pragmatic and opaque choices to get concrete predictions; in this they are not dissimilar to e.g. the supersymmetric extensions of the standard model or "string-theory-inspired/derived/whatnot" cosmological models. – Void Feb 08 '23 at 11:10
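
Void's point about the non-uniqueness of $\sim x^m p^n$ can be made concrete in a few lines of sympy. The following is only an illustrative sketch (the normal_order helper is written here for the illustration, not a sympy builtin): it normal-orders three candidate quantizations of the classical monomial $x^2 p$ using the canonical commutation relation $\hat{p}\hat{x} = \hat{x}\hat{p} - i\hbar$.

    import sympy as sp

    hbar = sp.Symbol('hbar', positive=True)
    x, p = sp.symbols('x p', commutative=False)  # position and momentum operators

    def normal_order(expr):
        # Repeatedly apply the CCR p*x = x*p - i*hbar until every term
        # has all x factors to the left of all p factors.
        prev = None
        while expr != prev:
            prev = expr
            expr = sp.expand(expr.subs(p * x, x * p - sp.I * hbar))
        return expr

    # Three candidate quantizations of the classical monomial x^2 p:
    print(normal_order(x * x * p))                                 # x**2*p
    print(normal_order(p * x * x))                                 # x**2*p - 2*I*hbar*x
    print(normal_order((x * x * p + x * p * x + p * x * x) / 3))   # x**2*p - I*hbar*x

All three orderings agree in the classical limit $\hbar \to 0$ but define different quantum operators, which is exactly the ordering ambiguity Void describes.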