61

Final edit: I think I pretty much understand now (touch wood)! But there's one thing I don't get. What's the physical reason for expecting the correlation functions to be independent of the cutoff? I.e. why couldn't we just plump for one "master Lagrangian" at the Planck scale and only do our integration up to that point?

  • Perhaps it has something to do with low energy experiments not being influenced by Planck scale physics.
  • Maybe it's because there isn't any fundamental scale, i.e. that $\Lambda$ must be arbitrary in a QFT approximation, for some reason.

I'll award the bounty to anyone who can explain this final conundrum! Cheers!

$$***$$

Apologies if this question is too philosophical and vague! I've been thinking about QFTs and continuum mechanics, and reading about their interpretation as effective theories. In these theories we have natural cutoffs at high momentum (small length scales). We make the assumption ($\star$) that the large-scale physics is decoupled from the small-scale physics. Therefore we hope that our predictions are independent of the cutoff (after some renormalization, if necessary).

Why is the assumption ($\star$) so reasonable? I guess it seems observationally correct, which is powerful empirical evidence. But could it not be the case that the small-scale physics has ramifications for larger-scale observations? In other words, would it be reasonable to expect that the predictions of a TOE might depend on some (Planck-scale) cutoff?

This question may be completely trivial, or simply ridiculous. Sorry if so! I'm just trying to get a real feel for the landscape.

Edit: I'd like to understand this physically from the purely QFT perspective, without resorting to analogy with statistical physics. It might help if I rephrase my question as follows.

In the Wilsonian treatment of renormalization we get a flow of Lagrangians as the energy scale $\Lambda$ changes. For a renormalizable theory we assume that there's a bare Lagrangian independent of $\Lambda$ in the limit $\Lambda \to \infty$. We calculate with this quantity by splitting it into physical terms and counterterms. I think these counterterms come from moving down the renormalization group flow, but I'm not quite sure...
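
For concreteness, the split I have in mind is the usual textbook one for $\phi^4$ theory (standard conventions, just to fix notation):

$$\mathcal{L}_{\text{bare}} = \underbrace{\tfrac{1}{2}(\partial\phi)^2 - \tfrac{1}{2}m^2\phi^2 - \tfrac{\lambda}{4!}\phi^4}_{\text{physical}} + \underbrace{\tfrac{1}{2}\delta_Z(\partial\phi)^2 - \tfrac{1}{2}\delta_m\phi^2 - \tfrac{\delta_\lambda}{4!}\phi^4}_{\text{counterterms}}$$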

But why do we care about (and calculate with) the bare Lagrangian, rather than one at some prescribed (high) energy scale (say the Planck scale)? I don't really understand the point of there existing a $\Lambda\to \infty$ limit.

2 Answers

47

This is a very interesting question which is usually overlooked. First of all, saying that "large scale physics is decoupled from the small scale" is somewhat misleading, since the renormalization group (RG) [in the Wilsonian sense, the only one I will use] tells us precisely how to relate the small scale to the large scale! What people usually mean by that statement is that if there exists a fixed point of the RG flow, then some infrared (IR) [large scale] physics is independent of the details at small scale [ultraviolet (UV)], that is, it is universal. For instance, the behavior of the correlation functions at long distance is independent of the bare parameters (to fix the setting, say a scalar field with bare parameters $r_\Lambda, g_\Lambda$ for the quadratic and quartic interactions, where $\Lambda$ is the (for now) finite UV cutoff).

But one should not forget that many physical quantities are non-universal. For example, the critical value of $r_\Lambda$ (at fixed $g_\Lambda$ and $\Lambda$) that puts the system at the critical point is not universal. And this is a physical quantity in condensed matter/statistical physics, in the same way that $\Lambda$ itself has a physical meaning there.

The point of view of the old-school RG (with counterterms and all that) is useful for practical calculations (beyond one loop), but it makes everything much less clear conceptually. In the spirit of high-energy physics with a QFT of everything (i.e. not an effective theory), one does not want a cutoff, because it has no meaning: the theory is supposed to work at arbitrarily high energy. This means that we should send $\Lambda$ to infinity. And here comes another non-trivial question: what do we mean by $\Lambda\to\infty$?

The perturbative answer is: being able to send $\Lambda\to\infty$ order by order in perturbation theory in $g$. But is that the whole answer? Not really. When we say that we want $\Lambda\to\infty$, it means that we want to define a QFT, at a non-perturbative level, which is valid at all distances, and we want this QFT to be well defined, that is, specified by a finite number of parameters (say two or three). And in fact, this non-perturbative infinite-cutoff limit (which I will call the continuum limit) is much more difficult to take. Indeed, having a theory described in the limit $\Lambda\to\infty$ by a finite number of parameters means that the RG flows in the UV to a fixed point. In the same way, the RG has to flow in the IR to another fixed point in order to be well controlled. This implies that very few QFTs actually exist in the continuum limit, and that some QFTs which are perturbatively renormalizable ($\Lambda\to\infty$ order by order in perturbation theory in $g$) are not necessarily well defined in the continuum limit!

For instance, some well-known QFTs in dimension four (such as scalar theories or QED) do not exist in the continuum limit! The reason is that even if these theories are controlled by a fixed point in the IR (at "criticality", which for QED means at least electrons with zero mass), this is not the case in the UV, as the interaction grows with the cutoff. Therefore one has to specify the values of an infinite number of coupling constants (even "non-renormalizable" ones) to precisely select one RG trajectory.
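
To see this growth concretely, here is a minimal numerical sketch, assuming the standard one-loop running of the $\phi^4$ coupling in $d=4$ (the coefficient $3/16\pi^2$ depends on how the quartic term is normalized; the numbers are purely illustrative):

```python
import numpy as np

# One-loop running of the phi^4 coupling in d = 4:
#   dg/dln(mu) = 3 g^2 / (16 pi^2)
# => g(Lambda) = g(mu) / (1 - (3 g(mu) / (16 pi^2)) ln(Lambda/mu))
B = 3.0 / (16.0 * np.pi**2)

def g_running(g_mu, ratio):
    """Coupling at cutoff Lambda = ratio * mu, given g(mu)."""
    return g_mu / (1.0 - B * g_mu * np.log(ratio))

g_mu = 0.5
pole = np.exp(1.0 / (B * g_mu))   # Landau pole: Lambda/mu where g diverges
print(f"Landau pole at Lambda/mu ~ {pole:.2e}")
for ratio in np.geomspace(1.0, 0.999 * pole, 6):
    print(f"Lambda/mu = {ratio:9.2e}   g = {g_running(g_mu, ratio):10.3f}")
```

At fixed $g(\mu)$ the coupling blows up at a finite cutoff (the Landau pole), so the only way to reach $\Lambda\to\infty$ non-perturbatively is $g(\mu)\to 0$: the interacting theory does not survive the continuum limit.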

One of the QFTs which does exist in the continuum limit is the scalar theory in dimension less than four (say three). In that case, at criticality, there exists one trajectory which is controlled by a fixed point in the UV (the Gaussian fixed point) and one in the IR (the Wilson-Fisher fixed point). All (!) the other trajectories are either not well defined in the UV (critical theories but with otherwise arbitrary coupling constants) or in the IR (non-critical theories). One then sees why this $\Lambda\to\infty$ limit is less and less seen as important in the modern approach to (effective) QFTs, unless one wants to describe the physics at all scales by a QFT, without invoking a fancy up-to-now-unknown theory at energies above $\Lambda$. Nevertheless, this idea of controlling a QFT both in the IR and the UV is important if you want to argue that general relativity is (non-perturbatively) renormalizable (i.e. can be described at all scales by a few parameters) in the asymptotic-safety scenario: if there is a non-trivial UV fixed point, then there exists a trajectory from this fixed point to the Gaussian fixed point (which is, I think, Einstein gravity), and you can take the continuum limit, even though the perturbative $\Lambda\to\infty$ limit does not exist.
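
As a sketch of that special trajectory, one can integrate the one-loop flow of the dimensionless quartic coupling in $d=4-\epsilon$ (with $r$ implicitly tuned to criticality; the precise coefficients are scheme-dependent, and only the qualitative picture matters):

```python
import numpy as np

# One-loop flow of the dimensionless quartic coupling g in d = 4 - eps
# (epsilon expansion; the RG "time" l grows toward the infrared):
#   dg/dl = eps * g - (3 / (16 pi^2)) * g^2
# Fixed points: g = 0 (Gaussian, the UV end of this trajectory) and
# g* = 16 pi^2 eps / 3 (Wilson-Fisher, IR-attractive). The quadratic
# coupling r is assumed tuned to criticality and is not tracked here.
eps = 1.0                        # eps = 1 corresponds to d = 3
B = 3.0 / (16.0 * np.pi**2)
g_star = eps / B

g = 1e-3                         # start very close to the Gaussian fixed point
dl = 0.01
for _ in range(5000):            # crude Euler integration toward the IR
    g += dl * (eps * g - B * g**2)

print(f"Wilson-Fisher fixed point: g* = {g_star:.4f}")
print(f"g after flowing to the IR:     {g:.4f}")
```

A trajectory starting arbitrarily close to the Gaussian fixed point (the UV end) flows to the Wilson-Fisher fixed point in the IR; this is the continuum-limit trajectory described above.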

Reference: most of this is inspired by my reading of the very nice introduction to the non-perturbative RG given in arXiv:cond-mat/0702365, and especially by its section 2.6, "Perturbative renormalizability, RG flows, continuum limit, asymptotic freedom and all that".

Adam
  • Many thanks for your detailed answer! So am I right to think that the usual renormalization procedure is pointless then? Rather one should just calculate with an effective Lagrangian with a cutoff at some experimental scale $\mu$? Then the answers will automatically depend on $\mu$, but that's okay because the coupling constants do too? I feel like there's still something wrong with my reasoning there. I don't understand what exactly you're meant to calculate with in the Wilsonian picture. Any ideas? – Edward Hughes Sep 30 '13 at 08:16
  • If by usual you mean "old-school", no, it's not useless. It does not help on the conceptual level, but it is really useful for calculations. The reason is that in this approach, you don't need to keep track of the infinite number of coupling constants that "exist" in the Wilsonian scheme. (The old-school scheme corresponds to projecting all RG trajectories that start close enough to the Gaussian fixed point onto the unique trajectory that relates the Gaussian and Wilson-Fisher fixed points; see the discussion in the reference I gave.) – Adam Oct 01 '13 at 03:12
  • Also, in the Wilsonian RG, the action (or Lagrangian) at scale $\mu$ is not physical by itself. Only some quantities that can be extracted from the RG (such as critical exponents) are physical. That's one of the main shortcomings of this approach (but see the non-perturbative RG, Wilsonian in spirit, which allows one to compute physical quantities such as thermodynamics or correlation functions). On the other hand, the "old-school" RG computes physical quantities: for instance, $g(\mu)$ is a vertex function evaluated at some specified momentum equal to $\mu$. This is measurable and physical. – Adam Oct 01 '13 at 03:15
  • Right - so is the following reasoning correct? If you could just confirm this for me I'll happily award the bounty! We want our physical quantities (e.g. amplitudes) to be independent of cutoffs. Why? Because otherwise we could get information about small-scale physics by doing large-scale experiments. Is this correct? I don't quite see exactly why that would be the case. I'm pretty sure that this is just the definition of an EFT now, but I'd like to have a physical feeling for why cutoff independence is good. Thanks in advance! – Edward Hughes Oct 03 '13 at 13:33
  • People used to want everything independent of the cutoff, and sadly it is still presented that way in most textbooks. You only want that if you think that your theory is the ultimate theory that will describe all phenomena at all energies. But usually you don't want that (all of condensed matter/statistical physics, low-energy QCD, the Fermi theory of weak interactions...), and you're happy that there is a (physical) cutoff. But of course, having a 'renormalizable theory' (keeping only the most relevant interactions) is very useful technically, as it makes everything simpler (cutoffs make calculations complicated). – Adam Oct 03 '13 at 22:27
  • To put it another way: using renormalizable theories (i.e. independent of the cutoff) means that we want to forget about the small-scale physics, that we don't really care about that information (information that exists, unless you want the cutoff to be unphysical and thus truly infinite). We then focus on universal quantities, but we miss some information. Always keep in mind that we don't need to do that (and sometimes we don't want to). The problem of the 'unphysical cutoff' is largely due to the high-energy physics point of view, which unfortunately dominates the teaching of QFT... – Adam Oct 03 '13 at 22:40
  • Okay that makes sense. But I'm still stuck on my final question above. In case it's not clear, here's a rephrasing. Why would it be wrong to have a theory where the correlation functions explicitly depended on some cutoff $\Lambda_0$? Indeed in the Wilsonian treatment we consider that cutoff "physical". Perhaps that would give an unpredictive theory though, because you'd be able to integrate out modes down to a lower cutoff, and the results would depend on that cutoff too... This is what I'm struggling with now - why do all the correlation functions have to be independent of the cutoff? – Edward Hughes Oct 04 '13 at 12:00
  • Actually is this what you are saying in your last comment? That the cutoff independence is really just an approximation that people make in QFT? I suppose after all everything in string theory is dependent on the string scale... I guess what I don't understand is why: assuming cutoff independence $\Leftrightarrow$ assuming low energy physics is independent of high energy physics. – Edward Hughes Oct 04 '13 at 12:04
  • (Btw - have awarded you the bounty already because of all your help! Many thanks!) – Edward Hughes Oct 04 '13 at 12:06
  • If you take a more or less random initial action at cutoff $\Lambda$ (by that I mean that you choose the initial parameters arbitrarily), you should expect the correlation functions to depend on the cutoff, at least at high energies of order $\Lambda$. Now, if the theory starts close enough to the Gaussian fixed point, most of the interactions will flow to zero, and you can focus on a few of them ($r$ and $g$, the renormalizable interactions) once you have integrated out all modes above, say, $\Lambda_1$. The resulting theory with cutoff $\Lambda_1$ is 'renormalizable'... – Adam Oct 04 '13 at 15:03
  • and you can compute all correlation functions in a way that is independent of $\Lambda_1$, as long as you focus on energies much smaller than $\Lambda_1$ (see the "large river effect" in the reference I gave). But in this procedure you have lost all information about the physics between $\Lambda$ and $\Lambda_1$ (which is fine if you don't care about it). You should also note that this is possible because we start close to the Gaussian fixed point. If not, you cannot forget about the high energies so easily (that's the problem of quantum gravity in the asymptotic-safety scenario). – Adam Oct 04 '13 at 15:07
  • So possibly the best way to look at it is to say that independence of the cutoff is just a natural property of a quantum field theory, without a physical interpretation? In other words, what I was looking for is the empty set? I still feel like there should be a simple reason for saying "hooray - my answer is independent of the high-energy cutoff". I understand exactly why the answers are cutoff-independent in quantum field theory now (via the large river effect etc.). I just don't get philosophically why everyone is so happy about it! – Edward Hughes Oct 04 '13 at 17:22
  • I suppose the following gets to the nub of my problem. People say: "Aha - the cutoff is arbitrary, so we can integrate out high-momentum modes and get a renormalization group". They then calculate: "We now have the CS equation for the running of the couplings at different scales". They then say: "This running means that at low energies things are described by renormalizable theories". And finally: "Hooray - all my answers are physical and independent of the renormalization scale, so that must be arbitrary". The logic is completely circular. At some point you need a reason for the first statement. – Edward Hughes Oct 04 '13 at 17:28
  • The 'unphysical cutoff' point of view is the old-school approach (as opposed to the modern EFT view), from when people thought that QED had to be a fundamental theory with, by definition, no cutoff. People were happy because they could say "I have the theory of everything, QED!". But nobody thinks nowadays that you should look for that; no one should look for renormalizable theories, as these theories just mean: theories with only the relevant interactions at low energies (very small compared to $\Lambda$). This is why it is interesting for critical phenomena, because then you are interested... – Adam Oct 05 '13 at 02:51
  • in the very long-distance physics. But it doesn't mean in any way that these renormalizable theories are what you should study if you're interested in, say, the Ising model at an arbitrary temperature. In that case, the interactions are very complicated (non-analytic in the field). But if you just want to know about the critical (universal) behavior, you can forget about all that, keep only $r$ and $g$, and send $\Lambda$ to infinity. That makes everything much simpler. If you want all the non-universal features, you need to keep all the details, and the good old RG won't work. Is it clearer? – Adam Oct 05 '13 at 02:51
  • Right. So I think I get it. Assume there is some physical cutoff $\Lambda$. Now by construction of a QFT one can integrate out the high energy modes and get an equivalent Lagrangian at lower scale $\mu$. Since $\Lambda$ now doesn't appear in the problem, the Lagrangian coefficients must have absorbed the dependence on $\Lambda$. To compute processes at energies lower than $\mu$ you can use either the original Lagrangian $L(\Lambda)$ or the 'effective' one $L(\mu)$. Anything you calculate must be independent of $\Lambda$, assuming you fix the constants in $L(\mu)$ independently of... – Edward Hughes Oct 05 '13 at 10:14
  • $\Lambda$. But now our choice of $\mu$ was arbitrary, so nothing physical can depend on that either. In particular correlation functions can only depend on coupling constants in a way that accounts for the fact that we're essentially dealing with an equivalence class of Lagrangians. Hence the running couplings. Equivalently you can deduce the running of the couplings directly from the 'integrating out high momentum modes' prescription of Wilson. Would you agree with this viewpoint? So essentially we're lucky that QFTs are exactly constructed to be independent of cutoffs. We don't have... – Edward Hughes Oct 05 '13 at 10:19
  • to impose this at all! In reality this is just an approximation, because we might expect behaviour at high energies to depend on the cutoff in some physical way. This manifests itself in the fact that gravity is 'non-renormalizable' in the traditional sense of the term. Please let me know if you agree with this, then I can finally put my concerns about renormalization behind me! – Edward Hughes Oct 05 '13 at 10:21
  • Yes, I think you got it :-) One last point: we're not lucky, this is due to perturbation theory, which exists if the RG starts close enough to the Gaussian fixed point. But this is not necessarily the case (spin lattice models are usually non-perturbative). Sending the cutoff perturbatively to infinity is allowed by the large river effect (in $d<4$), and by the fact that the flow is logarithmic in $d=4$. So you can say that we are lucky that in HEP (and more generally in $d=4$) most theories are dominated by the Gaussian fixed point (not the case for low-energy QCD and GR). – Adam Oct 05 '13 at 15:19
  • But in a sense we are lucky that it's even possible to have a notion of RG flow that starts close to the Gaussian. You could envisage a different type of theory in which the notion of "high-momentum integration" didn't exist, so you really had to understand the high-energy behaviour. In fact this is presumably what happens in string theory and LQG. So my argument is that we are "lucky" to have discovered the QFT approximation before string theory or LQG or whatever other fundamental theory is demonstrated to be valid. Or "unlucky" I suppose, depending on your view of the history! – Edward Hughes Oct 06 '13 at 15:08
  • +1: Very informative. Thanks especially for the reference. – joshphysics Jan 12 '14 at 22:45
7

At every stage of renormalization, the Hamiltonian changes, $\mathcal{H} \rightarrow \mathcal{H}_{\textrm{ren.1}}\rightarrow \mathcal{H}_{\textrm{ren.2}} \rightarrow \ldots$; in the process, energy modes and length scales are excluded, as you say. But the point is that every $\mathcal{H}, \mathcal{H}_{\textrm{ren.1}}, \mathcal{H}_{\textrm{ren.2}}, \ldots$ (including the 'original' $\mathcal{H}$) is an effective or emergent theory, applicable only within its domain $\Omega, \Omega_{\textrm{ren.1}}, \Omega_{\textrm{ren.2}}, \ldots$. That there are no fundamental theories, even in particle physics, was a key point stressed by K. G. Wilson. Therefore, for instance in field theories, the bare electron mass $m$ is simply a mathematical construct; the true one, as measured and measurable, is the renormalized value $m^*$.
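
As a toy illustration of that last point, take a schematic one-loop relation $m^{*2} \approx m^2 + c\,g\,\Lambda^2$ (a quadratically divergent tadpole-type shift; the constant $c$ and the formula itself are illustrative, not the actual QED result) and ask which bare $m^2$ keeps the measured $m^*$ fixed as the cutoff grows:

```python
import numpy as np

# Toy one-loop mass renormalization in a phi^4-like theory:
#   m_phys^2  ~  m_bare^2 + c * g * Lambda^2
# The quadratic cutoff dependence mimics the tadpole diagram; the
# constant c (here 1/(32 pi^2)) and the formula are purely illustrative.
c = 1.0 / (32.0 * np.pi**2)
g = 0.1
m_phys2 = 1.0                    # hold the *measured* mass-squared fixed

for Lam in [1e1, 1e2, 1e3, 1e4]:
    m_bare2 = m_phys2 - c * g * Lam**2    # bare parameter needed at cutoff Lam
    print(f"Lambda = {Lam:8.1e}   m_bare^2 = {m_bare2:14.3f}")
```

The bare parameter must track the cutoff (and becomes large and negative), while the physical mass stays put; only $m^*$ is measurable.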

As regards the decoupling, I'll take the point of view of critical phenomena. At the critical point, where there are correlations across the entire system, the lattice spacing does not matter, as we well know; it is the long-wavelength modes stretching across the system that contribute most. Clearly, the decoupling of length scales is justified in such a situation; and because QFT and statistical mechanics are essentially equivalent via Feynman's path-integral formulation, the decoupling is justified in renormalizable field theories. If anyone can make this mathematically rigorous, please feel free...

As an analogy, think of a classical system with many configurations $i$ with energies $\epsilon_i$; depending on the temperature $T$, the contribution of each configuration is largely decided by its Boltzmann weight $e^{-\epsilon_i/k_BT}$. We may then discard all contributions or modes that have negligible weight.
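
A quick numerical version of this, with an arbitrary toy set of energies (all values illustrative):

```python
import numpy as np

# Boltzmann weights e^{-eps_i / (k_B T)} for a handful of toy configurations
# (energies in units of k_B): at low T only the lowest-energy configurations
# carry non-negligible weight and the rest can be safely discarded.
energies = np.array([0.0, 0.5, 1.0, 5.0, 10.0])

for T in (0.5, 2.0):
    w = np.exp(-energies / T)
    p = w / w.sum()              # normalized contribution of each configuration
    print(f"T = {T}: ", np.round(p, 4))
```

Lowering $T$ sharpens the truncation: fewer and fewer configurations carry any appreciable weight.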