
The $n$-point correlation functions of QCD, which define the theory, are computed by performing functional derivatives on $Z_{QCD}[J]$, the generating functional of QCD,

$$\frac{\delta^nZ_{QCD}[J]}{\delta J(x_1)...\delta J(x_n)},$$

where $J$ are the different field sources, to be set to zero at the end. In a perturbative approach, we can expand in the (weak) gluon coupling constant, but doing so requires dealing with loop integrals that generally diverge; these singularities are a consequence of allowing the integrations to run over every possible energy at every point in space-time.
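
Just to illustrate what I mean by these divergences (a standard textbook example, not tied to any specific QCD diagram): a typical one-loop integral behaves at large loop momentum like

$$\int \frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2\,(k+p)^2} \;\sim\; \int^{\infty}\frac{dk}{k},$$

which diverges logarithmically in the ultraviolet.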

The first step is regularizing the theory: a regulator is introduced in the loop integrals so that they become convergent. This then allows one to redefine the gluon coupling constant, quark masses and field strengths that appear in the generating functional of QCD in such a way that these infinities cancel. This redefinition is called renormalization, and by doing it we can compute finite renormalized correlation functions.
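
Schematically (in a common notation, just to make this concrete), with a regulator in place one trades the bare quantities appearing in the generating functional for renormalized ones,

$$g_0 = Z_g\, g(\mu),\qquad m_0 = Z_m\, m(\mu),\qquad A_0^\mu = Z_3^{1/2} A^\mu,\qquad \psi_0 = Z_2^{1/2}\psi,$$

with the renormalization constants $Z_i$ chosen so that the divergences cancel order by order.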

From this perturbative approach, we also see that the coupling and masses become dependent on the energy scale, in just such a way that the correlation functions are independent of this scale.
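
This scale dependence is summarized by the renormalization group equation for the coupling, which at one loop gives the familiar running of $\alpha_s$ (in one common convention, with $\mu_0$ some reference scale),

$$\mu\frac{dg(\mu)}{d\mu}=\beta(g),\qquad \alpha_s(\mu)=\frac{\alpha_s(\mu_0)}{1+\dfrac{\beta_0}{4\pi}\,\alpha_s(\mu_0)\ln\dfrac{\mu^2}{\mu_0^2}},\qquad \beta_0=11-\tfrac{2}{3}n_f.$$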

Now, starting again from the step of computing the correlation functions by functional derivatives on $Z_{QCD}$ but now from a non-perturbative approach (and forgetting everything about the perturbative approach), I understand that there isn't a way to do this computation and one has to use other non-perturbative methods such as lattice QCD. But where does the need to regularize and renormalize QCD come in? Everywhere I have looked for answers they always rely on the perturbative approach, using arguments such as problematic divergences or a scale-dependent mass and coupling, and I don't see how these are present in the non-perturbative approach.

The only two leads that, in my eyes, could cause problems are the fact that the coupling and mass are bare parameters with no physical value, and the fact that the theory is defined so as to allow every possible energy at every point in space-time; but I don't understand where these apparent pathologies come in.

orochi
  • Lattice QCD (any lattice approach) inherently introduces a regularization simply by discretizing space and computing in a finite "box". Roughly speaking: the lattice spacing introduces a UV regularization and the lattice extent an IR regularization (see the sketch after these comments). To extract results, a continuum limit and an infinite-volume limit have to be performed, and those two involve "non-perturbative renormalization in lattice field theory", which basically entails how to get physical values for observables from lattice ones. I am no expert on the details but maybe this gives an idea and the correct buzzword. – N0va Aug 17 '23 at 22:38
  • Just to add a bit: as Nova mentioned, the lattice is a regulator itself, and all measured quantities will be finite and well defined at fixed spacing. If you take the continuum limit (lattice spacing -> 0) naively, you'll find that your measured quantities will numerically diverge - performing renormalization of your lattice operators will ensure you obtain the correct, finite, physical, renormalized results in the continuum limit. You also don't have to do nonperturbative renormalization as Nova mentioned; the more naive way is just to do lattice perturbation theory to renormalize. – QCD_IS_GOOD Aug 18 '23 at 20:28
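
As a schematic version of what the comments above describe (for a hypercubic lattice with spacing $a$, linear extent $L$ and periodic boundary conditions), the allowed momenta are

$$k_\mu=\frac{2\pi n_\mu}{L},\qquad n_\mu\in\mathbb{Z},\qquad |k_\mu|\le\frac{\pi}{a},$$

so $\pi/a$ plays the role of a UV cutoff and $2\pi/L$ that of an IR cutoff.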

1 Answer


The Wilsonian viewpoint of renormalization (see this excellent answer by Abdelmalek Abdesselam) is not conceptually tied to perturbative expansions at all. Rather, it conceives of a quantum field theory as having an inherent scale $\Lambda$ (in the simplest case a hard momentum cutoff for Fourier modes in the path integral), and "renormalizing" is starting from a theory at scale $\Lambda$ and going to a scale $\Lambda'$ by integrating out more modes - this transformation is the renormalization (semi-)group flow.
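
Schematically (for a generic Euclidean field theory with a hard cutoff), one step of this flow is nothing but a partial path integral: split the field into slow modes $\phi_<$ (momenta below $\Lambda'$) and fast modes $\phi_>$ (momenta between $\Lambda'$ and $\Lambda$) and integrate out the latter,

$$e^{-S_{\Lambda'}[\phi_<]}=\int\mathcal{D}\phi_>\;e^{-S_{\Lambda}[\phi_<+\phi_>]},$$

which defines the theory at the lower scale $\Lambda'$ in terms of the one at $\Lambda$.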

In this framework, QFT really deals with the trajectory under the RG flow, where at each scale $\Lambda$ we have one theory, connected by renormalization to all the other theories along the trajectory. It's not that we start with one theory and then renormalize it; rather, "the theory" is really given by its versions at all scales. But because of the renormalization group equations it suffices to give one point along the trajectory to determine the full trajectory, so we tend to think of this as "starting" at one scale and renormalizing to the others.

Now, the "scaleless" theory you mean when you talk about the non-perturbative QCD $Z_\text{QCD}$ is really the $\Lambda\to\infty$ limit of this trajectory. In practice for most realistic theories it will turn out that you cannot compute this without hitting divergences even outside of perturbation theory: The standard approach is to put the theory on the lattice with a momentum cutoff and you will typically find that the limit where the lattice spacing goes to zero and the cutoff to infinity introduces ugly divergences in the correlators: The problem is just that something like $\langle \phi(x)^2\rangle$ will always diverge unless you renormalize.

From yet another viewpoint, renormalization is simply "resolving" an ambiguity in the definition of the quantum field theory: While its concrete implementation is related to perturbation theory, the core insight of Epstein-Glaser renormalization is that something like $\phi(x)^4$ is actually ill-defined. While there is no problem with writing down something like that in a classical field theory, in quantum field theory the quantum field has to be an operator-valued distribution (see this answer of mine), not a function, and the pointwise product of distributions does not, in general, exist - at least not uniquely and without further specification.
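
A quick way to see the problem, using the free massless scalar as an illustration: its two-point function behaves at short distances like

$$\langle 0|\phi(x)\phi(y)|0\rangle\;\sim\;\frac{1}{(x-y)^2},$$

so naive pointwise powers (which appear when products like $\phi(x)^4$ are inserted into time-ordered products) produce singularities like $1/(x-y)^4$ that are not locally integrable in four dimensions and hence do not define distributions without an additional choice of extension at coincident points - and making that choice is precisely renormalization.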

In this viewpoint, all the infinities the other approaches encounter are just the price they have to pay for ignoring this fundamental flaw in the setup of the theory, and getting rid of the infinities is a post-hoc fix for this. Garbage in (an action containing ill-defined quantities), garbage out (divergences).

So here renormalization turns out to simply be what's missing to make the theory well-defined - we have to specify, for each of the pointwise products of fields in the Lagrangian, how that product is supposed to work. The renormalization parameters then arise as freedoms of choice during this specification.

In any case, both the Wilsonian and the Epstein-Glaser viewpoint agree that renormalization is neither inherently perturbative nor inherently related to "infinities" - it is simply a necessary part of what you have to consider when you really think in depth about what "a QFT" really is.

ACuriousMind
  • I like the "ill-defined product" approach the most. Does this mean a good way to think of it is that quantization - i.e. the idea of trying to "turn quantum" a "classical" theory is simply insufficient to define the theory when it comes to field systems? To what extent can we fix the distributional product using the total corpus of empirical data we have today? Can we make that "approximate" product fully Lorentz symmetric? With the approximate product, can we, say, prove a bound hydrogen atom exists in fully relativistic treatment? – The_Sympathizer Aug 18 '23 at 00:13
  • That is to say, given we have the Standard Model Lagrangian, w/lots of examples of distributional products, and given we have a massive pile of data from all the particle experiments so far that haven't yet really contradicted it, can we take all that data and somehow "big data" out of it the "shape" of the specific distributional product Nature/our universe "prefers"? – The_Sympathizer Aug 18 '23 at 00:14
  • So the idea that we should conceive "quantum field theory as having an inherent scale" comes from the fact that "In practice, for most realistic theories it will turn out that you cannot compute this without hitting divergences even outside of perturbation theory"? (not taking into account the viewpoint of Epstein-Glaser renormalization) – orochi Aug 18 '23 at 00:38
  • @The_Sympathizer I probably wasn't clear enough about this point: We cannot, in general, define "the distributional product" - this is mathematically impossible. What we need to do is to define for each specific "product" term we write into our Lagrangian what we actually mean by it, and defining that is precisely choosing the renormalization parameters. E-G renormalization is not more powerful than the usual renormalization procedures, it just avoids running into explicit infinities. – ACuriousMind Aug 18 '23 at 07:27
  • @orochi This whole affair is much more complicated than I can explain in a single answer. In the end, the Wilsonian viewpoint has notions of UV limits/fixed points that do correspond to sending the cutoff to infinity without running into divergences, it's just that because we already "dealt" with renormalization properly in this approach this limit isn't related to the "naive" scale-less theory you'd have written down in a simple way. – ACuriousMind Aug 18 '23 at 07:37
  • @ACuriousMind: So basically, it's because each distributional product "stands for" an interaction, then what is being said is indeed yes, the classical theory is insufficient to determine a quantum analogue (regardless of applicability; it is a worse situation than the case of CM -> NRQM). All it can basically do is say "there should be an interaction" ... and then it's up to the theory maker to fill in the details of that interaction, right? – The_Sympathizer Aug 18 '23 at 13:10
  • Also not sure what you meant by "the distributional product is impossible to define ". The way it is phrased here and elsewhere simply suggests it is not well defined in that it is an ambiguous operation that admits many possible extensions. "Impossible to define" would suggest a "divide by zero" kind of thing where any attempt to define it causes a logical contradiction unless you are willing to give up some or another math rule somewhere else, at least to my ear. – The_Sympathizer Aug 18 '23 at 13:11
  • @The_Sympathizer What I mean is: You cannot define, in general, what it means to multiply one distribution with another. What you can do is look at specific expressions like $\phi(x)^4$ and figure out how to make that specific one finite and well-defined. And for that, the specific properties of the distribution matter quite a lot (E-G r. is also called causal perturbation theory because it depends heavily on causality properties of the quantum field). That's what I mean by "the" product being impossible to define. – ACuriousMind Aug 18 '23 at 14:11
  • As for the "fill in the details" part being "worse" than in NRQM - look at what the renormalization parameters usually are in well-behaved theories: Masses and interaction strengths. That's not so bad, that's the same kind of information you'd have had to put into the theory in a classical or NRQM version anyways! – ACuriousMind Aug 18 '23 at 14:11
  • @ACuriousMind: However, once you define it, is it rigorous enough then to prove that, up to the accuracy limit established by the regularizer, that say, the hydrogen atom exists there with no further assumptions, i.e. a fully relativistic model that goes beyond the Dirac equation? – The_Sympathizer Aug 18 '23 at 16:47