If a theory is perturbatively renormalizable (using old-fashioned renormalization methods, not RG methods), will different renormalization schemes give us the same answers? That is, will different renormalization schemes give us the same empirically meaningful parameters?
1 Answer
The answer is either yes, or else physicists have pulled off the greatest heist of the century. The key to why the particular scheme you choose doesn't matter comes down to the idea of a renormalization point, that is, the point in momentum space at which we fix the values of the correlators and compute the necessary counterterms. It's also worth noting that, while RG methods came along much later, the ideas are inextricably tied to how renormalization works, so the only sense in which you can do renormalization without the renormalization group is if you put blindfolds on and go out of your way to ignore the fact that you have some freedom in how you set up your renormalization procedure.
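To make the notion of a renormalization point concrete, here is a schematic example (precise conventions vary between textbooks, so take this as an illustration rather than a unique definition): in $\phi^4$ theory one may impose the renormalization condition that the 1PI four-point function equal the coupling at the symmetric momentum point,
$$\Gamma^{(4)}(s=t=u=-\mu^2)=-\lambda(\mu),$$
and then fix the counterterms order by order so that this condition holds exactly. Choosing a different point $\mu$ (or a different condition altogether) shuffles finite pieces between the coupling and the loop corrections, but the physical predictions assembled from the correlators are unchanged.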
Edit: Let me expound upon this last point some more. In any renormalization scheme you like, there are always choices you need to make. In dimensional regularization you need to choose a mass scale $\mu$, in Pauli-Villars you need to choose a mass $M$, in cut-off regularization you need to choose the actual cut-off $\Lambda$; the list goes on. There is always an ambiguity in how the actual procedure goes. In a very real sense, the renormalization group is the expression of this ambiguity. We derive the renormalization group equations, in fact, by explicitly demanding that physical quantities (correlators) be independent of this ambiguity. So sure, you can fix the ambiguity by explicitly choosing your mass scale, mass, or cutoff, but that doesn't mean the ambiguity isn't there. That would be like saying that, because you've elected to work in Coulomb gauge in electrodynamics, gauge symmetry isn't there. It is there; you've just chosen to blind yourself to it.
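That demand can be written down explicitly. Schematically (following conventions like those in Peskin and Schroeder), requiring that the bare correlators know nothing about the arbitrary scale,
$$\mu\frac{d}{d\mu}\,G^{(n)}_{\text{bare}}=0,$$
leads to the Callan-Symanzik equation for the renormalized $n$-point correlators,
$$\left[\mu\frac{\partial}{\partial\mu}+\beta(\lambda)\frac{\partial}{\partial\lambda}+n\gamma(\lambda)\right]G^{(n)}(x_1,\dots,x_n;\lambda,\mu)=0,$$
where $\beta(\lambda)=\mu\,d\lambda/d\mu$ and $\gamma$ is the anomalous dimension of the field. The renormalization group equations are nothing but the statement that the ambiguity in $\mu$ is unphysical.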
Now, not all renormalization schemes leave the selection of the renormalization point as our explicit choice, and not all of them make it clear what point has been chosen (minimal subtraction, for example, fixes the counterterms by simply removing the $1/\epsilon$ poles, with no reference to any particular momentum configuration). Essentially, by giving up explicit control of the renormalization point we gain computational simplicity and power.
Physicists are, of course, completely aware of these facts, and if you go to the Particle Data Group website (which has many very helpful review notes and all the up-to-date data in particle physics), essentially all quoted parameters are listed together with the renormalization scheme used to define them. This information tends to get left out of tables that appear on other sites, like Wikipedia (though not all pages omit it).
Edit: By the way, there's a nice and detailed discussion of some of these things in Nair's QFT book, though that book tends to lean more heavily on mathematics than some might like during a first pass at QFT. There is also a QFT book by Banks which contains essentially no detail (it's all left as exercises for the reader), but which does give a fairly nice discussion of a number of things (if I remember correctly). So it could be good if you're just looking to get at the big ideas of what's going on.
Edit 2: As Andrew rightly points out below, there is often ambiguity in the calculations we make, and the key question is whether keeping said ambiguity actually buys us anything. So the counterpoint to the argument I have been pushing above would be: sure, that ambiguity is there, but do we actually gain anything by not simply fixing it immediately, as one might fix a gauge?
After all, one can play the game of electrodynamics without ever touching a vector potential (which suffers from gauge ambiguity). You would very quickly run into problems which are exceptionally difficult to handle working only with the electric and magnetic fields, but which are quite solvable if you allow yourself to work with a vector potential. This is essentially the attitude taken in most first introductions to electrodynamics.
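To spell the analogy out: the measurable fields
$$\vec{B}=\nabla\times\vec{A},\qquad \vec{E}=-\nabla\phi-\frac{\partial\vec{A}}{\partial t}$$
are left unchanged by the gauge transformation
$$\vec{A}\to\vec{A}+\nabla\chi,\qquad \phi\to\phi-\frac{\partial\chi}{\partial t},$$
so the potentials carry an ambiguity that $\vec{E}$ and $\vec{B}$ do not, just as a renormalized coupling defined at some scale carries an ambiguity that the physical cross-sections do not.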
So what we see is that it can sometimes be advantageous, in one way or another, to work with objects that suffer from some ambiguity. The key fact we need to keep in mind to keep the physics grounded is which objects have no ambiguity in them (or at least, what kind of ambiguity sits where; all dimensionful quantities in physics are ambiguous up to an overall scale, namely the unit we choose, yet this is so familiar it causes us no concern). In quantum field theory, the unambiguous objects are often the correlators, since these are what we use to construct cross-sections, which are things we can then go out into the world and measure, and which had therefore better have no ambiguity to them (well, up to units again).
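There is a standard way to quantify this (a schematic statement, not tied to any particular scheme): if $\sigma$ is a physical quantity and $\sigma^{(N)}$ is its perturbative series truncated at order $\lambda^N$, then exact independence of the arbitrary scale,
$$\mu\frac{d\sigma}{d\mu}=0,$$
forces the truncated series to depend on that scale only at higher orders,
$$\mu\frac{d\sigma^{(N)}}{d\mu}=O\!\left(\lambda^{N+1}\right).$$
So two schemes can give numerically different answers at fixed order, but the difference is always beyond the accuracy of the calculation and shrinks as more orders are included.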
- Thanks! Could you elaborate a bit on what you mean by the last sentence of your first paragraph? For instance, in what way does appealing to the RG give us some way of figuring out which renormalization scheme to select? – jrex Feb 18 '21 at 21:22
- @jrex There you go. Added some references too. – Richard Myers Feb 18 '21 at 22:13
- "In dimensional regularization you need to choose a mass scale $\mu$, in Pauli-Villars you need to choose a mass $M$, in cut-off you need to choose the actual cut-off $\Lambda$": the renormalization scale $\mu$ in dimensional regularization and the cut-off scale $\Lambda$ should NOT be conflated. The former is usually set to the physical energy scale at which things are measured, while the latter is sent to infinity at the end of the renormalization procedure. For details see here: https://physics.stackexchange.com/questions/498977/how-is-there-no-hierarchy-problem-without-uv-cutoff/499065#499065 – MadMax Feb 18 '21 at 22:49
- Great answer, +1. Maybe just to expand slightly on your answer, I might say that there are always ambiguities in calculations -- to modify your $U(1)$ example, one could introduce even more gauge freedom by introducing more redundant degrees of freedom into the formalism. The key point is that, while keeping track of every possible choice you could have made would make your notation exponentially complicated, sometimes you gain some power by keeping things general. $U(1)$ gauge symmetry is useful for making locality and Lorentz invariance manifest... – Andrew Feb 18 '21 at 23:24
- ...and the renormalization group is useful because you can keep track of the decoupling of high-energy degrees of freedom, and resum higher-order log corrections into a running coupling so that the tree-level action is as accurate as possible for the energy scale of the processes you are interested in. tl;dr: the importance is not just that the ambiguity is there, but that it is useful and can be exploited to make your calculation more accurate for a fixed time spent calculating loops, at the cost of conceptual complexity. – Andrew Feb 18 '21 at 23:26
- @MadMax Obviously $\mu$ and $\Lambda$ are not the same thing. You can put whatever interpretations on them you like, but in the end neither appears in your calculation of physical quantities, since you absorb all such dependence into the counterterms. So for the sake of calculations, actually taking $\Lambda$ to infinity is entirely irrelevant. So while your point is valid, I think it is tangential to the question at hand. – Richard Myers Feb 19 '21 at 00:27
- @Andrew Thank you for your comments. – Richard Myers Feb 19 '21 at 00:35
- Thank you much for your comments, all! How do we explain what Stevenson (1981) says, though, when he claims that conventional perturbation theory will give us different answers? I can see why the RG would help us, but I wanted to know how plain old renormalization strategies (especially using perturbation theory) could do the trick. The paper is here: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.23.2916 – jrex Feb 19 '21 at 05:00
- @jrex Everything described here is perturbation theory and is conventional. Actually, they are precisely the same. Though functional methods are often used these days, the diagram expansion is precisely the Dyson one derived from the Hamiltonian and displayed in the first chapters of standard books like Peskin and Schroeder. Just looking at the abstract of the linked article, I would venture to guess that Stevenson is talking about tuning the arbitrary parameters we have been mentioning to improve the convergence speed of the diagram series. This is what MadMax alluded to when they – Richard Myers Feb 19 '21 at 06:50
- @jrex ...mentioned that $\mu$ is often taken to be some scale characteristic of the process of interest. This is purely to improve numerical behavior. I should also mention that it's usually better to learn about these things from more canonical sources before jumping directly into the literature. If a source is too old, it may reflect the great deal of confusion that once surrounded these things; if it is newer and more accurate, it will assume essentially all the basics as given and move directly into specific problems, such as numerical convergence speeds. – Richard Myers Feb 19 '21 at 06:53