
For the unitary translation operator $\hat{X}_a$ in one dimension it is easy to show that its matrix elements can be expressed purely with a delta distribution, or equivalently as a combination of a differential operator with a delta distribution.

Definition of its matrix elements: $\int \mathrm{d}x' X_a(x,x') \psi(x') = \psi(x-a)$

With a delta distribution: $X_a(x,x') = \delta(x'-(x-a))$

Mixed with differential operator: $X_a(x,x') = \mathrm{e}^{-a\partial_x}\delta(x'-x)$

$\psi(x-a) = \int \mathrm{d}x'\delta(x'-(x-a))\psi(x') = c\int \mathrm{d}x'\mathrm{d}k\ e^{\mathrm{i}kx'}e^{-\mathrm{i}kx}e^{\mathrm{i}ka}\psi(x') = c\int \mathrm{d}x'\mathrm{d}k\ e^{\mathrm{i}kx'}e^{-a\partial_x}e^{-\mathrm{i}kx}\psi(x') = ce^{-a\partial_x}\int \mathrm{d}x'\mathrm{d}k\ e^{\mathrm{i}k(x'-x)}\psi(x') = e^{-a\partial_x}\int \mathrm{d}x' \delta(x'-x)\psi(x')\quad q.e.d.$

(with $c = \frac{1}{2\pi}$, so that $c\int \mathrm{d}k\ e^{\mathrm{i}k(x'-x)} = \delta(x'-x)$)
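As a quick sanity check of this identity (my addition, not part of the original argument): for a polynomial test function the operator series for $\mathrm{e}^{-a\partial_x}$ terminates, so $\mathrm{e}^{-a\partial_x}\psi(x) = \psi(x-a)$ can be verified exactly with sympy. The test function is an arbitrary choice.

```python
# Sanity check (illustrative): for a polynomial test function the series
#   sum_n (-a)^n / n! * d^n/dx^n
# terminates, so exp(-a d/dx) psi(x) = psi(x - a) holds exactly.
import sympy as sp

x, a = sp.symbols('x a')
psi = x**3 - 2*x + 1                      # arbitrary polynomial test function

# Truncated exponential series of the shift operator (degree 3 -> 4 terms suffice)
series = sum((-a)**n / sp.factorial(n) * sp.diff(psi, x, n) for n in range(5))

shifted = psi.subs(x, x - a)              # the expected result psi(x - a)
print(sp.simplify(series - shifted) == 0)  # True
```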


Now I want to have the same scheme for a Lorentz boost $\hat{\Lambda}$. I want to have the scalar product defined as integral over $x$ and $t$. (I just want to have it this way, so please don't start a discussion that in QFT the scalar product is defined differently.)

Definition of its matrix elements: $\int \mathrm{d}t'\mathrm{d}x' \Lambda(t,t',x,x') \psi(t',x') = \psi(\gamma t - \gamma\beta x, \gamma x - \gamma\beta t)$

With a delta distribution: $\Lambda(t,t',x,x') = \delta(t'-(\gamma t - \gamma\beta x))\cdot\delta(x'-(\gamma x - \gamma\beta t))$

Mixed with differential operator: $\Lambda(t,t',x,x') =\ \textbf{???}$


And here I'm stuck. I found this question: Infinite-dimensional representation of Lorentz algebra, which gives the generator of an infinitesimal boost on a scalar. So I started with $x\partial_t + t\partial_x$ and the Baker–Campbell–Hausdorff formula ( https://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff_formula ).

Set $X=x\partial_t$ and $Y=t\partial_x$. Then it follows

$[X,Y]=x\partial_x - t\partial_t$

$[X,[X,Y]]=-2X$ and $[Y,[Y,X]]=-2Y$

With the help of these it follows

$[Y,[X,[X,Y]]]=2[X,Y]$ and $[Y,[Y,[Y,X]]]=0$

Furthermore

$[[[[Y,X],Y],X],Y]=4Y$ and $[[[[X,Y],X],Y],X]=4X$
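These commutators are mechanical to verify; as a cross-check (my addition), a short sympy sketch confirms the first few by acting on a generic function $f(t,x)$:

```python
# Check the commutators of X = x d/dt and Y = t d/dx on a generic f(t, x).
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')(t, x)

X = lambda g: x * sp.diff(g, t)                    # X = x ∂_t
Y = lambda g: t * sp.diff(g, x)                    # Y = t ∂_x
comm = lambda A, B: (lambda g: A(B(g)) - B(A(g)))  # commutator [A, B]

# [X, Y] = x ∂_x - t ∂_t
print(sp.simplify(comm(X, Y)(f) - (x*sp.diff(f, x) - t*sp.diff(f, t))) == 0)  # True

# [X, [X, Y]] = -2X  and  [Y, [Y, X]] = -2Y
print(sp.simplify(comm(X, comm(X, Y))(f) + 2*X(f)) == 0)  # True
print(sp.simplify(comm(Y, comm(Y, X))(f) + 2*Y(f)) == 0)  # True
```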

So the first terms of $Z(X,Y)$ sum up to

$Z(X,Y) = \left(1-\frac{1}{6}+\frac{1}{30}+... \right)(X+Y) + \left(\frac{1}{2}-\frac{1}{12}+... \right)[X,Y]$

(the fourth-order BCH term $-\frac{1}{24}[Y,[X,[X,Y]]]$ contributes $-\frac{1}{12}[X,Y]$)

and I expect a lot of work. Therefore the question: is there any ready-to-use formula for the $\textbf{???}$ above?


Update

The proposal of Thomas, with $\beta=\tanh\zeta$, $\gamma=\cosh\zeta$, $\gamma\beta=\sinh\zeta$, gives us, when Fourier transforming the delta distributions:

$\Lambda(t,t',x,x') = c^2 \int \mathrm{d}\omega\mathrm{d}k\ e^{\mathrm{i}\omega t'} e^{\mathrm{i}k x'} e^{-\zeta\,(x\partial_t + t \partial_x)} e^{-\mathrm{i}\omega t} e^{-\mathrm{i}k x}$

On the other hand the variant with the product of two $\delta$s gives us:

$\Lambda(t,t',x,x') = c^2 \int \mathrm{d}\omega\mathrm{d}k\ e^{\mathrm{i}\omega t'} e^{\mathrm{i}k x'} e^{-\mathrm{i}\omega t\cosh\zeta}e^{-\mathrm{i}k x\cosh\zeta}e^{\mathrm{i}k t\sinh\zeta}e^{\mathrm{i}\omega x\sinh\zeta} $

I suppose that if both are to be true, we must show that

$e^{-\zeta\,(x\partial_t + t \partial_x)} e^{-\mathrm{i}\omega t} e^{-\mathrm{i}k x} = e^{-\mathrm{i}\omega t\cosh\zeta}e^{-\mathrm{i}k x\cosh\zeta}e^{\mathrm{i}k t\sinh\zeta}e^{\mathrm{i}\omega x\sinh\zeta}$

and the leftmost exponential will expand into a very unwieldy expression according to the Baker–Campbell–Hausdorff formula, where convergence might become a problem. Maybe the assumption that both forms of the matrix elements are equivalent is wrong, but then I'd like to know why. Or the assumption is correct, and then one should find a proof of the equivalence somewhere.

Harald Rieder

2 Answers


Motivated from Lorentz transformation (Coordinate transformation) my guess would be $$\Lambda(t,t',x,x') = e^{-\zeta(x\partial_t+t\partial_x)}\delta(t'-t)\delta(x'-x)$$ where $\zeta$ is the rapidity, defined by $\tanh\zeta=\beta$. But I cannot give a proof for it.

Instead of proving $$\delta(t'-(\gamma t-\gamma\beta x))\delta(x'-(\gamma x-\gamma\beta t)) = e^{-\zeta(x\partial_t+t\partial_x)} \delta(t'-t)\delta(x'-x) \tag{1}$$

it is actually easier to prove the more general $$\psi(\gamma t-\gamma\beta x,\gamma x-\gamma\beta t) = e^{-\zeta(x\partial_t+t\partial_x)}\psi(t,x) \tag{2}$$ for arbitrary functions $\psi$. Then (1) will follow from (2) as the special case $\psi(t,x)=\delta(t'-t)\delta(x'-x)$.

Let us first consider the arguments of $\psi$ on the left side of (2) and rewrite them. $$\begin{align} \begin{pmatrix} \gamma t-\gamma\beta x \\ \gamma x-\gamma\beta t \end{pmatrix} &=\begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix} \begin{pmatrix}t \\ x \end{pmatrix} \\ &=\begin{pmatrix} \cosh\zeta & -\sinh\zeta \\ -\sinh\zeta & \cosh\zeta \end{pmatrix} \begin{pmatrix}t \\ x \end{pmatrix} \\ &=\exp\begin{pmatrix} 0 & -\zeta \\ -\zeta & 0 \end{pmatrix} \begin{pmatrix} t \\ x \end{pmatrix} \\ &\underset{n\to\infty}{=}\begin{pmatrix} 1 & -\zeta/n \\ -\zeta/n & 1 \end{pmatrix}^n \begin{pmatrix} t \\ x \end{pmatrix} \tag{3} \end{align}$$

That means we get the left side of (2) by applying the infinitesimal transformation $$\psi(t,x) \mapsto \psi\left(t-\frac{\zeta}{n}x,x-\frac{\zeta}{n}t\right) \tag{4}$$ $n$ times, and letting $n\to\infty$. Now, for large $n$ and neglecting terms $O(1/n^2)$, transformation (4) is (by the definition of partial derivatives) equal to the transformation $$\psi(t,x) \mapsto \left(1-\frac{\zeta}{n}x\partial_t-\frac{\zeta}{n}t\partial_x\right) \psi(t,x) \tag{5}$$

Doing transformation (5) $n$ times we get $$\begin{align} \psi(t,x) &\mapsto \left(1-\frac{\zeta}{n}x\partial_t-\frac{\zeta}{n}t\partial_x\right)^n \psi(t,x) \\ &\underset{n\to\infty}{=} e^{-\zeta(x\partial_t+t\partial_x)}\psi(t,x) \tag{6} \end{align}$$ So we finally got the right side of (2), q.e.d.
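As a numerical cross-check of (2) (my addition; the polynomial test function, rapidity, and sample point are arbitrary choices), one can truncate the exponential series of $-\zeta(x\partial_t+t\partial_x)$ and compare with the explicitly boosted function:

```python
# Check  psi(gamma t - gamma beta x, gamma x - gamma beta t)
#      = exp(-zeta (x d/dt + t d/dx)) psi(t, x)   for a polynomial test function.
import sympy as sp

t, x = sp.symbols('t x')
zeta = sp.Rational(1, 2)                          # sample rapidity
psi = t**2 * x + x                                # arbitrary polynomial test function

L = lambda g: x*sp.diff(g, t) + t*sp.diff(g, x)   # generator x ∂_t + t ∂_x

# Truncated series  sum_n (-zeta)^n / n! * L^n psi
term, series = psi, psi
for n in range(1, 40):
    term = L(term)
    series += (-zeta)**n / sp.factorial(n) * term

# Left side of (2): psi with explicitly boosted arguments (simultaneous substitution)
boosted = psi.xreplace({t: t*sp.cosh(zeta) - x*sp.sinh(zeta),
                        x: x*sp.cosh(zeta) - t*sp.sinh(zeta)})

pt = {t: sp.Rational(3, 10), x: sp.Rational(-1, 5)}
err = abs((series.subs(pt) - boosted.subs(pt)).evalf())
print(err < 1e-12)  # True
```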

Disclaimer: Some steps above may lack mathematical rigor, but for a physicist (like me) it is good enough.

  • This looks reasonable. I added an Update section to my question where you can see that a proof of equivalence to the other form means much work. – Harald Rieder Jul 21 '23 at 09:20
  • @HaraldRieder I have added a proof (elaborating the ideas from Cosmas Zachos' answer). – Thomas Fritsch Jul 22 '23 at 12:19
  • Thank you, Thomas! $\psi(t,x)=\delta(t'-t)\delta(x'-x)$, yes of course, then it becomes easy. So it just was a stupid idea trying it via Fourier transformations. – Harald Rieder Jul 24 '23 at 18:40

I might expedite/explain the sound answer of @Thomas Fritsch, which the comment format makes all but impossible. With due apologies to the OP, an integral kernel (propagator) which is a delta function is pointless.

One supplants it by vector field propagation as analyzed in the linked answer of mine, $$ \psi(t,x)\mapsto \psi(\cosh \zeta ~~t -\sinh \zeta ~~x, \cosh \zeta ~~x -\sinh \zeta ~~t)\\ =e^{-\zeta (x\partial_t + t\partial_x)} \psi(t,x). $$ The exponential is then merely the standard infinite repetition limit of the infinitesimal boost (commuting with itself), $$ e^{-\zeta (x\partial_t + t\partial_x)}= \lim_{n\to \infty} \left (1-\frac{\zeta}{n} (x\partial_t + t\partial_x) \right )^n .$$
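The finite-dimensional analogue of this infinite repetition limit is easy to check numerically (a sketch I am adding; $\zeta$ and $n$ are arbitrary sample values):

```python
# Check  (1 - (zeta/n) K)^n  ->  exp(-zeta K)  for the 2x2 boost generator.
import numpy as np

zeta, n = 0.8, 200_000
K = np.array([[0.0, 1.0], [1.0, 0.0]])        # mixes t and x, as x d/dt + t d/dx does

step = np.eye(2) - (zeta / n) * K             # one infinitesimal boost
approx = np.linalg.matrix_power(step, n)

exact = np.array([[np.cosh(zeta), -np.sinh(zeta)],
                  [-np.sinh(zeta),  np.cosh(zeta)]])
print(np.allclose(approx, exact, atol=1e-4))  # True
```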

Now, if you wish to replace your arbitrary test scalar by an integral of a delta function, that's an option, mysterious as it might appear to an outsider... The connection of rotations to shifts is always tortured, and I don't fully grasp the implicit motivation here.

Cosmas Zachos
  • Motivation? Here it comes. Short story: I once learned that abstract operators $\hat{O}$ get matrix elements when a basis is chosen. I was wondering why one finds $-\zeta (x\partial_t + t\partial_x)$ everywhere, but nowhere a matrix (here: a function of multiple variables). – Harald Rieder Jul 24 '23 at 18:53
  • Long story: the "with delta distribution" matrix representation can conveniently be applied to a given function $\psi(t,x)$, whereby the entanglement between the $t$- and $x$-spaces changes. And this I need for an idea that observers observe changes of entanglement between $t$- and $x$-space. The whole story is here: https://drive.google.com/file/d/1QBa3pBlB3BKuZXR7mlOHTG96iTsc2-RV/view?usp=drive_link – Harald Rieder Jul 24 '23 at 18:59