4

In my special relativity course the lecture notes say that in four dimensions a rank-2 antisymmetric tensor has six independent non-zero elements, which can always be written as the components of two 3-dimensional vectors, one polar and one axial.

For instance, in the angular momentum tensor $L^{ab} = X^aP^b -X^bP^a$ the top row is $L^{0i}=ct\,p^i-(E/c)\,x^i$, which is obviously polar (since $\vec{x}$ and $\vec{p}$ are polar vectors), while the space-space block contains the usual 3D angular momentum components, which represent the axial angular momentum vector $\vec{L}$. (The first column is just $-1$ times the polar vector, by the antisymmetry of the tensor.)

The notes only explain this by saying that 'these components transform in identical ways to polar and axial vectors'. I would like to know how to show this, possibly by starting from the coordinate transformation rule for a 4D rank-2 contravariant tensor and showing that it acts on these components exactly as the vector transformation rules would.

Specifically, the notes say 'it works because those elements do transform as a vector under rotations'. I'm also confused about why rotations specifically are singled out here.

Qmechanic
  • 201,751
Alex Gower
  • 2,574

3 Answers

4
  1. OP is asking for the branching rules for $$\begin{align}H~:=~&O(3)\cr ~\cong~&\begin{pmatrix} O(3)&\cr &1\end{pmatrix}_{4\times 4}~~\cr ~\subseteq~& O(3,1)~=:~G.\end{align}\tag{1}$$

  2. The 4-vector representation decomposes as $${\bf 4}~\cong~\underbrace{\bf 3}_{\text{vector}}\oplus \underbrace{\bf 1}_{\text{scalar}}.\tag{2}$$

  3. Therefore the tensor product representation becomes $$\begin{align} {\bf 16} ~\cong~& {\bf 4}\otimes{\bf 4}\cr ~\cong~&({\bf 3}\oplus {\bf 1})\otimes({\bf 3}\oplus {\bf 1}) \cr ~\cong~&{\bf 3}\otimes{\bf 3}\oplus \overbrace{\underbrace{{\bf 3}\otimes{\bf 1} \oplus{\bf 1}\otimes{\bf 3}}_{~\cong~{\bf 3}_S ~ \oplus~ {\bf 3}_A}}^{\text{off-diagonal blocks}} \oplus {\bf 1}\otimes{\bf 1}.\tag{3} \end{align}$$ Here ${\bf 3}_S$ and ${\bf 3}_A$ denote the symmetric and antisymmetric combination of the off-diagonal blocks, respectively.

  4. The symmetric part of the tensor product ${\bf 4}\otimes{\bf 4}$ reads $${\bf 10}~\cong~ {\bf 4}\odot{\bf 4} ~\cong~\underbrace{{\bf 3}\odot {\bf 3}}_{~\cong~{\bf 5} ~ \oplus~ {\bf 1}}\oplus {\bf 3}_S \oplus {\bf 1},\tag{4} $$ while the antisymmetric part is

    $$ {\bf 6}~\cong~{\bf 4}\wedge{\bf 4} ~\cong~\underbrace{{\bf 3}\wedge {\bf 3}}_{\text{axial vector}}\oplus \underbrace{{\bf 3}_A}_{\text{vector}} ,\tag{5} $$

    cf. OP's title question. In eq. (5) we used Hodge duality in 3D, cf. e.g. this Phys.SE post.
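The block structure behind eq. (5) can be checked numerically: embedding a $3\times 3$ rotation $\mathcal R$ as $\mathrm{diag}(\mathcal R,1)\subseteq O(3,1)$, with the time direction in the fourth slot as in eq. (1), the spatial $3\times 3$ block and the time column of an antisymmetric $4\times 4$ tensor transform separately, without mixing. A minimal NumPy sketch (variable names are mine, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# a random proper 3x3 rotation: QR factor with det forced to +1
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

# embed as the block-diagonal element diag(Q, 1) of O(3,1), as in eq. (1)
Lam = np.eye(4)
Lam[:3, :3] = Q

# a random antisymmetric 4x4 tensor: 6 independent components
A = rng.normal(size=(4, 4))
F = A - A.T

# contravariant tensor rule: F'^{ab} = Lam^a_c Lam^b_d F^{cd}
F_new = Lam @ F @ Lam.T

# the spatial block (the 3 wedge 3, axial part) and the time column
# (the 3_A, polar part) transform separately:
assert np.allclose(F_new[:3, :3], Q @ F[:3, :3] @ Q.T)
assert np.allclose(F_new[:3, 3], Q @ F[:3, 3])
```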

Qmechanic
  • 201,751
3

Qmechanic's answer is beautiful. I'll clarify one non-obvious detail, namely why the $\textbf{3}\wedge \textbf{3}$ transforms as a vector under the identity component of the rotation group. (It doesn't transform as a vector under reflections, which is why we call it an axial vector.)

Let $F_{ab}$ be an antisymmetric tensor in 4d spacetime, and use $0$ for the "time" index and $\{1,2,3\}$ for the "space" indices. When Lorentz transformations are restricted to rotations, the components $F_{jk}$ with $j,k\in\{1,2,3\}$ do not mix with the component $F_{0k}=-F_{k0}$, so we can consider only the components $F_{jk}$. These are the components of the $\textbf{3}\wedge \textbf{3}$ in Qmechanic's answer.


For the rest of this answer, all indices (including $a,b,c$) are restricted to the spatial values $\{1,2,3\}$.

The antisymmetry condition $F_{jk}=-F_{kj}$ implies that $F_{jk}$ has only $3$ independent components, which is the correct count for a vector, but something doesn't seem quite right: under rotations, the transformation rule for a vector uses only one rotation matrix, while the rule for $F_{jk}$ uses two rotation matrices, one for each index. How can these possibly be equivalent? They are not equivalent for rotations with determinant $-1$, which is why we call it an axial vector, but they are equivalent for rotations with determinant $+1$, and the purpose of this answer is to explain why.

Let $R_{jk}$ be the components of a rotation matrix whose determinant is $+1$. This condition means $$ \sum_{j,k,m}\epsilon_{jkm}R_{1j}R_{2k}R_{3m} = 1, \tag{1} $$ which can also be written $$ \epsilon_{abc} = \sum_{j,k,m}\epsilon_{jkm}R_{aj}R_{bk}R_{cm}. \tag{2} $$ The fact that $R$ is a rotation matrix also implies $$ \sum_c R_{cm}R_{cn}=\delta_{mn}, \tag{3} $$ which is the component version of the matrix equation $R^TR=1$. Contract (2) with $R_{cn}$ and then use (3) to get $$ \sum_c\epsilon_{abc}R_{cn} = \sum_{j,k}\epsilon_{jkn}R_{aj}R_{bk}. \tag{4} $$ Equation (4) is the key.

The effect of a rotation on $F_{jk}$ is $$ F_{jk}\to \sum_{a,b}R_{aj}R_{bk}F_{ab}, \tag{5} $$ with one rotation matrix for each index. Since $F_{ab}$ is antisymmetric, we can represent it using only three components like this: $$ v_m\equiv\sum_{j,k}\epsilon_{jkm}F_{jk}. \tag{6} $$ The question is, how does $v$ transform under a rotation whose determinant is $+1$? To answer this, use (5) to get $$ v_m\to v_m'=\sum_{j,k}\epsilon_{jkm}\sum_{a,b}R_{aj}R_{bk}F_{ab} \tag{7} $$ and then use (4) to get $$ v_m' =\sum_{a,b,c}\epsilon_{abc}R_{cm}F_{ab} =\sum_c R_{cm} v_c. \tag{8} $$ This shows that $v$ transforms like a vector under rotations whose determinant is $+1$.

For rotations whose determinant is $-1$ (reflections), the right-hand side of equation (1) is replaced by $-1$, which introduces a minus sign in equation (4), which ends up putting a minus sign in equation (8). That's why we call $v$ an axial vector instead of just a vector.
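The chain (5)-(8) is easy to verify numerically. The following NumPy sketch (names are mine, not part of the answer) builds a random proper rotation, applies the two-matrix rule (5), and checks that the dual vector (6) obeys the one-matrix rule (8), with the extra minus sign appearing for a reflection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Levi-Civita symbol eps[j, k, m] (0-based indices)
eps = np.zeros((3, 3, 3))
for j, k, m in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, m] = 1.0
for j, k, m in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[j, k, m] = -1.0

def dual_vector(F):
    # v_m = sum_{j,k} eps_{jkm} F_{jk}, as in eq. (6)
    return np.einsum('jkm,jk->m', eps, F)

# a random proper rotation: QR factor with det forced to +1
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
R = Q

# a random antisymmetric F and its dual vector
A = rng.normal(size=(3, 3))
F = A - A.T
v = dual_vector(F)

# two-matrix rule, eq. (5): F'_{jk} = sum_{a,b} R_{aj} R_{bk} F_{ab}
F_new = np.einsum('aj,bk,ab->jk', R, R, F)

# one-matrix rule, eq. (8): v'_m = sum_c R_{cm} v_c
assert np.allclose(dual_vector(F_new), np.einsum('cm,c->m', R, v))

# a reflection (det = -1) produces the extra minus sign of an axial vector
P = np.diag([1.0, 1.0, -1.0])
F_ref = np.einsum('aj,bk,ab->jk', P, P, F)
assert np.allclose(dual_vector(F_ref), -np.einsum('cm,c->m', P, v))
```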


More generally, in $N$-dimensional space:

  • Pseudovector and axial vector are synonymous with "completely antisymmetric tensor of rank $N-1$." Intuitively, an ordinary (polar) vector has only one index, and a pseudovector/axial vector is missing only one index. As a result, they both transform the same way under rotations, but only under rotations. They transform differently in other respects, including reflections and dilations.

  • Under an arbitrary coordinate transform, a (polar) vector transforms as $v_{j}\to \Lambda^a_j v_{a}$.

  • Under an arbitrary coordinate transform, a rank-2 tensor transforms as $F_{jk}\to \Lambda^a_j\Lambda^b_k F_{ab}$. (The components of $\Lambda$ are the partial derivatives of one coordinate system's coordinates with respect to the other's. Sums over repeated indices are implied.)

  • If $N\neq 3$, then angular momentum is an antisymmetric rank-2 tensor (also called a bivector), not an axial vector. A bivector has 2 indices, but an axial vector has $N-1$ indices.

  • To illustrate the different transformation laws for (polar) vectors and bivectors, consider a dilation (also called dilatation) that multiplies the spatial coordinates by a constant factor $\kappa$. Then each factor of $\Lambda$ contributes one factor of $\kappa$, so $F_{jk}\to\kappa^2 F_{jk}$, but a vector goes like $v_j\to \kappa v_j$.

Axial vectors and bivectors are the same in 3d space, but they are not really vectors at all, even though they both happen to have 3 components in 3d space. If we only consider rotations (with determinant $+1$), then they might as well be vectors, but even that's only true in 3d space, not in spaces of other dimensions.
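The dilation example in the last bullet is also easy to check numerically. In this NumPy sketch (illustrative only), a dilation $\Lambda=\kappa\,\mathrm{I}$ of $N=4$-dimensional space scales a vector by one factor of $\kappa$ and a bivector by two:

```python
import numpy as np

rng = np.random.default_rng(2)
N, kappa = 4, 2.5

Lam = kappa * np.eye(N)      # dilation of N-dimensional space

v = rng.normal(size=N)       # a (polar) vector
A = rng.normal(size=(N, N))
F = A - A.T                  # a bivector (antisymmetric rank-2 tensor)

# one factor of Lam per index
v_new = Lam @ v
F_new = Lam @ F @ Lam.T

assert np.allclose(v_new, kappa * v)        # vector: one factor of kappa
assert np.allclose(F_new, kappa**2 * F)     # bivector: two factors
```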

0

Reference: my answer to "Vector product in a 4-dimensional Minkowski spacetime".

$=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=$

In my answer referenced above, starting from two 4-vectors $\mathbf{X}\boldsymbol{=}\left(\mathbf{x},x_4\right)$ and $\mathbf{P}\boldsymbol{=}\left(\mathbf{p},p_4\right)$ (see also equations (15a) and (15b) therein), we defined their outer product to be the antisymmetric $4\times 4$ matrix \begin{equation} \left[\,\mathbf{H}\,\right] \boldsymbol{=}\left[\,\mathbf{X}\boldsymbol{\times}\mathbf{P}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \begin{array}{ccc|c} \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm h_3 & \boldsymbol{+}\mathrm h_2 & \boldsymbol{+}\mathrm g_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\mathrm h_3 & \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm h_1 & \boldsymbol{+}\mathrm g_2\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\mathrm h_2 & \boldsymbol{+}\mathrm h_1 & \hphantom{\boldsymbol{=}}0 & \boldsymbol{+}\mathrm g_3\vphantom{\dfrac{a}{b}}\\ \hline \boldsymbol{-}\mathrm g_1 & \boldsymbol{-}\mathrm g_2 & \boldsymbol{-}\mathrm g_3 & \hphantom{\boldsymbol{=}}0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \mathbf{h}\boldsymbol{\times} & & \mathbf{g} \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\mathbf{g}^{\mathsf{T}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{A-01}\label{A-01} \end{equation} where \begin{equation} \mathbf{h}\boldsymbol{=}\mathbf{x}\boldsymbol{\times}\mathbf{p}\,,\quad \mathbf{g}\boldsymbol{=}x_4\mathbf{p}\boldsymbol{-}p_4\mathbf{x} \tag{A-02}\label{A-02} \end{equation} For the details of this definition see equations (16)-(21) therein.

Moreover, based on this, from the space-time position and relativistic linear momentum of a particle respectively \begin{equation} \mathbf{X} \boldsymbol{=}\left(\mathbf{x}, ct\right) \qquad \mathbf{P} \boldsymbol{=}\left(\gamma m_{0}\mathbf{u}, \gamma m_{0} c\right) \tag{A-03}\label{A-03} \end{equation} we had defined as relativistic angular momentum the antisymmetric $4\times 4$ matrix \begin{equation} \left[\,\mathbf{H}\,\right] \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,\mathbf{x}\boldsymbol{\times}\mathbf{p}\,\right] & & \left(ct\mathbf{p}\boldsymbol{-}\gamma m_{0}c\mathbf{x}\vphantom{\tfrac{a}{b}}\right) \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\left(ct\,\mathbf{p}\boldsymbol{-}\gamma m_{0}c\,\mathbf{x}\vphantom{\tfrac{a}{b}}\right)^{\mathsf{T}} & & 0\vphantom{\dfrac{\tfrac{a}{b}}{b}} \end{array} \end{bmatrix} \tag{A-04}\label{A-04} \end{equation} the real 6-vector $\mathbf{H}$ being (as in the question) \begin{equation} \mathbf{H} \boldsymbol{=} \begin{bmatrix} \mathbf{h}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ \mathbf{g}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathbf{x}\boldsymbol{\times}\mathbf{p}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ ct\mathbf{p}\boldsymbol{-}\gamma m_{0}c\mathbf{x}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \tag{A-05}\label{A-05} \end{equation} It's interesting to see how the antisymmetric $4\times 4$ matrix $\left[\,\mathbf{H}\,\right]$ of equation \eqref{A-01} is transformed under a Lorentz boost \begin{equation} \mathrm L \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ &\mathrm I\boldsymbol{+}\dfrac{\gamma^2}{c^2\left(\gamma\boldsymbol{+}1\right)}\boldsymbol{\upsilon}\boldsymbol{\upsilon}^{\mathsf{T}} & & \boldsymbol{-}\gamma\dfrac{\boldsymbol{\upsilon}}{c}\vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & 
\boldsymbol{-}\gamma\dfrac{\boldsymbol{\upsilon}^{\mathsf{T}}}{c} & & \gamma\vphantom{\dfrac{\dfrac{a}{b}}{b}} \end{array} \end{bmatrix} \tag{A-06}\label{A-06} \end{equation} We have \begin{equation} \left[\,\mathbf{H}'\,\right] \boldsymbol{=}\left[\,\mathbf{X}'\boldsymbol{\times}\mathbf{P}'\,\right]\boldsymbol{=}\left[\,\left(\mathrm L\mathbf{X}\right)\boldsymbol{\times}\left(\mathrm L\mathbf{P}\right)\,\right]\boldsymbol{=}\mathrm L\left[\,\mathbf{X}\boldsymbol{\times}\mathbf{P}\,\right]\mathrm L \boldsymbol{=}\mathrm L\left[\,\mathbf{H}\,\right]\mathrm L \tag{A-07}\label{A-07} \end{equation} hence \begin{equation} \left[\,\mathbf{H}'\,\right] \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \mathbf{h}'\boldsymbol{\times} & & \mathbf{g}' \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\mathbf{g}'^{\mathsf{T}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{A-08}\label{A-08} \end{equation} where \begin{align} \mathbf{h}' & \boldsymbol{=} \gamma\mathbf{h}\boldsymbol{-}\dfrac{\gamma^2}{c^2\left(\gamma\boldsymbol{+}1\right)}\left(\mathbf{h}\boldsymbol{\cdot}\boldsymbol{\upsilon}\right)\boldsymbol{\upsilon}\vphantom{A^{1/2}}\boldsymbol{-}\dfrac{\gamma}{c}\left(\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{g}\vphantom{A^2}\right) \tag{A-09a}\label{A-09a}\\ \mathbf{g}' & \boldsymbol{=}\gamma\mathbf{g}\boldsymbol{-}\dfrac{\gamma^2}{c^2\left(\gamma\boldsymbol{+}1\right)}\left(\mathbf{g}\boldsymbol{\cdot}\boldsymbol{\upsilon}\vphantom{A^2}\right)\boldsymbol{\upsilon}\boldsymbol{+}\dfrac{\gamma}{c}\left(\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{h}\vphantom{A^2}\right) \tag{A-09b}\label{A-09b} \end{align} If, by a similar way, we apply the Lorentz boost \eqref{A-06} to the antisymmetric matrix of the electromagnetic field \begin{equation} \mathcal{E\!\!\!\!E} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,c\,\mathbf{B}\,\right] & & 
\boldsymbol{+}\mathbf{E} \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\mathbf{E}^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} 0 & \boldsymbol{-}c\,B_3 & \boldsymbol{+}c\,B_2 & \boldsymbol{+}E_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}c\,B_3 & 0 & \boldsymbol{-}c\,B_1 & \boldsymbol{+}E_2 \vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}c\,B_2 & \boldsymbol{+}c\,B_1 & 0 & \boldsymbol{+}E_3\vphantom{\dfrac{a}{b}}\\ \hline \boldsymbol{-}E_1 & \boldsymbol{-}E_2 & \boldsymbol{-}E_3 & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{A-10}\label{A-10} \end{equation} see equations (28)-(31) in my referenced answer, then we have \begin{align} \mathbf{B}' & \boldsymbol{=} \gamma \mathbf{B}\boldsymbol{-}\dfrac{\gamma^2}{c^2\left(\gamma\boldsymbol{+}1\right)}\left(\mathbf{B}\boldsymbol{\cdot}\boldsymbol{\upsilon}\right)\boldsymbol{\upsilon}\vphantom{A^{1/2}}\boldsymbol{-}\dfrac{\gamma}{c^2}\left(\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{E}\vphantom{A^{1/2}}\right) \tag{A-11a}\label{A-11a}\\ \mathbf{E}' & \boldsymbol{=} \gamma\mathbf{E}\boldsymbol{-}\dfrac{\gamma^2}{c^2\left(\gamma\boldsymbol{+}1\right)}\left(\mathbf{E}\boldsymbol{\cdot}\boldsymbol{\upsilon}\vphantom{A^2}\right)\boldsymbol{\upsilon}\boldsymbol{+}\gamma\left(\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{B}\vphantom{A^2}\right) \tag{A-11b}\label{A-11b} \end{align} as we meet in many textbooks and answers in PSE.

Under the $4\times 4$ transformation \begin{equation} \mathrm R \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ &\hphantom{==}\mathcal R \hphantom{==}& & \hphantom{=}\mathbf{O}\hphantom{=}\vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \mathbf{O}^{\boldsymbol{\top}} & & 1\vphantom{\dfrac{\dfrac{a}{b}}{b}} \end{array} \end{bmatrix} \tag{A-12}\label{A-12} \end{equation}
where $\mathcal R$ is an orthonormal $3\times 3$ matrix \begin{equation} \mathcal R \mathcal R^{\mathsf{T}}\boldsymbol{=}\mathrm I_{3\times 3}\boldsymbol{=}\mathcal R^{\mathsf{T}}\mathcal R \,,\qquad \boldsymbol{\vert} \det(\mathcal R)\boldsymbol{\vert}\boldsymbol{=}1 \tag{A-13}\label{A-13} \end{equation}
the antisymmetric $4\times 4$ matrix $\left[\,\mathbf{H}\,\right]$ of equation \eqref{A-01} is transformed as follows \begin{equation} \left[\,\mathbf{H}'\,\right] \boldsymbol{=}\left[\,\mathbf{X}'\boldsymbol{\times}\mathbf{P}'\,\right]\boldsymbol{=}\left[\,\left(\mathrm R\mathbf{X}\right)\boldsymbol{\times}\left(\mathrm R\mathbf{P}\right)\,\right]\boldsymbol{=}\mathrm R\left[\,\mathbf{X}\boldsymbol{\times}\mathbf{P}\,\right]\mathrm R^{\mathsf{T}} \tag{A-14}\label{A-14} \end{equation} hence \begin{equation} \left[\,\mathbf{H}'\,\right] \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \mathbf{h}'\boldsymbol{\times} & & \mathbf{g}' \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\mathbf{g}'^{\mathsf{T}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{A-15}\label{A-15} \end{equation} where \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbf{h}'\boldsymbol{\times} & \boldsymbol{=} \mathcal R \left(\mathbf{h}\boldsymbol{\times}\right)\mathcal R^{\mathsf{T}}\boldsymbol{=}\mathcal R \left[\left(\mathbf{x}\boldsymbol{\times}\mathbf{p}\right)\boldsymbol{\times}\vphantom{\tfrac{a}{b}}\right]\mathcal R^{\mathsf{T}} \nonumber\\ & \boldsymbol{=}\left[\mathcal R\mathbf{x}\boldsymbol{\times}\mathcal R\mathbf{p}\vphantom{\tfrac{a}{b}}\right]\boldsymbol{\times}\stackrel{\eqref{A-18}}{\boldsymbol{=\!=\!=}}\det(\mathcal R)\cdot\left[\mathcal R \left(\mathbf{x}\boldsymbol{\times}\mathbf{p}\right)\vphantom{\tfrac{a}{b}}\right]\boldsymbol{\times}\boldsymbol{=}\det(\mathcal R)\cdot\mathcal R\mathbf{h}\boldsymbol{\times}\boldsymbol{\Longrightarrow} \nonumber\\ \mathbf{h}'& \boldsymbol{=}\det(\mathcal R)\cdot\mathcal R\mathbf{h} \tag{A-16a}\label{A-16a}\\ \mathbf{g}' & \boldsymbol{=}\mathcal R \mathbf{g} \tag{A-16b}\label{A-16b} \end{align}
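Equations (A-16a) and (A-16b), i.e. $\mathbf{h}$ axial and $\mathbf{g}$ polar, can be confirmed numerically. The following NumPy sketch (helper names are mine) builds $\left[\,\mathbf{H}\,\right]$ in the block form of \eqref{A-01}, applies \eqref{A-14} with $\mathrm R=\mathrm{diag}(\mathcal R,1)$, and checks both $\det(\mathcal R)=+1$ and $\det(\mathcal R)=-1$:

```python
import numpy as np

rng = np.random.default_rng(3)

def cross_matrix(h):
    # [h x], such that cross_matrix(h) @ x == np.cross(h, x)
    return np.array([[0.0, -h[2], h[1]],
                     [h[2], 0.0, -h[0]],
                     [-h[1], h[0], 0.0]])

def H_matrix(h, g):
    # the block form of eq. (A-01)
    H = np.zeros((4, 4))
    H[:3, :3] = cross_matrix(h)
    H[:3, 3] = g
    H[3, :3] = -g
    return H

h, g = rng.normal(size=3), rng.normal(size=3)

for det_sign in (+1, -1):
    # random orthogonal 3x3 matrix with det(Q) = det_sign
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) * det_sign < 0:
        Q[:, 0] *= -1

    R4 = np.eye(4)
    R4[:3, :3] = Q               # eq. (A-12)

    Hp = R4 @ H_matrix(h, g) @ R4.T        # eq. (A-14)

    # eqs. (A-16a), (A-16b): h picks up det(Q), g does not
    assert np.allclose(Hp, H_matrix(det_sign * (Q @ h), Q @ g))
```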

$=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=$

If $\:\mathbf{a},\:\mathbf{b} $ are complex $\:3$-vectors in $\:\mathbb{C}^{3}\:$ and $\:\mathcal M\:$ an invertible linear transformation in this space then \begin{equation} \mathcal M\mathbf{a} \boldsymbol{\times} \mathcal M\mathbf{b} \boldsymbol{=} \left[\;\det\left(\mathcal M\right)\cdot\left(\mathcal M^{-1}\right)^{\mathsf{T}}\; \right]\left(\mathbf{a} \boldsymbol{\times}\mathbf{b}\right) \tag{A-17}\label{A-17} \end{equation} If moreover $\:\mathcal M\:$ is a real orthonormal matrix then $\left(\mathcal M^{-1}\right)^{\mathsf{T}}\boldsymbol{=}\mathcal M$ and $\det\left(\mathcal M\right)\boldsymbol{=}\boldsymbol{\pm}1$ hence \begin{equation} \mathcal M\mathbf{a} \boldsymbol{\times} \mathcal M\mathbf{b} \boldsymbol{=} \det\left(\mathcal M\right)\cdot\mathcal M\left(\mathbf{a} \boldsymbol{\times}\mathbf{b}\right)\boldsymbol{=}\boldsymbol{\pm}\,\mathcal M\left(\mathbf{a} \boldsymbol{\times}\mathbf{b}\right) \tag{A-18}\label{A-18} \end{equation} For a proof of identity \eqref{A-17} see $\textbf{Section B}$ in my answer as user82794 (former diracpaul) here

How to get result $\boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3} =\boldsymbol{6}\boldsymbol{\oplus}\bar{\boldsymbol{3}}$ for SU(3) irreducible representations?
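Identity \eqref{A-17}, specialized to real matrices, is also easy to spot-check numerically with NumPy (a sketch, using a generic random $\mathcal M$, which is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(4)

# a random invertible 3x3 matrix (generic M; the assert guards invertibility)
M = rng.normal(size=(3, 3))
assert abs(np.linalg.det(M)) > 1e-12

a, b = rng.normal(size=3), rng.normal(size=3)

# identity (A-17):  (M a) x (M b) = det(M) * (M^{-1})^T (a x b)
lhs = np.cross(M @ a, M @ b)
rhs = np.linalg.det(M) * np.linalg.inv(M).T @ np.cross(a, b)
assert np.allclose(lhs, rhs)
```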

Frobenius
  • 15,613