
So, I'm seeking to prove the identity below, for $A,B$ vector fields in $\mathbb{R}^3$:

$$A \times (\nabla \times B) + B \times (\nabla \times A) = \nabla (A \cdot B) - (A \cdot \nabla)B - (B \cdot \nabla)A \newcommand{\vp}{\varphi} \newcommand{\on}{\operatorname} \newcommand{\ve}{\varepsilon} \newcommand{\N}{\nabla} \newcommand{\ip}[2]{\left \langle #1, #2 \right \rangle} \newcommand{\para}[1]{\left( #1 \right)} \newcommand{\d}{\delta} \newcommand{\e}{\mathbf{\hat{e}}} \newcommand{\p}{\partial}$$

Specifically, I would like to do so with Einstein's summation notation and the Levi-Civita symbol, since I am new to those topics.


Known Identities: Let the following hold:

  • $\{\e_i\}_{i=1}^3$ is the standard Cartesian basis of $\mathbb{R}^3$
  • $U := (U_i)_{i=1}^3, V := (V_i)_{i=1}^3$ are vector fields
  • $\varphi$ is a scalar field
  • $\nabla := (\partial_i)_{i=1}^3$ as usual
  • $\delta_{i,j}$ is the usual Kronecker $\d$ ($0$ if $i \ne j$ and $1$ otherwise)
  • $\ve_{i,j,k}$ is the Levi-Civita symbol (which equals $\on{sign}(ijk)$ as a permutation in $S_3$, and $0$ if the indices are not all distinct)

So, some identities I'm already familiar with, or am given (in the Einstein convention):

$$\begin{alignat*}{99} \on{grad}(\vp) &=\,& \N \vp &= \p_i \vp \; \e_i \\ \on{div}\para{V } &=\,& \N \cdot V &= \p_i V_i \\ \on{curl} \para{ V } &=\,& \N \times V &= \ve_{i,j,k} \cdot \p_j V_k \; \e_i \\ \ip U V &=\,& U \cdot V &= U_i V_i \\ &\,& U \times V &= \ve_{i,j,k} U_j V_k \; \e_i \\ &\,& \para{ U \cdot \N } V &= U_i (\p_i V_j) \e_j \end{alignat*}$$

Edit: Fixed a minor issue where I had the dot product running over two indices; clearly it should just be the one, not sure how I ended up with that. Fixed its use in the subsequent calculation.
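(As a sanity check while learning the convention: `numpy.einsum` implements Einstein summation directly, so the identities above can be tested numerically. A minimal sketch, where the array `eps` and the random test vectors are my own setup rather than anything from the book:)

```python
import numpy as np

# The Levi-Civita symbol as an explicit 3x3x3 array:
# +1 on even permutations, -1 on odd ones, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutation
    eps[i, k, j] = -1.0  # odd permutation (one transposition)

rng = np.random.default_rng(0)
U, V = rng.random(3), rng.random(3)

# U . V = U_i V_i : the repeated index i is summed away, leaving a scalar.
assert np.isclose(np.einsum('i,i->', U, V), U @ V)

# (U x V)_i = eps_{ijk} U_j V_k : j and k are summed, i stays free.
assert np.allclose(np.einsum('ijk,j,k->i', eps, U, V), np.cross(U, V))
```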


Work So Far: So, to this end, I've derived the following:

$$\begin{align*} A \times \para{ \N \times B } &= A \times \Big( \ve_{i,j,k} \p_j B_k \e_i \Big) \tag{curl identity} \\ &= A \times \underbrace{\Big( \ve_{k,j,i} \p_j B_i \e_k \Big)}_{\substack{\text{$k$th component} \\ \text{of $\N \times B$}}} \tag{swap $k$ and $i$} \\ &= \ve_{i,j,k} \ve_{k,j,i} A_j \p_j B_i \e_i \tag{cross product identity} \\ &= -\ve_{i,j,k}^2 A_j \p_j B_i \e_i \tag{$\ve_{i,j,k} = -\ve_{k,j,i}$} \\ B \times \para{ \N \times A } &=-\ve_{i,j,k}^2 B_j \p_j A_i \e_i \tag{same reasoning} \\ \N \para{ A \cdot B } &= \N (A_j B_j) \tag{take the dot product} \\ &= \p_i (A_j B_j) \e_i \tag{gradient identity} \\ &= B_j \p_i A_j \e_i + A_j \p_i B_j \e_i \tag{product rule} \\ \para{ A \cdot \N } B &= A_i \p_i B_j \e_j \tag{given identity} \\ &= A_j \p_j B_i \e_i \tag{swap $i$ and $j$} \\ \para{ B \cdot \N} A &= B_i \p_i A_j \e_j \tag{given identity} \\ &= B_j \p_j A_i \e_i \tag{swap $i$ and $j$} \end{align*}$$ Now observe, $$\begin{align*} &\N \para{ A \cdot B } - \para{ A \cdot \N } B - \para{ B \cdot \N} A \\ &= \Big( B_j \p_i A_j + A_j \p_i B_j - A_i \p_j B_i - B_j \p_j A_i \Big) \e_i \\ &A \times \para{ \N \times B } + B \times \para{ \N \times A } \\ &= -\ve_{i,j,k}^2\Big( A_j \p_j B_i + B_j \p_j A_i \Big) \e_i \end{align*}$$
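(Aside: before untangling the index manipulations, the target identity itself can be checked symbolically on generic polynomial fields, so at least the goal is sound. A sketch with `sympy`; the helpers `curl` and `directional` are hypothetical names of my own:)

```python
import sympy as sp

x = sp.symbols('x0:3')
# Two arbitrary polynomial vector fields to test against.
A = sp.Matrix([x[0]*x[1], x[1]**2, x[2]*x[0]])
B = sp.Matrix([x[2], x[0]*x[2], x[1]])

def curl(F):
    """Componentwise curl of a 3-vector of expressions."""
    return sp.Matrix([
        sp.diff(F[2], x[1]) - sp.diff(F[1], x[2]),
        sp.diff(F[0], x[2]) - sp.diff(F[2], x[0]),
        sp.diff(F[1], x[0]) - sp.diff(F[0], x[1]),
    ])

def directional(U, V):
    """(U . grad) V, i.e. U_i (d_i V_j) e_j."""
    return sp.Matrix([sum(U[i]*sp.diff(V[j], x[i]) for i in range(3))
                      for j in range(3)])

lhs = A.cross(curl(B)) + B.cross(curl(A))
rhs = (sp.Matrix([sp.diff(A.dot(B), xi) for xi in x])
       - directional(A, B) - directional(B, A))
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```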


From here, I'm not entirely sure how to adequately handle everything. I have some questions:

  • Firstly, and most naturally, are my derivations so far actually correct to begin with? (My next couple of questions address specific doubts.) What would be the natural way to proceed from here, beyond sheer brute force?
  • Secondly, in my very final line with the Levi-Civita symbol above, I have a $k$ floating around, but not one in the previous line. This in particular hints to me at having done something wrong. If not wrong, how does one mitigate this?
  • Thirdly, in finding $\N(A \cdot B)$, I'm not really sure how to use the identity for the gradient that I am given. The identity $\N \vp = \p_i(\vp) \e_i$ hints at a summation over all possible coordinates and introduces one in $i$. Was it right, for $\N (A_j B_j)$, to still have the new summation be done in $i$? Or should I do it in $k$?

I also have a couple of questions around manipulating the Levi-Civita symbol in the midst of Einstein summation. They might not be super-related/necessary for the above arguments, but it'd help clear some things up, I hope.

  • (Answered, see addendum below.) Fourthly, my textbook gave an identity for the product of Levi-Civita symbols, below, but I can't make sense of it. It reads off to me as being $\ve_{i,j,k} \ve_{k,i,m} = \d_{i,i} \d_{j,m} - \d_{j,i} \d_{i,m}$. I don't totally understand this though. Since $i,k$ are repeated here, shouldn't the summation be over $i,k \in \{1,2,3\}$, meaning $i$ in particular should not show up on the right-hand side? It certainly doesn't mirror other identities I've seen, and the use of an unsimplified $\d_{i,i} \equiv 1$ is somewhat suspect.

[image: scan of the textbook's identity for the product of two Levi-Civita symbols]

(Addendum: I found a different scan of the book, which was a bit cleaner. The identity stated was $\ve_{i,j,k} \cdot \ve_{k,\ell,m} = \d_{i,\ell} \cdot \d_{j,m} - \d_{j,\ell} \cdot \d_{i,m}$.)

  • I have seen others tout the similar identity $\ve_{i,j,k} \cdot \ve_{\ell,m,k} = \d_{i,\ell} \d_{j,m} - \d_{i,m} \d_{j,\ell}$. Taking this identity at face value, if I understand correctly, since only the third index $k$ is repeated, we are implicitly summing over $k \in \{1,2,3\}$, right? i.e. the claim is equivalent to:
    $$ \forall i,j,m,\ell \in \{1,2,3\} \text{ we have } \sum_{k=1}^3 \ve_{i,j,k} \cdot \ve_{\ell,m,k} = \d_{i,\ell} \d_{j,m} - \d_{i,m} \d_{j,\ell} $$ (I guess we don't sum over $k$ on the right-hand side?) Also, if I wanted to manipulate this so that, say, the second index was fixed, how would I do that? It's clearly not as simple as replacing $k$ with $j$ and vice versa, or the same with $m$; I keep getting things that look "obviously wrong". (A numerical check of both contraction identities appears right after this list.)
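(For concreteness, here is how I'd verify both contraction identities, and the $\d_{i,i}$ point, numerically; the sketch and its variable names are my own.)

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

delta = np.eye(3)

# eps_{ijk} eps_{klm} = delta_{il} delta_{jm} - delta_{jl} delta_{im}:
# only the repeated k is summed; i, j, l, m stay free on both sides.
lhs = np.einsum('ijk,klm->ijlm', eps, eps)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('jl,im->ijlm', delta, delta))
assert np.allclose(lhs, rhs)

# The k-last version eps_{ijk} eps_{lmk} is the same identity relabeled.
assert np.allclose(np.einsum('ijk,lmk->ijlm', eps, eps), rhs)

# Under the summation convention delta_{ii} is a sum: it equals 3, not 1.
assert np.einsum('ii->', delta) == 3.0
```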

Sorry if some of these are incredibly basic; I'm very new to a lot of this approach to vector calculus (and wasn't good at it to begin with), and scouring MSE and Google for the exact sort of answers I need has not proved useful. Thanks for any help you can provide.

PrincessEev
  • The first issue is you have the square of $\epsilon$ in your double cross product. To fix that, let's write $(A \times D)_i =\epsilon_{ijk}A_j D_k$ where on the LHS we have the $i$th component of the cross product, so that we don't have to write the unit vectors $\mathbf{e}$ anywhere. Now let $D=\nabla \times B$ so that $D_k=\epsilon_{klm}\partial_l B_m$, then substitute back: the Levi-Civita symbols have different indices. Generally, when you 'square' something you must use different indices in each copy: $(A \cdot B)^2=(A_i B_i)(A_j B_j)$ – Sal Aug 28 '22 at 13:19
  • Thirdly, using $( \nabla (A \cdot B))_i = \partial_i (A \cdot B)$ for the $i$th component you just use the product rule for differentiation: $\partial_i (A \cdot B)= \partial_i (A_j B_j)=A_j \partial_i B_j + B_j \partial_i A_j$ – Sal Aug 28 '22 at 13:25
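(A numerical illustration of Sal's fresh-indices point, with plain vectors standing in for $\nabla$; the sketch is mine, not Sal's.)

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(1)
a, c, d = rng.random(3), rng.random(3), rng.random(3)

# (a x (c x d))_i = eps_{ijk} a_j eps_{klm} c_l d_m : each copy of eps
# carries its own indices (j,k vs l,m), exactly as in the comment.
lhs = np.einsum('ijk,j,klm,l,m->i', eps, a, eps, c, d)
assert np.allclose(lhs, np.cross(a, np.cross(c, d)))
assert np.allclose(lhs, c * (a @ d) - d * (a @ c))  # the BAC-CAB form
```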

2 Answers


For (my) convenience we will use the following notation: $$ A\times B = e_i A_j B_k \cdot [ijk]$$ where $\{e_i\}$ is the canonical basis and $[ijk]$ is the determinant ${\rm det}\ [e_i\,e_j\,e_k]$.

Hence \begin{align*}A\times(\nabla\times B) &= A\times \bigg(e_i \frac{\partial }{\partial x_j} B_k [ijk]\bigg)\\& =e_s A_t \bigg(\frac{\partial }{\partial x_j} B_k [ijk] \bigg) [sti] \end{align*}

Here $s\neq t$ (otherwise $[sti]=0$), and for fixed $i$ the product $[ijk][sti]$ is nonzero in exactly two cases: $$ j=s,\ k=t\ {\rm or}\ j=t,\ k=s $$

Hence we have $[ijk] [sti] =1 $ or $-1$, so that \begin{align*}A\times(\nabla\times B) &=\sum_{s\neq t} e_s \bigg(A_t \frac{\partial }{\partial x_s} B_t-A_t \frac{\partial }{\partial x_t} B_s \bigg) \\&=\sum_{s,t} e_s\bigg(A_t \frac{\partial }{\partial x_s} B_t-A_t \frac{\partial }{\partial x_t} B_s \bigg) \\&= \sum_{s,t} e_s \bigg(A_t \frac{\partial }{\partial x_s} B_t\bigg)- (A\cdot\nabla )B \end{align*} (each $s=t$ term vanishes, so the sum may be extended over all $s,t$). Adding the analogous expansion of $B\times(\nabla\times A)$ and noting that $\sum_{s,t} e_s\big(A_t \partial_s B_t + B_t \partial_s A_t\big) = \nabla(A\cdot B)$ completes the proof.
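(A symbolic spot-check of the component formula used above, $\big(A\times(\nabla\times B)\big)_s = A_t\,\partial_s B_t - A_t\,\partial_t B_s$; this `sympy` sketch is my own addition, not part of the answer.)

```python
import sympy as sp

x = sp.symbols('x0:3')
A = sp.Matrix([x[1]*x[2], x[0]**2, x[0] + x[2]])
B = sp.Matrix([x[0]*x[2], x[1]*x[2], x[0]*x[1]])

# curl B, written out componentwise.
curlB = sp.Matrix([
    sp.diff(B[2], x[1]) - sp.diff(B[1], x[2]),
    sp.diff(B[0], x[2]) - sp.diff(B[2], x[0]),
    sp.diff(B[1], x[0]) - sp.diff(B[0], x[1]),
])

lhs = A.cross(curlB)
# The claimed component formula: sum over t for each free index s.
rhs = sp.Matrix([
    sum(A[t]*sp.diff(B[t], x[s]) - A[t]*sp.diff(B[s], x[t]) for t in range(3))
    for s in range(3)
])
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```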

HK Lee
  • So as I'm trying to dissect this, I think one of the key things you did was, on the second application of a cross product, introduce a new set of indices. At least written the way I prefer for now, that would boil down to

    $$U \times V = \varepsilon_{i,j,k} U_j V_k e_i = \varepsilon_{k,j,i} U_j V_i e_k$$

    where the latter equality follows from the first just by swapping $i$ and $k$. When we get to the point in the argument where we'd have

    $$A \times (\nabla \times B) = A \times \Big( \varepsilon_{i,j,k} \partial_j B_k e_i \Big)$$

    we're using the first identity. [cont]

    – PrincessEev Aug 28 '22 at 20:52
  • But since the parenthetical is, at face value, the $i$th component, we use the permuted identity in the next step. This time, though, we don't use $j,k$ again, but instead new indices $s,t$ respectively. That would mean that

    $$A \times \Big( \varepsilon_{i,j,k} \partial_j B_k e_i \Big) = \varepsilon_{i,j,k} \varepsilon_{s,t,i} A_t \partial_j B_k e_s$$ [cont]

    – PrincessEev Aug 28 '22 at 20:52
  • Then from here, we break down the behavior of the product of the Levi-Civita symbols by considering cases and noting that (WLOG) $s \ne t$. This allows us to split up our sum in a way conducive to making the other terms appear, and to conclude.

    Am I more or less following everything correctly? Is there a reason why we have to introduce a new set of indices when using subsequent cross products?

    – PrincessEev Aug 28 '22 at 20:52
  • Yes, you understand my posting correctly. And I can see that $[ijk]$ can be replaced by the notation $\epsilon_{ijk}$. – HK Lee Aug 28 '22 at 22:13

Too long for a comment:

Since you want to prove this equation using the Einstein summation convention and the Levi-Civita symbol, I do not see a point in answering that list of questions. Just a few basic hints:

  • Write all expressions that contain a cross product as $U\times V=\varepsilon_{ijk}U^jV^k$, regardless of whether $U$ and $V$ are ordinary vectors or $\nabla$.

  • This may answer one of your questions: The $i$ that I have "floating around" indicates that we contract only the indices $j$ and $k$ and that the result is a vector (indexed by $i$) - as it should. A clumsier notation for the same thing is $U\times V=\varepsilon_{ijk}U^jV^k\hat{\boldsymbol{e}}_i$. We do not need the $\hat{\boldsymbol{e}}_i$.

  • Write all expressions using a dot product as $U\cdot V=U^jV^j$.

  • This should be all you need to prove this identity; a short `einsum` sketch follows below.
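(A minimal rendering of these hints with `numpy.einsum`, which implements exactly this convention; the sketch is my own illustration, not part of the answer.)

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

U, V = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])

cross = np.einsum('ijk,j,k->i', eps, U, V)  # one free index i: a vector
dot = np.einsum('j,j->', U, V)              # no free index: a scalar
assert np.allclose(cross, np.cross(U, V)) and np.isclose(dot, U @ V)
```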

Kurt G.