
In the book Quantum Mechanics: A Modern Development by Leslie E. Ballentine, the Spectral Theorem is stated as follows (I quote):

To each self-adjoint operator $A$ there corresponds a unique family of projection operators, $E(\lambda)$, for real $\lambda$, with the properties:

If $\lambda_1 < \lambda_2$ then $E(\lambda_1)E(\lambda_2)=E(\lambda_2)E(\lambda_1)=E(\lambda_1)$

If $\epsilon>0$, then $E(\lambda+\epsilon)|\psi\rangle \rightarrow E(\lambda)|\psi\rangle$ as $\epsilon\rightarrow 0$

$E(\lambda)|\psi\rangle \rightarrow 0$ as $\lambda\rightarrow-\infty$

$E(\lambda)|\psi\rangle \rightarrow |\psi\rangle$ as $\lambda \rightarrow +\infty$

$\displaystyle \int_{-\infty}^{\infty}\lambda \, \mathrm dE(\lambda)=A$

However, if $A$ is a real symmetric matrix (so $A=A^T$), we can decompose it as $A=T\Lambda T^{-1}$, where the columns of $T$ are the eigenvectors of $A$ and $\Lambda$ is a diagonal matrix containing the eigenvalues of $A$. How does Ballentine's definition of the spectral theorem relate to this version for symmetric matrices? That is, how does each property in Ballentine's definition support (or rather generalize) the spectral theorem for symmetric matrices?
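The matrix version of the theorem can be checked numerically. The sketch below uses an assumed example matrix (not from the question) to diagonalize a real symmetric $A$ and verify $A = T\Lambda T^{-1}$; since the eigenvector matrix returned for a symmetric matrix is orthogonal, $T^{-1} = T^T$ as well.

```python
# Minimal sketch with an assumed 2x2 example: diagonalize a real
# symmetric matrix and verify A = T @ Lambda @ T^{-1} = T @ Lambda @ T^T.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric: A == A.T

eigvals, T = np.linalg.eigh(A)      # columns of T: orthonormal eigenvectors
Lam = np.diag(eigvals)

assert np.allclose(A, T @ Lam @ np.linalg.inv(T))
assert np.allclose(A, T @ Lam @ T.T)    # T is orthogonal, so T^{-1} = T^T
```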

1 Answer


Consider a finite-dimensional, complex Hilbert space $H$ and let $A$ denote a hermitian (self-adjoint) operator. For each eigenvalue $a\in\sigma(A)\subset \mathbb R$, let $P_a$ denote the (orthogonal) projection on the corresponding eigenspace, such that $$A=\sum\limits_{a\in \sigma(A)} a\, P_a \tag 1 \quad $$

and $$\sum\limits_{a\in \sigma (A)} P_a = \mathbb I \tag 2 \quad ,$$ with $P_a P_{a^\prime}=P_a\delta_{aa^\prime}$, which summarizes one version of the spectral theorem in finite dimensions. Now define $$E(\lambda):=\sum\limits_{\sigma(A)\ni a\leq \lambda} P_a = \sum\limits_{a \in \sigma(A) } \theta(\lambda-a)\, P_a \quad .\tag 3 $$

This gives the desired connection: By making use of the properties of the projection operators $P_a$, one can prove that the $E$ in $(3)$ is indeed a spectral family.
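The construction in eqs. $(1)$–$(3)$ can be sketched numerically. The example below (an assumed $3\times 3$ real symmetric matrix with a degenerate eigenvalue, not taken from the answer) builds the projectors $P_a$ from the eigenvectors, checks eqs. $(1)$ and $(2)$, and forms $E(\lambda)$ as in eq. $(3)$, verifying the monotonicity property $E(\lambda_1)E(\lambda_2)=E(\lambda_1)$ for $\lambda_1\le\lambda_2$.

```python
# Sketch with an assumed 3x3 hermitian (here real symmetric) example:
# build P_a, verify eqs. (1)-(2), and construct E(lambda) per eq. (3).
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])     # eigenvalues: 0 and 2 (2 is degenerate)

eigvals, V = np.linalg.eigh(A)      # columns of V: orthonormal eigenvectors
spectrum = np.unique(np.round(eigvals, 10))

# P_a = sum of |v><v| over eigenvectors v with eigenvalue a
P = {a: sum(np.outer(V[:, i], V[:, i].conj())
            for i in range(len(eigvals)) if np.isclose(eigvals[i], a))
     for a in spectrum}

# Eq. (1): A = sum_a a P_a ;  eq. (2): sum_a P_a = I
assert np.allclose(sum(a * P[a] for a in spectrum), A)
assert np.allclose(sum(P[a] for a in spectrum), np.eye(3))

def E(lam):
    """Eq. (3): E(lambda) = sum of P_a over eigenvalues a <= lambda."""
    return sum((P[a] for a in spectrum if a <= lam), np.zeros((3, 3)))

# Spectral-family properties: monotonicity and E(lambda) -> I for large lambda
assert np.allclose(E(1.0) @ E(2.0), E(1.0))
assert np.allclose(E(10.0), np.eye(3))
```

Note that grouping eigenvectors by eigenvalue, rather than taking one projector per eigenvector, is what makes the family $\{P_a\}$ well defined in the degenerate case.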

Conversely, starting from the more general version of the spectral theorem sketched in Ballentine's book, the "usual" spectral theorem in finite dimensions can be recovered, and it also follows that the (unique) spectral family is indeed given by $(3)$.

  • Thank you. I understand that the hermitian operator $A$ can be written as the sum of the product of each individual eigenvalue and projection operator. Is this exactly the matrix representation of the spectral theorem (written out, i.e. after computing each element of $T \Lambda T^{T}$)? Or is it a more general case (After all, the projection operators are usually matrices)? You also state that the sum of the projection operators is equal to the unit matrix, which is equivalent to $T T^{T}=I$ in the matrix description (but again, without summation of matrices). – Rasmus Andersen Jul 26 '23 at 14:05
  • 1
    Hi @RasmusAndersen, sorry, I don't fully understand. Can you elaborate? Yes, both versions, if formulated appropriately, should be the same. The point is that you can associate to each operator a matrix after choosing an orthonormal basis. And the spectral theorem in finite dimensions, although various forms exist, basically states that a hermitian operator admits a complete orthonormal eigenbasis (and can thus be diagonalized). Also, since we usually work with complex Hilbert spaces in QM, you should refer to hermitian matrices/operators and use the adjoint instead of the transpose. – Tobias Fünke Jul 26 '23 at 14:11
  • I mean, if we define $A$ to be a symmetric matrix then its decomposition would be $U\Lambda U^{\dagger}$, with $U$ containing the eigenvectors, and $\Lambda$ containing the eigenvalues along the diagonal. How is this equivalent to writing $A$ as the sum of each eigenvalue multiplied with each projection operator as in eq. (1) of your answer? – Rasmus Andersen Jul 26 '23 at 14:20
  • Okay, you mention an important point, but why can a hermitian operator be diagonalized if it admits a complete orthonormal eigenbasis? – Rasmus Andersen Jul 26 '23 at 14:27
  • Well, your version of the theorem encodes what I've written above: There exists a complete orthonormal basis of eigenvectors of $A$. Now having such a basis, you can construct the projection operators: For example, if the eigenvalue $a$ is non-degenerate, the corresponding projection operator is simply $P_a=|a\rangle\langle a|$, where $|a\rangle$ is the corresponding eigenvector. And then it is a simple exercise to show eq. $(1)$. – Tobias Fünke Jul 26 '23 at 14:29
  • 1
    Well, if you have a complete orthonormal eigenbasis, then the matrix of $A$ in that basis is diagonal... The matrix elements of an operator $A$ in some orthonormal basis $\{e_i\}_{i=1,2,\ldots,\dim H}$ are given by $A_{ij}:=\langle e_i|A|e_j\rangle$. Now if this is the said eigenbasis, then $A_{ij}=\delta_{ij} a_i$ (sorry, here I changed the notation a bit, but it should be obvious) and hence the matrix is diagonal, where the diagonal elements are the eigenvalues. – Tobias Fünke Jul 26 '23 at 14:29
  • 1
    Okay, thank you very much. That clears up some of the confusion. – Rasmus Andersen Jul 26 '23 at 14:32