4

Find an invertible matrix $P$ and a matrix $C$ of the form

$C=\begin{pmatrix}a & -b\\b & a\end{pmatrix}$

such that the given matrix $A$ has the form $A = PCP^{-1}$:

$A=\begin{pmatrix}5 & -2\\1 & 3\end{pmatrix}$

The first thing I tried was to find the eigenvectors of the matrix $A$. I got these vectors (which I glued together to get the matrices $P$ and $P^{-1}$):

$P=\begin{pmatrix}1+ i& 1-i\\1 & 1\end{pmatrix}$

$P^{-1}=\begin{pmatrix}\frac{1}{2i} & \frac{-1+i}{2i}\\-\frac{1}{2i} & \frac{1+i}{2i}\end{pmatrix}$
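To double-check this work, one can verify $AP = PD$ symbolically, where $D = \operatorname{diag}(4+i,\,4-i)$ and $4 \pm i$ are the roots of the characteristic polynomial $\lambda^2 - 8\lambda + 17$. A minimal sketch, assuming SymPy is available:

```python
# Verify that the columns of P are eigenvectors of A
# for the eigenvalues 4 + i and 4 - i (exact arithmetic).
from sympy import Matrix, I

A = Matrix([[5, -2], [1, 3]])
P = Matrix([[1 + I, 1 - I], [1, 1]])
D = Matrix([[4 + I, 0], [0, 4 - I]])

print(A.eigenvals())             # {4 - I: 1, 4 + I: 1}
print((A * P - P * D).expand())  # Matrix([[0, 0], [0, 0]])
```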

I'm not sure how to find the matrix $C$. At first I thought I could plug the eigenvalues into the $C$ matrix, but I don't think that is what the problem is asking me to do.

Any help will be appreciated.

Gibbs
  • 8,230

3 Answers

2

The characteristic polynomial of $A$ has two complex roots: $\lambda_1 = a + ib$ and $\lambda_2 = a - ib$.

Since $A$ and $C$ share the same eigenvalues (in dimension $2$, this is equivalent to checking that the two matrices have equal determinant and trace), and those eigenvalues are distinct whenever $b \neq 0$, the matrices are similar and such a $P$ exists.

If $b=0$ (so $\lambda_1=\lambda_2=\lambda$), the existence of $P$ is not guaranteed unless you can show that $\text{Ker}(A-\lambda I)$ has dimension $2$.

So, in a nutshell, you have to find the eigenvalues of $A$ and take the real and imaginary parts to get the entries of $C$.
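A minimal sketch of this recipe in sympy (assuming the matrix $A$ from the question):

```python
# Read a and b off the complex eigenvalues of A and assemble C.
from sympy import Matrix, re, im

A = Matrix([[5, -2], [1, 3]])

# Take the eigenvalue with positive imaginary part: 4 + i
lam = [l for l in A.eigenvals() if im(l) > 0][0]
a, b = re(lam), im(lam)        # a = 4, b = 1

C = Matrix([[a, -b], [b, a]])
print(C)                       # Matrix([[4, -1], [1, 4]])
```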

Bill O'Haran
  • 3,034
  • I'm guessing from your answer that you found out for yourself :) – Bill O'Haran Apr 17 '18 at 07:02
  • I already knew that every $2\times2$ matrix with complex eigenvalues is similar to a conformal matrix. What I should’ve written is that this answer is incomplete. You’ve explained what $C$ should be, but not how to find an appropriate $P$. – amd Apr 17 '18 at 07:12
  • You are quite right. I assumed - out of laziness, I must say - that once $C$ is found, finding $P$ is quite straightforward. – Bill O'Haran Apr 17 '18 at 07:46
2

To find $P$ and $C$, note that

$$A = PCP^{-1}\iff AP=PC$$

Since $A$ and $C$ are similar, we have

  • $Tr(A)=Tr(C) \implies 2a=8 \implies a=4$
  • $\det(A)=\det(C) \implies a^2+b^2=17 \implies b=\pm1$

Then let $P=[v_1\ v_2]$; comparing columns of $AP=PC$ gives

  • $Av_1=av_1+bv_2$
  • $Av_2=-bv_1+av_2$

and with $v_1=(x,y)$, $v_2=(z,w)$, we have for $a=4$, $b=1$:

  • $5x-2y=4x+z\implies x-2y-z=0$
  • $x+3y=4y+w\implies x-y-w=0$
  • $5z-2w=-x+4z\implies x+z-2w=0$
  • $z+3w=-y+4w\implies y+z-w=0$

This homogeneous system has rank $2$, so we may choose, for instance, $v_1=(2,1)$, $v_2=(0,1)$, and finally

$$C=\begin{pmatrix}4 & -1\\1 & 4\end{pmatrix}\quad P=\begin{pmatrix}2 & 0\\1 & 1\end{pmatrix}\quad P^{-1}=\begin{pmatrix}\frac12 & 0\\-\frac12 & 1\end{pmatrix}$$
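These matrices can be checked directly; a short sympy verification sketch:

```python
# Confirm A = P C P^{-1} with exact rational arithmetic.
from sympy import Matrix

A = Matrix([[5, -2], [1, 3]])
C = Matrix([[4, -1], [1, 4]])
P = Matrix([[2, 0], [1, 1]])

print(P.inv())               # Matrix([[1/2, 0], [-1/2, 1]])
print(P * C * P.inv() == A)  # True
```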

user
  • 154,566
  • 1
    Shouldn’t that be $AP=PC$? – amd Apr 16 '18 at 22:47
  • @amd Oh yes of course! The system should be indeed coherent with this assumption. I fix the typo, Thanks! – user Apr 16 '18 at 22:51
  • @amd I've used a brute force direct method with an ugly determinant that I've solved by wolfram. Is there some more elegant or effective way? – user Apr 16 '18 at 22:53
2

Per this question, $A$ has eigenvalues $4\pm i$, so it is similar to a matrix of the form $$C=\begin{bmatrix}4&-1\\1&4\end{bmatrix}.$$ This answer shows how to construct an appropriate basis without computing any eigenvectors explicitly. Note that the matrix in the linked question has the opposite signs on the $\beta$’s from what we want here, but we can flip the signs by taking $\mathbf v_2=\frac1\beta B\mathbf v_1$.

Following this method, we have $$B = \begin{bmatrix}5&-2\\1&3\end{bmatrix} - \begin{bmatrix}4&0\\0&4\end{bmatrix} = \begin{bmatrix}1&-2\\1&-1\end{bmatrix}.$$ Taking $\mathbf v_1=(1,0)^T$, we have $\mathbf v_2=(1,1)^T$, therefore $$P = \begin{bmatrix}1&1\\0&1\end{bmatrix}, P^{-1}=\begin{bmatrix}1&-1\\0&1\end{bmatrix}.$$
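A sympy sketch of this construction (with $\alpha = 4$, $\beta = 1$ taken from the eigenvalues $4 \pm i$):

```python
# Construct P without eigenvectors: pick any v1, set v2 = (1/beta) * B * v1.
from sympy import Matrix, eye

A = Matrix([[5, -2], [1, 3]])
alpha, beta = 4, 1

B = A - alpha * eye(2)       # Matrix([[1, -2], [1, -1]])
v1 = Matrix([1, 0])
v2 = (B * v1) / beta         # Matrix([[1], [1]])

P = v1.row_join(v2)          # Matrix([[1, 1], [0, 1]])
C = Matrix([[alpha, -beta], [beta, alpha]])
print(P * C * P.inv() == A)  # True
```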

Since you’ve gone to the trouble of finding eigenvectors of $A$, the second part of the linked question suggests another way to find $P$. Since $A$ is real, for any complex vector $\mathbf v$ we have $$A(\Re\mathbf v) = \frac12A(\mathbf v+\bar{\mathbf v}) = \frac12(A\mathbf v+A\bar{\mathbf v}) = \frac12(A\mathbf v+\overline{A\mathbf v}) = \Re(A\mathbf v)$$ and similarly $A(\Im\mathbf v)=\Im(A\mathbf v)$.

Let $\mathbf v_r$ and $\mathbf v_i$ be linearly independent real vectors such that $\mathbf v_r+i\mathbf v_i$ is an eigenvector of $A$ with eigenvalue $\alpha-i\beta$. (Proving that this is always possible is a relatively simple but useful exercise.) Then $$A\mathbf v_r = \Re[(\alpha-i\beta)(\mathbf v_r+i\mathbf v_i)] = \alpha\mathbf v_r+\beta\mathbf v_i$$ and $$A\mathbf v_i = \Im[(\alpha-i\beta)(\mathbf v_r+i\mathbf v_i)] = \alpha\mathbf v_i-\beta\mathbf v_r.$$

Setting $P=\begin{bmatrix}\mathbf v_r&\mathbf v_i\end{bmatrix}$ and $J=\small{\begin{bmatrix}0&-1\\1&0\end{bmatrix}}$, we can write these as $$\begin{align}A\mathbf v_r &= P(\alpha I+\beta J)P^{-1}\mathbf v_r \\ A\mathbf v_i &= P(\alpha I+\beta J)P^{-1}\mathbf v_i\end{align}$$ and since $\mathbf v_r$ and $\mathbf v_i$ are linearly independent, the identity $A\mathbf v = P(\alpha I+\beta J)P^{-1}\mathbf v$ holds for all $\mathbf v$; therefore $A=P(\alpha I+\beta J)P^{-1}$, with $\alpha I+\beta J$ of the required form. Note that this is consistent with the first method, since from the expression for $A\mathbf v_r$ we obtain $\mathbf v_i = \frac1\beta(A-\alpha I)\mathbf v_r$.

For your matrix, you’ve found that $(1-i,1)^T$ is an eigenvector of $A$ with eigenvalue $4-i$. This splits into real and imaginary parts $(1,1)^T$ and $(-1,0)^T$, respectively, which gives $P=\small{\begin{bmatrix}1&-1\\1&0\end{bmatrix}}$.
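A sketch of this real/imaginary-part construction in sympy:

```python
# Split the eigenvector for eigenvalue 4 - i into real and
# imaginary parts and use them as the columns of P.
from sympy import Matrix, I, re, im

A = Matrix([[5, -2], [1, 3]])
v = Matrix([1 - I, 1])        # eigenvector for the eigenvalue 4 - i

vr = v.applyfunc(re)          # (1, 1)
vi = v.applyfunc(im)          # (-1, 0)
P = vr.row_join(vi)           # Matrix([[1, -1], [1, 0]])

C = Matrix([[4, -1], [1, 4]])
print(P * C * P.inv() == A)   # True
```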

amd
  • 53,693