
I am trying to understand the proof of Cayley-Hamilton Theorem given in Paul Garrett's notes.

We have a finite dimensional vector space $V$ over a field $k$ and are given a linear operator $T\in \mathcal L(V)$.

The proof on pg 431 in the above link starts out as:

The module $V\otimes_k k[x]$ is free of rank $\dim_k(V)$ over $k[x]$. Also, $V$ is a $k[T]$-module via the action $v\mapsto Tv$. (I understand this much.)

Now the next line reads:

So $V\otimes_k k[x]$ is a $k[T]\otimes_k k[x]$-module.

This I do not understand.

What is the general fact at play here?

EDIT: To expand on my (complete) lack of understanding, I do not see how are we giving a $k[T]\otimes_k k[x]$ module structure to $V\otimes_k k[x]$. And of course, I am looking for a 'general principle' at work here.

For example, when we said that $V\otimes_k k[x]$ is a $k[x]$-module of rank $\dim_k(V)$, what we are using is the following: We have a natural injection $i:k\to k[x]$. So we can extend the scalars on the $k$-module $V$ and get a $k[x]$-module $V\otimes_k k[x]$. Since $V\otimes_k k[x]\cong k[x]^n$, where $n=\dim_k(V)$, we also know that the rank of $V\otimes_k k[x]$ as a $k[x]$-module is the same as $\dim_k(V)$.
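For concreteness, the isomorphism being used here can be written out as follows (assuming $v_1,\dots,v_n$ is a $k$-basis of $V$):

```latex
% If v_1, \dots, v_n is a k-basis of V, then v_1 \otimes 1, \dots, v_n \otimes 1
% is a k[x]-basis of the extended module:
V \otimes_k k[x] \;=\; \bigoplus_{i=1}^{n} \bigl(v_i \otimes 1\bigr)\, k[x]
\;\cong\; k[x]^{n}.
```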

2 Answers


$V$ is a $k[T]$-module.

$V \otimes$ Something is a ($k[T] \otimes$ Something)-module.
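Spelling this out (my paraphrase of the general fact): if $A$ and $B$ are $k$-algebras, $M$ is an $A$-module, and $N$ is a $B$-module, then $M\otimes_k N$ becomes an $(A\otimes_k B)$-module via

```latex
(a \otimes b) \cdot (m \otimes n) \;=\; am \otimes bn,
```

extended $k$-bilinearly. Here $A = k[T]$, $B = k[x]$, $M = V$, and $N = k[x]$.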

  • I guess I am confused because the situation as presented is this: "$V\otimes_k k[x]$ is a $k[T]$-module, thus $V\otimes_k k[x]$ is a $k[T]\otimes_k k[x]$ module", rather than "$V$ is a $k[T]$ module therefore $V\otimes_k k[x]$ is a $k[T]\otimes_k k[x]$-module". – caffeinemachine Aug 12 '15 at 05:40

If you have an $N\times N$ matrix $A=[a_{n,m}]$ over a field, then the cofactor expansion of the determinant of $A$ gives you $$ \mbox{adj}(A)A=\mbox{det}(A)I, $$ where $\mbox{adj}(A)$ is the adjugate (classical adjoint) matrix consisting of the cofactors of $A$. Therefore, $$ \mbox{adj}(\lambda I-A)(\lambda I-A)=p(\lambda)I $$ where $p(\lambda)=\mbox{det}(\lambda I - A)$ is the characteristic polynomial of $A$. You can write this as $$ (A_{0}+\lambda A_{1}+\cdots+\lambda^{N-1}A_{N-1})(\lambda I-A)=p(\lambda)I, $$ where $A_0,\dots,A_{N-1}$ are $N\times N$ coefficient matrices. The polynomial $$ Q(\lambda) = A_0+\lambda A_1 + \cdots +\lambda^{N-1}A_{N-1} $$ has coefficient matrices that may or may not commute with $A$. However, the coefficient matrices of $\lambda I -A$ do commute with $A$. Whenever you have polynomials with $Q(\lambda)R(\lambda)=S(\lambda)$ and the coefficients of $R$ commute with $A$, then $(QR)|_{A}=Q|_{A}R|_{A}=S|_{A}$, where the evaluation is done on the right (substituting $A$ for $\lambda$ to the right of the coefficient matrices), as opposed to being evaluated on the left. In this case, $$ Q|_{A}(\lambda I-A)|_{A}=p(A) \\ Q|_{A}\cdot 0 = p(A), $$ so $p(A)=0$. That's the basic idea behind it.
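As a quick sanity check of the conclusion $p(A)=0$ for a concrete $2\times 2$ matrix (plain Python; helper names like `mat_mul` are ad hoc, not from any library):

```python
def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    """Add two 2x2 matrices entrywise."""
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scalar_mul(c, X):
    """Multiply a 2x2 matrix by a scalar."""
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

A = [[2, 1],
     [1, 3]]
I = [[1, 0],
     [0, 1]]

trace = A[0][0] + A[1][1]                     # tr(A) = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det(A) = 5

# For 2x2 matrices, p(lambda) = lambda^2 - tr(A) lambda + det(A).
# Cayley-Hamilton says p(A) = A^2 - tr(A) A + det(A) I is the zero matrix.
p_of_A = mat_add(mat_add(mat_mul(A, A), scalar_mul(-trace, A)),
                 scalar_mul(det, I))
print(p_of_A)  # -> [[0, 0], [0, 0]]
```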

Disintegrating By Parts
  • Perhaps Garrett's proof is the same as the one you have written, only in different clothing, but I do not see it yet. Also, my question as of now is not about the insight behind the proof but rather the local details of the proof. Thanks. – caffeinemachine Aug 12 '15 at 06:10
  • @caffeinemachine : Do you see a connection? – Disintegrating By Parts Aug 12 '15 at 06:11
    I do not even know what connection I am supposed to see. Do not take my comment the wrong way. I write this sincerely. – caffeinemachine Aug 12 '15 at 06:23
    @caffeinemachine Start with page 15. "Indeed, in light of these remarks, we must clarify what it means to substitute T for x. Incidental to the argument, intrinsic versions of determinant and adjugate (or cofactor) endomorphism are described, in terms of multi-linear algebra." – Disintegrating By Parts Aug 12 '15 at 06:35