I am interested in the case that $A$ is a matrix over a commutative ring, not necessarily a field. Is it still true that if $Ax = b$ has a solution for every $b$, then $A$ is invertible? I know that in the general setting, $A$ having the trivial nullspace does not imply that it is invertible. However, I cannot seem to find a counterexample to the fact in the title of the question, so I am starting to believe it is true. Any ideas how to prove it?
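That remark about trivial nullspace can be checked in the simplest case: over $\mathbb{Z}$, the $1 \times 1$ matrix $A = (2)$ has trivial nullspace, yet $Ax = b$ is not solvable for every $b$. A toy sketch (my own example, searching a finite window of $\mathbb{Z}$):

```python
# Over the ring Z, the 1x1 matrix A = (2) has trivial nullspace,
# yet Ax = 1 has no solution, so A is not invertible over Z.

window = range(-20, 21)                      # a finite window of Z for the search

kernel = [x for x in window if 2 * x == 0]   # solutions of 2x = 0
print(kernel)                                # → [0]  (trivial nullspace)

solutions = [x for x in window if 2 * x == 1]
print(solutions)                             # → []   (2x = 1 has no integer solution)
```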
3 Answers
For the standard basis vectors $e_1, \dots, e_n$, we have some $b_1, \dots, b_n$ such that $A b_i = e_i$, respectively. Then simply 'squishing' the $b_1, \dots, b_n$ together into an $n \times n$ matrix directly gives $A^{-1}$.
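As a sketch of this construction in a ring that is not a field, here is a brute-force check over $\mathbb{Z}/10\mathbb{Z}$ with a made-up $2 \times 2$ matrix (whose determinant, $3$, happens to be a unit mod $10$): solve $A x_i = e_i$ for each $i$ by exhaustive search and assemble the solutions as columns.

```python
from itertools import product

N = 10                      # work in the commutative ring Z/10Z
A = [[1, 2],
     [4, 1]]                # invented example; det A = -7 ≡ 3 (mod 10), a unit

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % N for i in range(len(M))]

def matmul(M, P):
    n = len(M)
    return [[sum(M[i][k] * P[k][j] for k in range(n)) % N
             for j in range(n)] for i in range(n)]

# Solve A x_i = e_i by brute force over the finite ring, one column at a time.
cols = []
for i in range(2):
    e = [1 if j == i else 0 for j in range(2)]
    x = next(v for v in product(range(N), repeat=2) if matvec(A, list(v)) == e)
    cols.append(x)

# 'Squish' the solutions together: column i of B is x_i.
B = [[cols[j][i] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
print(matmul(A, B) == I, matmul(B, A) == I)   # → True True
```

That $BA = I$ also holds is exactly the point debated in the comments below; the search here just confirms it for this one example.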
You built a matrix $B$ that satisfies $AB=I$. However, does this imply that also $BA=I$, over an arbitrary commutative ring? – Marc van Leeuwen Oct 22 '16 at 08:34
It doesn't directly, but since the determinant of $A$ is a unit in $R$, $A$ will have a two-sided inverse (given by $(\det A)^{-1} \cdot \operatorname{adj} A$), which is then necessarily equal to $B$. – D_S Oct 22 '16 at 13:33
@MarcvanLeeuwen The result that, for square matrices $A$ and $B$, $AB=I$ implies $BA=I$ holds over any commutative ring (and one doesn't need determinants for it, but with determinants it's easy). – egreg Oct 22 '16 at 13:38
@egreg That is what my answer says (except for the "one doesn't need determinants" part; I guess that is so, but one does need commutativity, and finite rank, so one might as well use determinants). – Marc van Leeuwen Oct 22 '16 at 17:46
The result is wrong as stated, since by taking for $A$ a rectangular matrix (more columns than rows) one easily gets counterexamples. I will therefore suppose you implicitly assumed $A$ to be square (a necessary condition for being invertible).
This is then a complement to the answer by basket, using a simple argument found in this answer. By the hypothesis you can find a matrix $B$ such that $AB=I$, since for every column of $B$ this amounts to an equation of the form $Ax=c$, where $c$ is the corresponding column of $I$. Now taking determinants we get $\det(A)\det(B)=1$, so the determinant of $A$ is invertible in your commutative ring. Then $A$ as well is invertible in the matrix ring, namely $\det(B)$ times the adjugate (the transpose of the cofactor matrix) of $A$ gives $A^{-1}$.
That was all that was asked for, but multiplying $AB=I$ to the left by $A^{-1}$ shows that in fact $B=A^{-1}$ and hence $BA=I$. Note that commutativity of the base ring (which allowed taking determinants) is essential; the result does not hold over non-commutative rings (even for $1\times1$ matrices, since for a scalar having a right inverse now does not imply having a left inverse).
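To make the adjugate step concrete, here is a small sketch over $\mathbb{Z}/10\mathbb{Z}$ (the matrix entries are an invented example): the determinant is $3$, a unit mod $10$ with inverse $7$, and $\det(A)^{-1}\operatorname{adj}(A)$ yields the inverse.

```python
N = 10                                   # work in the commutative ring Z/10Z
a, b, c, d = 1, 2, 4, 1                  # invented example: A = [[1, 2], [4, 1]]

det = (a * d - b * c) % N                # det A = -7 ≡ 3 (mod 10)
# 3 is a unit of Z/10Z; find its inverse by search (it exists iff gcd(det, N) = 1)
det_inv = next(u for u in range(N) if (det * u) % N == 1)

# A^{-1} = det(A)^{-1} · adj(A); for a 2x2 matrix, adj(A) = [[d, -b], [-c, a]]
A_inv = [[(det_inv * d) % N, (det_inv * -b) % N],
         [(det_inv * -c) % N, (det_inv * a) % N]]
print(A_inv)                             # → [[7, 6], [2, 7]]
```

One can verify by hand that both products $A A^{-1}$ and $A^{-1} A$ reduce to the identity mod $10$, in line with the two-sided-inverse argument above.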
As user basket wrote, there are solution vectors $x_i$ with $$ A x_i = e_i $$ where $e_i$ is the $i$-th canonical basis vector.
Then $X = (x_1 x_2 \dotsb x_n)$ fulfills $A X = I$.
If there exists a $Y$ with $Y A = I$, then we have $$ Y = Y I = Y (A X) = (Y A) X = I X = X. $$
I still lack a simple argument for why there should exist a left inverse for $A$ as well, or rather why $X$ would work as a left inverse too.
Marc's argument looks nice, but it would require me to review determinant theory over commutative rings instead of the usual fields. :)
Let us assume $A$ is invertible and $$ X A =: B \ne I $$ Then $$ A B = A (X A) = (A X) A = I A = A $$ and $$ A (B - I) = 0 $$ If $B - I \ne 0$ then at least one of its column vectors is non-zero; let us call it $c$, and we get $c \in \ker A$. For regular linear algebra (over a field) this would prevent $A$ from being invertible. No idea what happens for just commutative rings.
We have commutativity of the ring that $A$ and $X$ are over, but that does not by itself mean that $A$ commutes with $X$. – Paul Sinclair Oct 22 '16 at 14:30