
This question is related to this recent but currently unanswered MO question of Ricky Demer, where it arose as a comment.

Consider the structure $R^n$ consisting of $n\times n$ matrices over the reals $\mathbb{R}$, $n$-dimensional row vectors, column vectors and real scalars, with the ordered field structure on the scalars. Thus, we can add and multiply matrices; we can multiply anything by a scalar; we can multiply matrices by vectors (on the suitable side); and we can add and multiply vectors of the suitable shape.

The corresponding matrix algebra language has four variable sorts - scalars, matrices, row vectors and column vectors - together with the rules for forming terms so that these expressions make sense in any $R^n$. In this language, you can quantify over matrices, vectors and scalars, form equations (and inequalities with the scalars), but you cannot quantify over the dimension. The idea is that an assertion in this language can be interpreted in any dimension, one $R^n$ at a time. You have to make assertions that do not refer to the dimension; the language is making assertions about matrices and vectors in some fixed but unspecified dimension.

My question is whether truth in this real matrix algebra obeys a 0-1 law as the dimension increases, that is:

Question. Is every statement in the matrix algebra language either eventually true or eventually false in $R^n$ for all sufficiently large dimensions $n$?

To give some trivial examples:

  • the statement asserting matrix commutativity $\forall A,B\, AB=BA$ is true in dimension $1$ but false in all higher dimensions.
  • the statements that the dimension is at least 17, or at most 25, or an odd number less than 1000, are all expressible, since you can quantify over enough vectors, and the assertions that they are linearly independent or that they span are expressible. The truth values of these statements all stabilize in sufficiently high dimension.
  • the assertion that a particular real number is an eigenvalue for a matrix is expressible.
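
These examples are easy to sanity-check numerically. Here is a quick sketch (NumPy is used purely as an outside illustration; nothing below is a formula of the matrix-algebra language itself) of the commutativity example in dimensions $1$ and $2$:

```python
import numpy as np

# Dimension 1: all 1x1 matrices commute, so "forall A,B: AB = BA" holds.
A1, B1 = np.array([[2.0]]), np.array([[3.0]])
assert np.allclose(A1 @ B1, B1 @ A1)

# Dimension 2: a single counterexample falsifies the statement.
A2 = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(A2 @ B2, B2 @ A2)  # AB and BA differ
```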

But it isn't clear to me how one could express, for example, that the dimension is even. (Edit: Gerry and Ryan below have explained how this is easily done.)

In the previous question, Ricky inquired whether there is a decision procedure to determine which assertions of matrix algebra are true for all $n$. For any particular $n$, Tarski's theorem on the decidability of real-closed fields shows that the theory of the structure $R^n$ is decidable: when $n$ is fixed, we may translate any statement about matrices and vectors into a statement about real numbers by talking about the components. (We may also add to the language the functions that map a matrix or vector to the value of any particular entry, as well as $\det(A)$, etc.)

If my question here has a positive answer, and the stabilizing bound is computable from the formula, then this would provide an affirmative answer to Ricky's question, since we could just determine truth in a large enough $R^n$.

Lastly, I don't think it will fundamentally change the problem to work in the complex field, since the corresponding structure $C^n$ with complex matrices and vectors is interpretable in $R^n$. For example, I think we could freely refer to complex eigenvalues.


Edit. The real case was quickly dispatched by Gerry and Ryan, below. Let us therefore consider the complex case. So we have for each dimension $n$ the structure $C^n$ with $n\times n$ matrices, row vectors, column vectors and complex scalars. The question is: Does the truth of every statement of matrix algebra stabilize in $C^n$ for sufficiently large $n$?

Ricky proposed that we add the Hermitian transpose (conjugation on scalars) to the language. This would also allow us to refer to the real scalars. If we expand the language so that we are able to define the class of real matrices and vectors, however, then we can still express Gerry's and Ryan's solutions for a negative answer here.


Edit 2. As in the comments, let us say that the truth set of a formula $\phi$ in the language is the set of $n$ for which $\phi$ is true in dimension $n$. These truth sets form a Boolean algebra, closed under finite differences. Which sets of natural numbers are realizable as truth sets? (Note that there are only countably many truth sets.) And how does it depend on the field?

Joel David Hamkins
  • Perhaps another better tag might be called for. – Joel David Hamkins Aug 02 '10 at 01:08
  • I don't understand the bit about complex eigenvalues. The statement, for every square matrix there is a nonzero vector and a real number such that (matrix)(vector) = (number)(vector), is true in odd dimensions, not in even. How do you get around that? – Gerry Myerson Aug 02 '10 at 01:12
  • Is it not the case that there exists a matrix $A$ such that $A^2 = -I$ if and only if $n$ is even? – Ryan Reich Aug 02 '10 at 01:18
  • It appears that my question is quickly getting a negative answer here. (MO is amazing!) But could you kindly post answers as answers? – Joel David Hamkins Aug 02 '10 at 01:24
  • What about the complex case, where the complex conjugate of a scalar and the Hermitian conjugate of a vector and a matrix are also allowed (as in the original question by Ricky Demer)? – Tsuyoshi Ito Aug 02 '10 at 01:33
  • When you say Hermitian conjugate of a vector, do you mean it takes row vectors to column vectors and vice versa? – Ryan Reich Aug 02 '10 at 01:59
  • @Ryan: Yes, that is what I meant. – Tsuyoshi Ito Aug 02 '10 at 02:00
  • One could change the question to ask: "for a given statement $\phi$, is there a partition of the set of sufficiently large integers into arithmetic progressions such that the truth value of $\phi$ is constant on the parts?" :) – Mariano Suárez-Álvarez Aug 02 '10 at 02:36
  • Mariano, yes, that is interesting. Another version along this line would be to say that the truth set of $\phi$ is the set of $n$ for which $\phi$ is true in $R^n$. These truth sets form a Boolean algebra, closed under finite differences. Which sets are truth sets? – Joel David Hamkins Aug 02 '10 at 02:56
  • I wonder if one can write sufficiently fine statements in your language which capture the structure of the Clifford algebra of a positive-definite form on the vector space, to get period 8. – Mariano Suárez-Álvarez Aug 02 '10 at 02:58
  • Are transposes allowed in the language? If so then the even/odd dilemma is expressible over an arbitrary field, because skew-symmetric nonsingular matrices only exist in even dimensions. – Victor Protsak Aug 02 '10 at 03:03
  • In view of a large number of representation-theoretic constructions giving arithmetic progressions, I'd like to pose an explicit question about a more general case: Is there an algebraic structure which has irreducible representations precisely in dimensions $k^2, k\in\mathbb{N}$ s.t. its representation theory is expressible in the given language? – Victor Protsak Aug 02 '10 at 04:00
  • How sparse can the truth set be? Can you get the powers of two, for example? – Mariano Suárez-Álvarez Aug 02 '10 at 04:57
  • Mariano: if you have a solution in dimension $n$, can't you just take the direct sum of $k$ copies of it, to get a solution in dimension $kn$ for all integers $k$? In the other direction, consider the question: are there $m$ skew-Hermitian matrices, any pair of which anticommute? I believe that the minimum dimension in which these exist is $2^{(m-1)/2}$, so this set of around $m^2$ equations needs dimension $2^{O(m)}$ to be realized. [For those who want real rather than complex matrices, the minimum dimension of the solution for the same question is a somewhat larger power of 2.] – Peter Shor Aug 02 '10 at 11:43
  • To ask another question (related to my previous remark), suppose you have $m$ equations, containing at most $s$ symbols. Can you give some bound $B(m,s)$ on the minimum dimension of a solution? If $B(m,s)$ is computable, this would then answer Demer's question, so it's probably not going to be easy. But can you find a set of questions for which this bound grows faster than exponential? – Peter Shor Aug 02 '10 at 11:49
  • @Peter: I meant sparseness in an asymptotic sense of some kind (the squares are less sparse than the cubes, and the cubes are less sparse than the powers of two, and the numbers $2^{2^k}$ are even more sparse; but twice the cubes are not more sparse than the cubes, nor are the cubes larger than $10^{1000000}$ more sparse than the cubes... Hopefully this can be made precise.) On the other hand, can you write the condition of existence of $m$ anti-commuting skew-Hermitian matrices in the language Joel is considering? See Joel's answer to Victor's comment on Ryan Reich's second answer. – Mariano Suárez-Álvarez Aug 02 '10 at 11:52
  • I just realized that expressing ("is skew-Hermitian") seems to require the Hermitian transpose. – Peter Shor Aug 02 '10 at 11:53
  • But doesn't my construction show that if $d_1$ and $d_2$ are in the truth set, then $d_1+d_2$ is as well? This shows that all truth sets have linear sparseness. – Peter Shor Aug 02 '10 at 11:56
  • Mariano ... never mind, my comment above is wrong; it's only correct if you have just existential quantifiers. – Peter Shor Aug 02 '10 at 12:00
  • So, who volunteers to write down the details in orderly fashion? :P – Mariano Suárez-Álvarez Aug 02 '10 at 22:49

8 Answers


The irreducible, finite-dimensional complex representations of the Lie algebra $\mathfrak{sl}_2 \oplus \mathfrak{sl}_2$ are all of the form $V \otimes W$, where $V$ and $W$ are irreducible representations of $\mathfrak{sl}_2$; both $V$ and $W$ may have any dimension (and there is a unique representation of each dimension, not that it matters). If we require that neither copy of $\mathfrak{sl}_2$ act trivially, then $\dim(V \otimes W)$ is necessarily a composite integer. In particular, $n$ is prime if and only if $\mathbb{C}^n$ does not admit such a representation of this Lie algebra.

Note that $\mathfrak{sl}_2$ is spanned linearly by three elements with well-known Lie brackets, so a representation of $\mathfrak{sl}_2$ can be given by six matrices and fifteen commutator relations; specifying that one copy is nontrivial is a matter of specifying that one of the pairs of three does not consist of all zero matrices.

Later: Using representations of $\mathfrak{sl}_2$, we can refer to the dimension of a vector space: let $V$ have an irreducible representation of $\mathfrak{sl}_2$, as expressed by operators $e, f, h$ with the usual relations. Then if $ev = 0$, the weight $l$ as in $hv = lv$ uniquely determines the dimension.
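
To make the weight calculation concrete, here is a sketch (assuming the standard weight-basis construction of the $n$-dimensional irreducible $\mathfrak{sl}_2$-module; the particular normalization of $e$ and $f$ is one common choice, not taken from the answer) verifying the bracket relations and that the highest weight recovers the dimension:

```python
import numpy as np

def sl2_irrep(n):
    """The n-dimensional irreducible representation of sl_2 in the
    standard weight basis: h is diagonal with weights n-1, n-3, ..., -(n-1)."""
    h = np.diag([n - 1 - 2 * k for k in range(n)]).astype(float)
    e = np.zeros((n, n))
    f = np.zeros((n, n))
    for k in range(1, n):
        e[k - 1, k] = k * (n - k)   # e raises the weight
        f[k, k - 1] = 1.0           # f lowers the weight
    return e, f, h

def bracket(a, b):
    return a @ b - b @ a

n = 5
e, f, h = sl2_irrep(n)
assert np.allclose(bracket(h, e), 2 * e)
assert np.allclose(bracket(h, f), -2 * f)
assert np.allclose(bracket(e, f), h)

# The highest-weight vector v satisfies e v = 0 and h v = (n-1) v,
# so the weight n-1 uniquely determines the dimension n, as claimed.
v = np.zeros(n); v[0] = 1.0
assert np.allclose(e @ v, 0) and np.allclose(h @ v, (n - 1) * v)
```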

Here is an elaboration on the ideas in Mariano's second post and Victor's comments under it and elsewhere, inspired by one of Peter Shor's comments to the question itself.

  • We can say that a vector space $V$ is a direct sum of (a specified number) $k$ subspaces if there are $k$ orthogonal idempotent matrices $P_i = P_i^2$ such that $\sum_{i = 1}^k P_i = I$. Moreover, using this construction we can speak of the subspaces themselves, as the images of the $P_i$.

  • We can say that $V$ is a tensor product of (a specified number) $k$ spaces $W_1, \dots, W_k$ by asking that it admit an irreducible representation of $\mathfrak{sl}_2^{\oplus k}$, say with generators $e_i, f_i, h_i$ in the usual notation. Suppose more generally that we have expressed $V = \bigotimes W_i \oplus V_0$, where $V_0$ has "reference" dimension $n$ as expressed above. Then we can say that the $W_i$ all have dimension equal to that of $V_0$ by testing highest weights in $V$. In summary: given $n$, we can express $n^k$ for any $k$ in terms of representation theory.

  • Let $f \in \mathbb{N}[x_1, \dots, x_r]$ be any polynomial, $f(x) = \sum a_{i_1, \dots, i_r} x_1^{i_1} \dots x_r^{i_r}$; we can say that $N = f(n_1, \dots, n_r)$ if $\mathbb{C}^N$ can be written as a direct sum of subspaces $W_{i_1, \dots, i_r}$, each of which is the direct sum of $a_{i_1, \dots, i_r}$ copies of the tensor product $(\mathbb{C}^{n_1})^{i_1} \otimes \dots \otimes (\mathbb{C}^{n_r})^{i_r}$.

  • Finally, if $f, g \in \mathbb{N}[x_0, x_1, \dots, x_r]$, we can say that $f(n, x_1, \dots, x_r) = g(n, x_1, \dots, x_r)$ is solvable in positive integers $x_1, \dots, x_r$ if we have a vector space $V$ expressible as an above such decomposition for both $f(n, \bullet)$ and $g(n, \bullet)$.

As an example, if we want to compute the diophantine set $S$ defined by $x_0 x_1 + x_2 - x_0^2 x_1$, we ask for a direct sum decomposition of $\mathbb{C}^N$ into subspaces $V_0, W$; of $W$ into a direct sum of $W_1, W_2, U$; and of $U$ into $V_0 \otimes W_1 \oplus W_2$ and $V_0^{\otimes 2} \otimes W_1$. Then $n = \dim V_0$ is in $S$.

Thus, for any diophantine set $S$, there is a formula in matrix algebra with one free variable, representing a projection matrix onto a subspace of dimension $n$, which has an interpretation in some $\mathbb{C}^N$ if and only if $n \in S$. This is not really the same as showing that $S$ is a "truth set", though.

Ryan Reich
  • I'm not sure of the etiquette here. Should I add this to my other answer? It is sort of unrelated. – Ryan Reich Aug 02 '10 at 04:28
  • Here is something implicit in the description of the language that I don't understand: are we allowed to write formulas like $(\exists k, a_1,\ldots,a_k): \ldots$ If not, how can irreducibility be expressed in the given language? – Victor Protsak Aug 02 '10 at 04:33
  • @Victor, a module is reducible if there exist two non-zero orthogonal, commuting, idempotent linear maps which add up to the identity and commute with everything. – Mariano Suárez-Álvarez Aug 02 '10 at 04:34
  • Schur's lemma.$ $ – Ryan Reich Aug 02 '10 at 04:34
  • Re etiquette question: your answers seem sufficiently different to warrant two posts. – Victor Protsak Aug 02 '10 at 04:34
  • (1) OK, indecomposability (which for this construction in char $0$ is equivalent to irreducibility) is expressible. What about irreducibility, though? (2) I would still like to know whether the number of terms in a quantified formula can freely vary. – Victor Protsak Aug 02 '10 at 04:42
  • Victor, that assertion is not expressible in the language, since it involves quantifying over natural numbers and sets of vectors. – Joel David Hamkins Aug 02 '10 at 04:45
  • @(1): Irreducible means no stable subspace $W$. Let $W = \ker A$ for some matrix $A$; then it is stable iff for every matrix $M$ in the representation, $Av = 0$ implies $AMv = 0$. – Ryan Reich Aug 02 '10 at 04:49
  • A formula in first order logic has a fixed number of quantifiers, variables and terms. If you want to quantify over subsets of the domain, even just finite ones, then you would be working in a kind of simple set theory, not first order matrix algebra. But the point here is to restrict the language and discover what kinds of dimension phenomena still arise. – Joel David Hamkins Aug 02 '10 at 04:50
  • We can express simplicity (which I guess is what Victor means by irreducibility): the module $V$ is not simple if there exists a non-zero, non-surjective $f:V\to V$ such that for each one of the matrices $a:V\to V$ giving the structure, we have that for all $v\in V$ there exists a $w\in V$ such that $a(f(v))=f(w)$. – Mariano Suárez-Álvarez Aug 02 '10 at 04:50
  • Thank you for clarification, Joel! For a non-logician like me, it would be helpful to mention "first order logic" in the body of the question explicitly. – Victor Protsak Aug 02 '10 at 05:02
  • Ryan, I am getting a strong impression that while mentioning Mariano's and Peter's contributions, you deliberately avoid giving me any credit for important comments on this question. I am not sure whether my representation-theoretic formulation was helpful to your initial answer, but most of the latest addition consists of a write-up of my comments to Peter's answer and development of my idea mentioned in Mariano's 2nd answer. This is very unprofessional of you. – Victor Protsak Aug 02 '10 at 17:27
  • Victor, I'm sorry I omitted you. It was not intentional. I will edit my post. – Ryan Reich Aug 02 '10 at 17:51
  • Also, the similarity between this and Peter's answer is coincidental: once you had written your comments to Mariano's answer, apparently we both followed the same path independently. The edit in which I introduced this latest block was completed only ten minutes after Peter's answer and you had not written your comments by that time, nor did I see it until I wrote my comment below yours. Your contributions in general to this discussion have been central and have deepened it tremendously. This includes an influence on all aspects of the above answer. I did not intend to copy anyone. – Ryan Reich Aug 02 '10 at 18:06
  • Ryan: nice answer. Responding to your last sentence (see the comments for my answer as well), you can't get all diophantine sets as truth sets, since truth sets have to be recursive (from Tarski's theorem) while diophantine sets need only be recursively enumerable. – Peter Shor Aug 02 '10 at 21:36
  • Thanks! You are saying that the formula I provide allows for recursively enumerating a diophantine set $S$, but if there were actually a sentence whose truth actually established membership in $S$, then $S$ would be computable, which it is not (always)? – Ryan Reich Aug 02 '10 at 21:38
  • That's right. In fact, you can't even get all computable sets -- any set whose computational complexity is larger than what Tarski's theorem gives you can't be a truth set. – Peter Shor Aug 02 '10 at 22:07
  • There is a representation of $\mathfrak{sl}_2\times\mathfrak{sl}_2$ on $\mathbb{C}^3$ such that neither factor acts trivially. They just act the same. To fix this, additionally require that there is a vector in the span of $Lv$ that isn't in the span of $Rv$ for every nonzero $v$, and vice versa. – NoLongerBreathedIn Mar 04 '22 at 15:45
  • I just realized — if you do that, that automatically implies nontriviality of both irreps. – NoLongerBreathedIn Mar 04 '22 at 15:51

One can get arithmetic progressions as truth sets, as in Joel's comment. Pick non-negative integers $a$ and $b$, and pick a finite group $G$ which has at least one irreducible representation of degree $a$. Then there is a formula expressing the statement "the vector space is a $G$-module which is a sum of irreducible representations of degree $a$ and exactly $b$ trivial summands".

Later: For example, the irreps of $G=(\mathbb Z_3\times\mathbb Z_3)\rtimes\mathbb Z_3$ have degrees 1 and 3. This group is generated by two elements which have cube equal to the identity and which commute with their commutator. For example, if we want dimensions to be divisible by $3$, we can say:

$(\exists A,B)(A^3=B^3=[A,[A,B]]=[B,[A,B]]=I \wedge \neg(\exists v,\lambda,\mu)(Av=\lambda v\wedge Bv=\mu v))$

(Uppercase letters are matrices, lowercase letters are vectors, Greek letters are scalars, and commutators are group commutators.) A model for this formula is a $G$-module which does not have one-dimensional submodules. This works for other prime values of $3$.
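
For dimension $3$ itself, a witness for the displayed formula can be checked numerically with the standard clock and shift matrices (this particular realization is an assumption of the sketch, not spelled out in the answer):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)          # primitive cube root of unity
A = np.roll(np.eye(3), 1, axis=0)   # cyclic shift: A e_i = e_{i+1}, A^3 = I
B = np.diag([1, w, w**2])           # clock matrix, B^3 = I

def comm(X, Y):
    """Group commutator [X, Y] = X Y X^{-1} Y^{-1}."""
    return X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)

I = np.eye(3)
assert np.allclose(np.linalg.matrix_power(A, 3), I)
assert np.allclose(np.linalg.matrix_power(B, 3), I)

C = comm(A, B)
assert np.allclose(C, C[0, 0] * I)                      # [A,B] is scalar, hence central
assert np.allclose(comm(A, C), I) and np.allclose(comm(B, C), I)
# No common eigenvector: A's eigenvectors have all entries of equal modulus,
# while B's eigenvectors are the standard basis vectors, so the second
# clause of the formula also holds in dimension 3.
```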

Later: A vector space $V$ has a structure of $M_n(k)$-module iff $n\mid\dim V$. This can also be written in the language and it is much simpler than the first example!

  • I wonder if this isn't a more lowbrow way to achieve the same end; let $\omega$ be a primitive $n$th root of unity, and consider the statement, there are invertible matrices $A$ and $B$ such that $A^{-1}BA=\omega B$. This statement seems to be true if and only if the dimension is a multiple of $n$. – Gerry Myerson Aug 02 '10 at 03:24
  • @Gerry: you'd have to express the condition "$\omega$ is an $n$th root of unity" without writing $n$. – Mariano Suárez-Álvarez Aug 02 '10 at 03:26
  • @Mariano: but you would also have to produce a group with an irreducible representation of dimension $a$, for every $a$ you want to use. That is, for any $a$ you would have to use a different sentence, and Gerry can certainly write individual sentences expressing that $\omega$ is an $n$'th root of unity for particular $n$. – Ryan Reich Aug 02 '10 at 03:37
  • @Ryan, you are right. – Mariano Suárez-Álvarez Aug 02 '10 at 03:48
  • @Ryan, thank you for clarifying my intent. – Gerry Myerson Aug 02 '10 at 03:50
  • I was thinking in the direction Gerry indicated, but that requires existence of roots of unity in the base field, whereas both Mariano's initial construction and matrix algebra construction do not (i.e. they work over an arbitrary field). – Victor Protsak Aug 02 '10 at 03:53
  • @Victor, the problem was explicitly stated over ${\bf C}$, but I take the point that constructions that work over arbitrary fields are better. Here's an attempt to go more general while staying lowbrow. Given $n$, suppose there is something with minimal polynomial $f$ of degree $n$ over your field. Then there is an invertible matrix $A$ satisfying $f(A)=0$ if and only if we're in dimension divisible by $n$. – Gerry Myerson Aug 02 '10 at 06:20

As per my comment: you can definitely decide whether $n$ is even or odd, since $A^2 = -I$ has a solution in an $n \times n$ real matrix $A$ if and only if $n$ is even.
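
A quick numerical check of both directions (the block matrix below is the standard witness; the determinant argument disposes of odd $n$):

```python
import numpy as np

# In even dimension 2k, J = [[0, -I], [I, 0]] satisfies J^2 = -I.
k = 3
Z, I = np.zeros((k, k)), np.eye(k)
J = np.block([[Z, -I], [I, Z]])
assert np.allclose(J @ J, -np.eye(2 * k))

# In odd dimension n there is no real solution: det(A)^2 = det(-I) = (-1)^n,
# which is negative for odd n, impossible for a real matrix A.
```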

Here is how you can detect even complex dimension. If we have Hermitian conjugation, we can define a Hermitian matrix to be one $H$ such that $H^* = H$; any such matrix is diagonalizable (with real eigenvalues, not that it matters). One can say that $H$ has distinct eigenvalues: if $Hv = \lambda v$ and $Hw = \lambda w$, then $v = \mu w$ for some $\mu$. Then $n$ is even if and only if there is a Hermitian matrix $H$ with distinct eigenvalues and a matrix $A$ such that for every eigenvector $v$ of $H$ we have an eigenvector $w$, with different eigenvalue, such that $Av = w$ and $Aw = -v$. (This describes $A$ as having the matrix $\begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}$, written in the eigenbasis of $H$.)

Ryan Reich
  • I don't follow your remarks about the basis. You can't quantify over sets of vectors, but just over vectors (and matrices and scalars). – Joel David Hamkins Aug 02 '10 at 03:59
  • I only need one basis, so I don't have to quantify over sets of vectors. That is, $e_1, \dots, e_n$ is a real basis if for all real scalars $a_1, \dots, a_n$ we don't have $\sum a_j e_j = 0$, and if for all vectors $v$ we have real scalars $a_j, b_j$ such that $\sum a_j e_j + \sum i b_j e_j = v$. Then a matrix $A$ is "real" if each $A e_j$ is in the real span of the $e_j$ and $n$ is even iff there exists a real $A$ with $A^2 e_j = -e_j$ for each vector in the real basis. – Ryan Reich Aug 02 '10 at 04:23
  • But what I don't see is that your basis statement is expressible in the language of matrix algebra independently of the dimension. For any fixed n it seems fine, but what we need is one statement in that language, whose truth varies as n increases. – Joel David Hamkins Aug 02 '10 at 04:35

Fix an integer $n$. The dimension of a vector space $V$ is divisible by $n$ iff $V$ can be given the structure of a representation of the discrete Heisenberg group $H_n$ with central charge $1$. This is the Stone-von Neumann theorem. The multiplication table of $H_n$ is a finite-length statement in our language, which is true exactly in the dimensions lying in $n \mathbb Z$ and false otherwise.
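
As a sketch (the clock-and-shift model below is the standard realization behind Stone-von Neumann; its use here is my assumption, not part of the answer), in dimension $n$ the generators and the central relation can be exhibited explicitly:

```python
import numpy as np

def heisenberg_rep(n):
    """Clock-and-shift representation of the discrete Heisenberg group H_n
    on C^n: U^n = V^n = I, and the central element [V, U] acts as the
    scalar w = e^{2 pi i / n} (central charge 1)."""
    w = np.exp(2j * np.pi / n)
    U = np.roll(np.eye(n), 1, axis=0)        # shift: U e_i = e_{i+1 mod n}
    V = np.diag([w**j for j in range(n)])    # clock
    return U, V, w

n = 4
U, V, w = heisenberg_rep(n)
assert np.allclose(np.linalg.matrix_power(U, n), np.eye(n))
assert np.allclose(np.linalg.matrix_power(V, n), np.eye(n))
assert np.allclose(V @ U, w * (U @ V))       # the Heisenberg relation
```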

thel

One way to get the squares, as Victor asked in a comment, is the following: a simple module over $\mathfrak{sl}_2\oplus\mathfrak{sl}_2$ is of the form $V_n\otimes V_m$ (where $V_n$ is the $\mathfrak{sl}_2$-module of dimension $n+1$), and this has a submodule (for $\mathfrak{sl}_2$ acting diagonally) of dimension one exactly when $n=m$.
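
A numerical sanity check of this criterion (NumPy used purely as an illustration; the weight-basis construction of the irreducible $\mathfrak{sl}_2$-modules is the standard one, assumed here): the diagonal invariants in $V\otimes W$ are one-dimensional exactly when $\dim V = \dim W$, and zero-dimensional otherwise.

```python
import numpy as np

def sl2_irrep(n):
    """The n-dimensional irreducible sl_2-module in the standard weight basis."""
    h = np.diag([n - 1 - 2 * k for k in range(n)]).astype(float)
    e, f = np.zeros((n, n)), np.zeros((n, n))
    for k in range(1, n):
        e[k - 1, k] = k * (n - k)
        f[k, k - 1] = 1.0
    return e, f, h

def invariant_dim(a, b):
    """Dimension of the diagonal-sl_2 invariants (trivial submodule)
    in the tensor product of the a- and b-dimensional irreducibles."""
    e1, f1, h1 = sl2_irrep(a)
    e2, f2, h2 = sl2_irrep(b)
    Ia, Ib = np.eye(a), np.eye(b)
    E = np.kron(e1, Ib) + np.kron(Ia, e2)   # diagonal action on the tensor product
    F = np.kron(f1, Ib) + np.kron(Ia, f2)
    H = np.kron(h1, Ib) + np.kron(Ia, h2)
    M = np.vstack([E, F, H])
    return a * b - np.linalg.matrix_rank(M)  # nullity = number of invariants

assert invariant_dim(3, 3) == 1   # one-dimensional submodule exactly when dims agree
assert invariant_dim(3, 4) == 0
```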

  • Excellent, Mariano! I was nearly there. – Victor Protsak Aug 02 '10 at 04:48
  • Damn, I missed it. Ironically, this is why I used the construction I did. – Ryan Reich Aug 02 '10 at 05:06
  • By increasing the number of factors, we can get $f(\mathbb{N})$ for any monic polynomial $f$ with integer coefficients as the truth set. I can almost see how to get any diophantine set ($\iff$ recursively enumerable, by Matiyasevich) in this way. – Victor Protsak Aug 02 '10 at 05:12
  • How do you do the cubes, for example? – Mariano Suárez-Álvarez Aug 02 '10 at 05:20
  • Mariano: for cubes, consider reps of $\mathfrak{sl}_2^{\oplus 3}$ whose restriction to the diagonal $\mathfrak{sl}_2$ in each pair of factors contains a trivial submodule (pairs 12 and 23 are sufficient). By replacing "trivial mod" with "$m$-dimensional simple mod", you can get off the pure powers. Basically, you can first get dim $n_1\ldots n_k$ from the direct sum of $k$ copies of $\mathfrak{sl}_2$ and then specialize $n_i$ or the difference $n_i-n_j,$ etc., to a chosen natural number. (That only produces pols that split over $\mathbb{Z}$; I'm not sure how to tweak it to get the rest.) – Victor Protsak Aug 02 '10 at 06:18

Here's an idea. I think it works, but it should be checked by people who understand representation theory better than I do. It's inspired by the reference given in Ito's answer to Ricky Demer's question, but since I don't really understand the reference, I can't tell whether it's the same construction or not.

We can express the statement (for fixed $k$): there are $k$ projection matrices $P_1$, $P_2$, $\ldots$, $P_k$ so that $\sum_j P_j = I$. Now, suppose we have a polynomial, say $Q(x,y,z) = \sum_{j=1}^k x^{\alpha_j} y^{\beta_j} z^{\gamma_j}$ (here, I'm using 3 variables only to reduce the number of subscripts). I want to claim that we can find a statement whose truth set consists of the values $x^{\alpha_m} + y^{\beta_m} + z^{\gamma_m} + Q(x,y,z)$ for $x \neq y \neq z$, where $\alpha_m = \max_j \alpha_j$, etc. First, we say our space is the sum of $k+3$ subspaces by finding $k+3$ projection matrices as above. Now, we say the first space has dimension $x^{\alpha_m}$ by representing it as a module over $\mathfrak{sl}_2^{\oplus \alpha_m}$, with appropriate submodules acting diagonally, as suggested by Mariano and Victor. We should be able to write down similar equations which show that the $(j+3)$rd space has dimension $x^{\alpha_j} y^{\beta_j} z^{\gamma_j}$, for some $x$, $y$, $z$. Now, we want to require that the $x$ appearing in the $(j+3)$rd space is the same as the $x$ appearing in the first space. I want to do this by saying that there's a subspace of the $(j+3)$rd space which is also a module over $\mathfrak{sl}_2^{\oplus \alpha_j}$, and that there's an involution between the first subspace and this subspace of the $(j+3)$rd space which preserves this module structure.

I think this will work ... could people check it?

If it does work, it shows the question is undecidable, because we can use the same structure to get a diophantine equation ... keep the first three projection matrices the same, find new projection matrices for the rest of the space, and write down equations which give a different polynomial.

UPDATE 2:

I misunderstood Victor's question. I'll leave the comments I wrote anyway.

(1) I imposed the condition $x\neq y \neq z$ because I was worried that if $x=y$, you could somehow use a space of dimension $x^{\max(\alpha_j, \beta_j)}$ rather than $x^{\alpha_j + \beta_j}$. But I think I was being stupid.

(2) A term $k x^\alpha y^\beta$ can be composed by adding $k$ terms, each one being $x^\alpha y^\beta$. Is this what you were asking?

(3) The first three spaces were for showing the problem is undecidable. We have two polynomials with coefficients in $Z^+$, $Q_1(x_1, \ldots, x_3)$ and $Q_2(x_1, \ldots, x_3)$, and we want to know if there is a solution to $Q_1 = Q_2$ in the positive integers. Now, we do the above construction twice, with completely new variables except for the first three projectors $P_1$, $P_2$, and $P_3$. We use these to make sure the $x$ we substitute in $Q_1$ are equal to the $x$ we substitute in $Q_2$. On further thought, we don't need these, either.

Peter Shor
  • Yes, this basically works, but I don't understand the need for $x\ne y\ne z$ and the first 3 terms. Given any $k$ rings $R_1,\ldots,R_k$, you can form an idempotented ring $R=Re_1\oplus\cdots\oplus Re_k$. Then an $R$-mod is a direct sum of $R_i$-mods for different $i.$ Now take $R_i$ to be a direct sum of several copies of $\mathfrak{sl}_2$ and by imposing conditions on restrictions to diag emb $\mathfrak{sl}_2$ as in my comment to Mariano's 2nd answer, you can get any polynomial in many variables with coeff in $Z_+.$ But I think that for diophantine, you need more: $Z_+$-values of pols w/coeff in $Z$. – Victor Protsak Aug 02 '10 at 16:07
  • I have addressed this in an edit to my second answer. – Ryan Reich Aug 02 '10 at 16:22
  • Peter, here is what I was getting at: as you vary $f\in Z[x]$ over all polynomials with integer coefficients in any number of variables, their "image sets" $f(Z_+^n)\cap Z_+$ $\equiv$ diophantine subsets of $Z_+$ $\equiv$ recursively enumerable subsets of $Z_+$. I think that every image set is a truth value set. Your construction with idempotents shows how to get the image set of any sum of monomial terms (my embellishment of Mariano's construction showed how to get the terms themselves), i.e. how to realize the image set of any $f\in Z_+[x]$ as a truth set. Q: How to do it for $f\in Z[x]?$ – Victor Protsak Aug 02 '10 at 20:33
  • I think I see what you're getting at. You're asking whether all diophantine subsets of $Z_+$ are realizable as truth value sets? I'm not sure they are. Let's think of the diophantine subsets as those outputs which can be generated by a Turing machine $T$ for some input. Now, consider the function $f(k) = $ smallest input for which $T$ outputs $k$. Suppose there is a recursively enumerable set for which $f(k)$ grows faster than any computable function. Then the machinery to output $k$ must be contained in a space of dimension $k$. But doesn't Tarski's theorem say this is impossible? – Peter Shor Aug 02 '10 at 21:09
  • Put more simply (I should learn to think before posting) truth value sets are recursive, since you can use Tarski's theorem to tell whether a number $k$ is in them. Diophantine sets need not be recursive, just recursively enumerable. However, even though we can't get all diophantine sets, the question of whether the truth value set for a statement is empty is still undecidable. – Peter Shor Aug 02 '10 at 21:22
  • Does Tarski's theorem really apply? Formulas have to have a fixed number of variables, and Tarski's theorem involves only quantification of reals, not matrices... – Mariano Suárez-Álvarez Aug 03 '10 at 06:33
  • If you are given a sentence φ and a number k and want to decide whether φ is true in ℝ^k, you can rewrite φ to an equivalent sentence which only involves scalar variables, not vector or matrix variables, and therefore Tarski’s theorem applies. Therefore, the truth set of φ must be recursive. – Tsuyoshi Ito Aug 03 '10 at 11:02
  • But the number of variables depends on $k$, and that is not allowed. – Mariano Suárez-Álvarez Aug 03 '10 at 19:19
  • The truth set of $\phi$ can be thought of as a function from $\mathbb{Z}$ to $\{0,1\}$. For any fixed $k$, you can use Tarski's theorem to compute whether $\phi$ holds in $\mathbb{R}^k$. This means that the truth set is a computable function. Since some diophantine sets are recursively enumerable but not computable, we see that not all diophantine sets can be realized as truth sets. (In fact, not all computable functions can be realized as truth sets, since Tarski's theorem gives an upper bound on the computational complexity of any truth set.) – Peter Shor Aug 03 '10 at 21:09

Over an arbitrary field, you can decide whether $n$ is even or odd by testing whether there exists a matrix $A$ such that $\pm 1$ are not its eigenvalues and $A$ is conjugate to $A^{-1}$ (yes for even, no for odd).
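
In dimension $2$ a witness is a rotation matrix, conjugated to its inverse by a reflection (this particular witness is my illustration, not taken from the answer; in dimension $1$, $A \sim A^{-1}$ forces $a = 1/a$, i.e. $a = \pm 1$, which is excluded):

```python
import numpy as np

# A rotation by angle t has eigenvalues e^{+-it}, neither equal to +-1,
# and the reflection S = diag(1, -1) conjugates it to its inverse.
t = 0.7
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
S = np.diag([1.0, -1.0])
assert np.allclose(S @ A @ np.linalg.inv(S), np.linalg.inv(A))
```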


What about the following easy-looking statement:

for every $n\times n$ matrix $A$, one has $\det(A^T-A)=0$,

which is true if $n$ is odd, but false if $n$ is even?
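
Numerically (the underlying parity argument is the standard one: $M = A^T - A$ is skew-symmetric, and $\det(M) = \det(M^T) = \det(-M) = (-1)^n \det(M)$ forces $\det(M) = 0$ when $n$ is odd):

```python
import numpy as np

# Even dimension: a counterexample.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # A^T - A = [[0, -1], [1, 0]], det = 1
assert np.isclose(np.linalg.det(A.T - A), 1.0)

# Odd dimension: det(B^T - B) vanishes for every B.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
assert np.isclose(np.linalg.det(B.T - B), 0.0)
```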

Denis Serre