
One way to state what I am calling the theorem of dimensionality is: every vector space of finite dimension has a basis (a linearly independent spanning set) with the number of elements equal to the dimension of the vector space. Any set of fewer vectors will not span the vector space, and any set of more vectors than are in the basis will be linearly dependent.

In The Classical Groups: Their Invariants and Representations, Weyl states the following:

Lemma (1.1.A). (Principle of the irrelevance of algebraic inequalities.) A $k$-polynomial $F\left(x,y,\dots\right)$ vanishes identically if it vanishes numerically for all sets of rational values $x=\alpha,y=\beta,\dots$ subject to a number of algebraic inequalities

$$ R_{1}\left(\alpha,\beta,\dots\right)\ne0,R_{2}\left(\alpha,\beta,\dots\right)\ne0,\dots. $$

Does this amount to a theorem of dimensionality for vector spaces of multi-variable polynomials?

The identical vanishing of $F$ will require all coefficients $a_0=0,a_1=0,\dots,b_0=0,b_1=0,\dots$. This is the same condition used to define linear dependence, and thereby linear independence, in vector spaces. I therefore believe it is possible to transform Weyl's lemma into a statement of vector space dimensionality. Given the number of arguments $r$ and the formal degree $n$ of $F$, the number of dimensions will be $r+r^2+\dots+r^n$.
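
As a concrete (and purely illustrative) check of this counting, the following sympy sketch enumerates, for small hypothetical values of $r$ and $n$, both the ordered products of variables counted by $r+r^2+\dots+r^n$ and the distinct commutative monomials of degree $1$ through $n$; the two counts differ once commuting factors are identified.

```python
# Illustrative sketch: compare the count r + r^2 + ... + r^n (ordered
# products of variables) with the number of distinct commutative
# monomials of degree 1..n in r variables.
from itertools import product
from functools import reduce
from operator import mul
from sympy import symbols

r, n = 3, 2  # small hypothetical example values
xs = symbols(f"x1:{r + 1}")  # x1, x2, x3

# Ordered products of 1..n variables: r + r^2 + ... + r^n of them
ordered_count = sum(r**i for i in range(1, n + 1))

# Distinct commutative monomials of degree 1..n
monomials = {reduce(mul, w) for i in range(1, n + 1)
             for w in product(xs, repeat=i)}

print(ordered_count)   # 12 = 3 + 9
print(len(monomials))  # 9: commuting factors are identified
```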

My original post included the following proposition, which I found to be in need of refinement: Apparently the multinomial expansions of $\left(\alpha+\beta+\dots\right)^{i}$, $i=1,\dots,n$, form a basis for the vector space of polynomials of degree $n$ in the variables $\alpha,\beta,\dots$. If we use

$$ R_{i}\left(\alpha,\beta,\dots\right)=\left(\alpha+\beta+\dots\right)^{i}\ne0 $$

as the inequalities indicated in the statement of lemma (1.1.A), then for $FR_{1}\dots R_{n}$ to vanish identically would require $F$ to vanish identically.

In order for this to work, polynomials with $a_0\ne0$ would have to be excluded. This is similar to the distinction between the set of general affine transformations and that of centered affine transformations. This restriction does not significantly weaken Weyl's lemma, since no polynomial with $a_0\ne0$ vanishes identically.

After some thought, I've come to believe my proposed $R_i$ will not satisfy the requirements of Weyl's lemma.

I believe it would work if the domain of variables and coefficients were restricted to non-negative values. The problem with my proposition that occurred to me is illustrated by the first-degree binomial $R\left(\alpha,\beta\right)=a \alpha + b \beta$, which vanishes whenever $a \alpha = -b \beta$. One reading of Weyl's lemma would permit us to 'patch' $R$ for the set of values with $\alpha\beta\ne0\land R\left(\alpha,\beta\right)=0$. That is, we might read the lemma to say that for every set of values of the arguments there are inequalities $R_1\ne0,R_2\ne0,\dots$, but $R_1,R_2,\dots$ need not be the same formal expressions for each set of argument values.
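
The failure mode can be seen concretely in a short sketch (with the hypothetical choice $a=b=1$): $R(\alpha,\beta)=\alpha+\beta$ is nonzero as a formal polynomial, yet it vanishes at rational points such as $(1,-1)$, so the inequality $R\ne0$ cannot hold at every set of argument values.

```python
# Sketch of the failure mode, with a = b = 1 assumed: R = alpha + beta is
# a nonzero polynomial, yet it vanishes on infinitely many rational points.
from sympy import symbols

alpha, beta = symbols("alpha beta")
R = alpha + beta

print(R == 0)                        # False: R is not the zero polynomial
print(R.subs({alpha: 1, beta: -1}))  # 0: R vanishes at (1, -1)
# Restricting to non-negative alpha, beta (with a, b > 0) removes such zeros.
```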

  • I don't know what you mean by 'a theorem of dimensionality.' In slightly more modern language, Weyl is saying that the complement of a finite number of Zariski closed sets is Zariski dense. – Qiaochu Yuan Oct 08 '19 at 00:06
  • One way to state the theorem of dimensionality is: every vector space of finite dimension has a linearly independent spanning basis set with the number of elements equal to the dimension of the vector space. Any set of fewer vectors will not span the vector space, and any set of more vectors than are in the spanning basis will be linearly dependent. – Steven Thomas Hatton Oct 08 '19 at 00:39
  • I don't understand what connection you're trying to make between vector space dimensions and this theorem of Weyl. Can you say more? – Qiaochu Yuan Oct 08 '19 at 03:27
  • It is apparent to me that multivariable polynomials of finite degree $n$ form an $n$-dimensional vector space in the same way as single variable polynomials do. See Edwards, Advanced Calculus of Several Variables, Example 6, Chapter 1. It is also evident that the multinomial expansions will form a basis for the space. I probably need to add that the multinomials should have fixed coefficients $a_i$. We might consider the multinomial expansions with $a_i=1$ to be the "natural basis". These become the $R_i$ in Weyl's lemma. I'm posting from my phone, and may not be able to spell this out – Steven Thomas Hatton Oct 08 '19 at 04:42
  • I have to apologise for my overly cryptic notation. I've replaced '$,$' with '$+$' in the original post. – Steven Thomas Hatton Oct 08 '19 at 04:53
  • No, they are two different things – orangeskid Oct 08 '19 at 05:28
  • @Orest Bucicovschi I believe that the identical vanishing of $F$ will require the existence of an equivalent form in which all of the $a_i=0$. That in turn can be expressed as a linear combination of the individual terms in the multinomial expansions of the sum of the argument variables. I believe it can be shown that the set of these terms is linearly independent. Do you agree with this? – Steven Thomas Hatton Oct 08 '19 at 08:48
  • I don't understand what you are doing in the last paragraph. ${R\left(\alpha,\beta\right)=0}$ is not an inequality; it is an equality. – darij grinberg Oct 10 '19 at 18:44
  • Weyl's theorem is nowadays understood as a corollary of the following two facts: Fact 1: A polynomial over an infinite field vanishes identically if it vanishes numerically for all sets of values in the field. (In your case, the field is $\mathbb{Q}$.) Fact 2: Any ring of polynomials over an integral domain is an integral domain. How do these two facts imply Weyl's theorem? Well, under the assumptions of Weyl's theorem, the product $F R_1 R_2 \cdots$ is a polynomial that vanishes numerically for all sets of values in $\mathbb{Q}$. Hence, by Fact 1, ... – darij grinberg Oct 10 '19 at 18:46
  • ... this product vanishes identically. Since $R_1, R_2, \ldots$ are nonzero as polynomials, we thus can use Fact 2 to conclude that $F$ vanishes identically. Done. Note that this relies crucially on the tacit assumption that there are only finitely many polynomials $R_1, R_2, \ldots$, and that none of them vanishes identically. – darij grinberg Oct 10 '19 at 18:47
  • Fact 1 can, indeed, be viewed through the lens of linear algebra. In fact, it says that the linear map that sends each polynomial to its full "list" of values at all rational points (of course, it is not so much a "list" as a family indexed by the rational point) is injective. Since this map is linear, you can restate its injectivity as a linear independence of its values at monomials. – darij grinberg Oct 10 '19 at 18:49
  • @darijgrinberg If you post that as an answer, I will be happy to accept it. – Steven Thomas Hatton Oct 10 '19 at 18:56
  • Thing is, I don't know how well this answers your question... – darij grinberg Oct 10 '19 at 19:10
  • @darijgrinberg It looks pretty much like what I finally arrived at after giving it some thought. When I originally posted, it just "smelled" like a statement of linear independence. The dimension of the space generated by the monomial terms of a generic polynomial of formal degree $n$ will be equal to the number of such terms. – Steven Thomas Hatton Oct 10 '19 at 19:23

1 Answer


This answer is a polished version of what I wrote in the comments.

Let me first restate Weyl's theorem in a modern language. We fix an infinite field $\mathbb{K}$ (for example, $\mathbb{Q}$). Its elements will be called scalars.

Theorem 1 (Weyl's principle of irrelevance of algebraic inequalities). Let $n$ and $m$ be nonnegative integers. Let $\mathcal{P}$ be the polynomial ring $\mathbb{K}\left[ x_{1},x_{2},\ldots,x_{n}\right] $. Let $F,R_{1} ,R_{2},\ldots,R_{m}\in\mathcal{P}$ be polynomials such that $R_{1} ,R_{2},\ldots,R_{m}$ are nonzero. Assume that every $n$-tuple $\left( a_{1},a_{2},\ldots,a_{n}\right) \in\mathbb{K}^{n}$ of scalars that satisfies \begin{equation} R_{i}\left( a_{1},a_{2},\ldots,a_{n}\right) \neq0\qquad\text{for all } i\in\left\{ 1,2,\ldots,m\right\} \label{darij1.eq.t1.1} \tag{1} \end{equation} also satisfies \begin{equation} F\left( a_{1},a_{2},\ldots,a_{n}\right) =0. \label{darij1.eq.t1.2} \tag{2} \end{equation} Then, $F=0$.

We shall derive this from the following two known facts:

Theorem 2. Let $n$ be a nonnegative integer. Let $\mathcal{P}$ be the polynomial ring $\mathbb{K}\left[ x_{1},x_{2},\ldots,x_{n}\right] $. Then, $\mathcal{P}$ is an integral domain.

Theorem 3. Let $n$ be a nonnegative integer. Let $\mathcal{P}$ be the polynomial ring $\mathbb{K}\left[ x_{1},x_{2},\ldots,x_{n}\right] $. Let $G\in\mathcal{P}$ be nonzero. Then, there exist $n$ scalars $a_{1} ,a_{2},\ldots,a_{n}\in\mathbb{K}$ such that \begin{equation} G\left( a_{1},a_{2},\ldots,a_{n}\right) \neq0. \label{darij1.eq.t3.1} \tag{3} \end{equation}

Theorem 2 is a particular case of the known fact that any polynomial ring over an integral domain must itself be an integral domain. See, e.g., math.stackexchange question #2187381 for a proof in the case of univariate polynomial rings; but the general case of multivariate polynomial rings can be reduced to this case by induction (just adjoin the $n$ indeterminates one by one).

Theorem 3 is a well-known fact that is often stated (somewhat imprecisely, but "morally right") in the form "$\mathbb{K}^{n}$ is Zariski-dense in $\mathbb{K}^{n}$"; it is the reason why the old-fashioned habit of identifying polynomials (defined formally as families of coefficients) with polynomial functions (i.e., functions from $\mathbb{K}^{n}$ to $\mathbb{K}$ that are given by a polynomial formula) is harmless (when $\mathbb{K}$ is infinite!). (If Theorem 3 were false, then there would exist different polynomials that give rise to the same polynomial function, and thus we could not identify the former with the latter. This indeed happens when $\mathbb{K}$ is finite; for example, the univariate polynomials $x^{2}-x$ and $0$ over $\mathbb{F}_{2}$ are distinct, but the corresponding functions from $\mathbb{F}_{2}$ to $\mathbb{F}_{2}$ are identical.)
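
For concreteness, here is a quick computational check of that $\mathbb{F}_{2}$ example (an illustration only; it works over the integers and reduces values mod $2$ rather than constructing $\mathbb{F}_{2}$ itself):

```python
# Quick check of the F_2 example: x^2 - x is not the zero polynomial,
# but every value it takes on {0, 1}, reduced mod 2, is zero.
from sympy import symbols

x = symbols("x")
p = x**2 - x

print(p == 0)                              # False: nonzero as a polynomial
print([p.subs(x, a) % 2 for a in (0, 1)])  # [0, 0]: zero as a function on F_2
```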

Since you seem to be interested in finite-dimensional structures, let me give a nonstandard proof of Theorem 3 (more precisely, a reference):

Proof of Theorem 3. The polynomial $G$ has at least one nonzero coefficient (since it is nonzero). Let us pick such a coefficient of largest possible degree. Let this be the coefficient before $x_{1}^{t_{1}}x_{2}^{t_{2}}\cdots x_{n}^{t_{n}}$. Then, $\deg G=t_{1}+t_{2}+\cdots+t_{n}$. Note that $t_{1},t_{2},\ldots,t_{n}$ are finite numbers, while $\mathbb{K}$ is an infinite field; thus, $\left\vert \mathbb{K}\right\vert >t_{i}$ for each $i\in\left\{ 1,2,\ldots,n\right\} $. Hence, the Combinatorial Nullstellensatz (Theorem 1.2 in Noga Alon, Combinatorial Nullstellensatz) (applied to $F=\mathbb{K}$ and $f=G$ and $S_{i}=\mathbb{K}$) yields that there are $s_{1}\in\mathbb{K}$, $s_{2}\in\mathbb{K}$, $\ldots$, $s_{n}\in\mathbb{K}$ such that $G\left( s_{1},s_{2},\ldots,s_{n}\right) \neq0$. Consider these $s_{1},s_{2},\ldots,s_{n}$. Hence, there exist $n$ scalars $a_{1},a_{2} ,\ldots,a_{n}\in\mathbb{K}$ such that $G\left( a_{1},a_{2},\ldots ,a_{n}\right) \neq0$ (namely, $a_{i}=s_{i}$). Thus, Theorem 3 is proven. $\blacksquare$
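
As a purely illustrative companion to this proof (not Alon's argument itself), the following brute-force sketch exhibits a nonvanishing point of a hypothetical nonzero polynomial $G$ on a minimal grid $S_{1}\times S_{2}$ with $\left\vert S_{i}\right\vert >t_{i}$:

```python
# Illustrative brute-force companion to the proof: G below is a
# hypothetical nonzero polynomial whose highest-degree monomial is
# x1^2 * x2, so t = (2, 1); the Combinatorial Nullstellensatz guarantees
# a nonvanishing point on any grid with |S1| > 2 and |S2| > 1.
from itertools import product
from sympy import symbols

x1, x2 = symbols("x1 x2")
G = x1**2 * x2 - x1 * x2  # hypothetical example polynomial

S1, S2 = (0, 1, 2), (0, 1)  # minimal grid sizes: |S1| = 3, |S2| = 2
witness = next((a, b) for a, b in product(S1, S2)
               if G.subs({x1: a, x2: b}) != 0)
print(witness)  # (2, 1): G(2, 1) = 4 - 2 = 2, which is nonzero
```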

Now Theorem 1 is a stone's throw away:

Proof of Theorem 1. Assume the contrary. Thus, $F\neq0$. Hence, we know that the polynomials $F,R_{1},R_{2},\ldots,R_{m}$ are nonzero (since we already know that $R_{1},R_{2},\ldots,R_{m}$ are nonzero). Thus, their product $FR_{1}R_{2}\cdots R_{m}$ is nonzero as well (since Theorem 2 shows that $\mathcal{P}$ is an integral domain). Hence, Theorem 3 (applied to $G=FR_{1}R_{2}\cdots R_{m}$) shows that there exist $n$ scalars $a_{1} ,a_{2},\ldots,a_{n}\in\mathbb{K}$ such that \begin{align*} \left( FR_{1}R_{2}\cdots R_{m}\right) \left( a_{1},a_{2},\ldots ,a_{n}\right) \neq0. \end{align*} Consider these $a_{1},a_{2},\ldots,a_{n}$. Now, \begin{align} & F\left( a_{1},a_{2},\ldots,a_{n}\right) \cdot\prod_{i=1}^{m}R_{i}\left( a_{1},a_{2},\ldots,a_{n}\right) \nonumber\\ & =\left( FR_{1}R_{2}\cdots R_{m}\right) \left( a_{1},a_{2},\ldots ,a_{n}\right) \neq0. \label{darij1.pf.t1.0} \tag{4} \end{align} But a product can only be nonzero if all its factors are nonzero. Thus, \eqref{darij1.pf.t1.0} entails \begin{equation} F\left( a_{1},a_{2},\ldots,a_{n}\right) \neq0 \label{darij1.pf.t1.2} \tag{5} \end{equation} and \begin{equation} R_{i}\left( a_{1},a_{2},\ldots,a_{n}\right) \neq0\qquad\text{for all } i\in\left\{ 1,2,\ldots,m\right\} . \label{darij1.pf.t1.3} \tag{6} \end{equation} Thus, \eqref{darij1.eq.t1.2} shows that $F\left( a_{1},a_{2},\ldots ,a_{n}\right) =0$ (since we have \eqref{darij1.pf.t1.3}). But this contradicts \eqref{darij1.pf.t1.2}. This contradiction shows that our assumption was false. $\blacksquare$
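
To see the mechanics of this proof in action, here is a small sketch (with hypothetical example polynomials $F$, $R_{1}$, $R_{2}$ over $\mathbb{Q}$) that runs the contrapositive: since $FR_{1}R_{2}$ is nonzero, a search finds a point where it is nonzero, and at that point every $R_{i}$ is nonzero while $F$ is nonzero too:

```python
# Sketch of the contrapositive with hypothetical polynomials over Q:
# F and the R_i are nonzero, so F * R_1 * R_2 is nonzero (Theorem 2),
# and Theorem 3 promises a point where the product -- hence every
# factor -- is nonzero. We find one by brute-force search.
from itertools import product
from sympy import symbols

a1, a2 = symbols("a1 a2")
F = a1 - a2                  # hypothetical nonzero F
Rs = [a1 + a2, a1 * a2 - 1]  # hypothetical nonzero R_1, R_2

P = F
for R in Rs:
    P = P * R  # P = F * R_1 * R_2

point = next({a1: u, a2: v} for u, v in product(range(-3, 4), repeat=2)
             if P.subs({a1: u, a2: v}) != 0)
print(point)                        # e.g. {a1: -3, a2: -2}
print(F.subs(point))                # -1, nonzero
print([R.subs(point) for R in Rs])  # [-5, 5], all nonzero
```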

Theorem 3 can be reframed as a linear-algebraic statement: Namely, the map \begin{align*} \mathcal{P} & \rightarrow\prod_{\left( a_{1},a_{2},\ldots,a_{n}\right) \in\mathbb{K}^{n}}\mathbb{K},\\ F & \mapsto\left( F\left( a_{1},a_{2},\ldots,a_{n}\right) \right) _{\left( a_{1},a_{2},\ldots,a_{n}\right) \in\mathbb{K}^{n}} \end{align*} that sends each polynomial $F\in\mathcal{P}$ to the family $\left( F\left( a_{1},a_{2},\ldots,a_{n}\right) \right) _{\left( a_{1},a_{2},\ldots ,a_{n}\right) \in\mathbb{K}^{n}}$ of all its values at points in $\mathbb{K}^{n}$ is a $\mathbb{K}$-linear map (and even a $\mathbb{K}$-algebra homomorphism). Theorem 3 states that this $\mathbb{K}$-linear map is injective. This is equivalent to saying that the images of the monomials in $\mathcal{P}$ under this map are $\mathbb{K}$-linearly independent. This viewpoint is occasionally useful, but (to my knowledge) not here. Note that we are talking about an infinite family of monomials, but of course linear independence can be rephrased in finitary terms (just show that finite subfamilies are linearly independent).
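
In finitary terms, this linear independence can be checked on samples: the sketch below (with arbitrarily chosen monomials and sample points) evaluates a finite family of monomials at finitely many points and verifies that the resulting evaluation matrix has full column rank:

```python
# A finitary check (arbitrary sample points chosen here) that the images
# of finitely many monomials under the evaluation map are K-linearly
# independent: the matrix of their values at the sample points has full
# column rank.
from sympy import Matrix, S, symbols

x, y = symbols("x y")
monomials = [S.One, x, y, x * y]                   # a finite subfamily
points = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]  # sample points in Q^2

M = Matrix([[m.subs({x: u, y: v}) for m in monomials] for u, v in points])
print(M.rank() == len(monomials))  # True: value vectors are independent
```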

  • Please fix the typo in the proof of Theorem 1. I'm not sure you meant $\mathcal{F}$. And if you did, then I don't know what you meant. As for wanting to work with finite structures: I've gotta start somewhere. Fully understanding your post may take me some time (i.e., months). I have 50 Mathematica notebooks which I have created for the review of mathematics pertinent to theoretical physics. If I don't limit the time spent on each topic, I will never complete a review cycle. But I promise you I will return to it. – Steven Thomas Hatton Oct 13 '19 at 02:20
  • @StevenThomasHatton Thanks for noticing the typo! You won't need much for this argument; if you know why the polynomials form a ring, Theorem 2 is a couple paragraphs to prove, and the Noga Alon paper I cited (link just corrected too!) proves its theorem on page 3 (and is self-contained). – darij grinberg Oct 13 '19 at 02:27
  • I have to confess that I have not expressed my thoughts well in asking my question. I'm not sure we are talking about the same vector spaces. You may be talking about the space dual to mine. There are many obvious errors in these notes. They are a work in progress. See page 9 "Orthogonalize the linearly independent elements $1,x,x^2…,x^n,… $ of the vector space ..." https://drive.google.com/file/d/1UakX3azIEeu-LlpxDzWqDI15Q-6nOiVj/view?usp=drivesdk – Steven Thomas Hatton Oct 13 '19 at 05:38
  • @StevenThomasHatton: I'm not sure why our spaces should be different; we are both talking about polynomials. But the space of polynomials is indeed self-dual in an appropriate sense when the base field $\mathbb{K}$ has characteristic $0$; in fact, the inner product you describe is one way of setting up this self-duality. – darij grinberg Oct 15 '19 at 18:50