
Let us consider the following property which is a constrained version of $(\star)$ (see Remark below):

$$\begin{align*}\bbox[#EFF,15px,border:2px solid blue] {\begin{aligned}&\text{For any } n, \text{ for any } a_1,\ldots,a_n \text{ with } \sum_{i=1}^{n}a_i=0,\\ &\text{and any } x_1,\ldots,x_n > 0,\\ &\qquad S:=\sum_{i,j=1}^n a_ia_j\ln(x_i+x_j)\leq 0\end{aligned}}\end{align*}\tag{*}$$

Extensive numerical simulations have convinced me that (*) holds.
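Such a check is easy to reproduce. Here is a sketch in Python (the helper name `S` and the random sampling scheme are my own, not part of the question):

```python
import math
import random

def S(a, x):
    """S = sum over i,j of a_i * a_j * ln(x_i + x_j)."""
    n = len(a)
    return sum(a[i] * a[j] * math.log(x[i] + x[j])
               for i in range(n) for j in range(n))

random.seed(0)
worst = -float("inf")
for _ in range(2000):
    n = random.randint(2, 6)
    a = [random.uniform(-1.0, 1.0) for _ in range(n)]
    a[-1] -= sum(a)                          # enforce sum(a) = 0
    x = [random.uniform(0.01, 10.0) for _ in range(n)]
    worst = max(worst, S(a, x))

# worst stays (numerically) non-positive
assert worst <= 1e-12
```

Forcing the last coordinate with `a[-1] -= sum(a)` is just a convenient way to sample the hyperplane $\sum_i a_i = 0$.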

  • The case $n=2$ is very simple to establish: it is a direct consequence of the ordinary Arithmetic-Geometric Mean inequality.

  • The case $n=3$ is more complicated. I have found a proof which is interesting in itself (see below), using the weighted version of the Arithmetic-Geometric Mean inequality.

Proof for the case $n=3$.

I will use the following notation:

$$x:=x_1, \quad y:=x_2, \quad z:=x_3, \qquad a:=a_1, \quad b:=a_2, \quad c:=a_3=-a-b.$$

Since the LHS of (*) is homogeneous of degree $2$ in $(a,b,c)$, property (*) is preserved by any change of the form $(a,b,c) \to (ka,kb,kc)$, $k \in \mathbb{R}\setminus\{0\}$. Moreover, two of the three coefficients must share the same sign; permuting indices and taking $k=-1$ if necessary, we can assume WLOG that we are in the case where

$$a>0, \ \ \ b>0, \ \ \ c=-a-b<0$$

Property (*) can be written in the following form:

$$\ln[(2x)^{a^2}(2y)^{b^2}(2z)^{c^2}(x+y)^{2ab}(x+z)^{2ac}(y+z)^{2bc}] \leq 0$$

In other words:

$$(2x)^{a^2}(2y)^{b^2}(2z)^{(-a-b)^2}(x+y)^{2ab}(x+z)^{2a(-a-b)}(y+z)^{2b(-a-b)} \leq 1$$

which can be transformed into:

$$\left(\frac{4xz}{(x+z)^2}\right)^{a^2}\left(\frac{4yz}{(y+z)^2}\right)^{b^2}\left(\frac{2z(x+y)}{(x+z)(y+z)}\right)^{2ab} \leq 1\tag{1}$$
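One can check numerically that this transformation is an identity: the left-hand side of (1) equals $e^S$ exactly (a sketch; the variable names are mine):

```python
import math
import random

random.seed(1)
for _ in range(100):
    a = random.uniform(-2.0, 2.0)
    b = random.uniform(-2.0, 2.0)
    c = -a - b
    x, y, z = (random.uniform(0.1, 5.0) for _ in range(3))
    # S for n = 3, written out with c = -a-b
    S = (a*a*math.log(2*x) + b*b*math.log(2*y) + c*c*math.log(2*z)
         + 2*a*b*math.log(x + y) + 2*a*c*math.log(x + z)
         + 2*b*c*math.log(y + z))
    # left-hand side of (1)
    lhs1 = ((4*x*z/(x + z)**2)**(a*a)
            * (4*y*z/(y + z)**2)**(b*b)
            * (2*z*(x + y)/((x + z)*(y + z)))**(2*a*b))
    assert abs(math.exp(S) - lhs1) <= 1e-9 * max(1.0, lhs1)
```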

An important point here is that (1) is doubly homogeneous, more precisely, is invariant by a homogeneous change of variable of any of the two following kinds:

$$(x,y,z) \to \mu(x,y,z), \qquad (a,b) \to \lambda(a,b) \ \ \text{(the latter already used above)}$$

This allows us to replace

  • $z$ by $1$.

  • $(a,b)$ by $\left(\frac{a}{a+b}, \frac{b}{a+b}\right)$, respectively.

Therefore, by the weighted Arithmetic-Geometric Means inequality, (1) will be proven if the corresponding weighted Arithmetic Mean is itself at most $1$ (note that the weights $\frac{a^2}{(a+b)^2}$, $\frac{b^2}{(a+b)^2}$, $\frac{2ab}{(a+b)^2}$ are nonnegative and sum to $1$):

$$\frac{a^2}{(a+b)^2}\left(\frac{4x}{(x+1)^2}\right)+\frac{b^2}{(a+b)^2}\left(\frac{4y}{(y+1)^2}\right)+\frac{2ab}{(a+b)^2}\left(\frac{2(x+y)}{(x+1)(y+1)}\right) \leq 1\tag{2}$$

Multiplying through by $(a+b)^2$, this becomes:

$$\left(\frac{4a^2x}{(x+1)^2}\right)+\left(\frac{4b^2y}{(y+1)^2}\right)+\left(\frac{4ab(x+y)}{(x+1)(y+1)}\right) \leq a^2+2ab+b^2\tag{3}$$

which is equivalent, by grouping everything on the LHS, to

$$a^2\frac{4x-(x+1)^2}{(x+1)^2}+b^2\frac{4y-(y+1)^2}{(y+1)^2}+2ab\frac{2(x+y)-(x+1)(y+1)}{(x+1)(y+1)} \leq 0$$

Or equivalently:

$$-a^2\frac{(x-1)^2}{(x+1)^2}-b^2\frac{(y-1)^2}{(y+1)^2}-2ab\frac{(x-1)(y-1)}{(x+1)(y+1)} \leq 0$$

giving

$$-\left(a\frac{x-1}{x+1}+b\frac{y-1}{y+1}\right)^2 \leq 0 \tag{4}$$

which is evidently true.

Remark: equality holds in (4) if $x=y=z=1$.
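The chain (3) $\Rightarrow$ (4) can also be verified numerically as an exact algebraic identity (a sketch; the names are mine):

```python
import random

random.seed(2)
for _ in range(200):
    a = random.uniform(0.1, 3.0)
    b = random.uniform(0.1, 3.0)
    x = random.uniform(0.1, 5.0)
    y = random.uniform(0.1, 5.0)
    lhs3 = (4*a*a*x/(x + 1)**2 + 4*b*b*y/(y + 1)**2
            + 4*a*b*(x + y)/((x + 1)*(y + 1)))
    square = (a*(x - 1)/(x + 1) + b*(y - 1)/(y + 1))**2
    # moving (a+b)^2 to the left turns (3) into minus a perfect square, i.e. (4)
    assert abs(lhs3 - (a + b)**2 + square) <= 1e-9
```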


Now, I have two questions:

  • How can relationship (*) be established for a general dimension $n$ (Generalisation of the above proof ? Other approaches ?)

  • Is there some theory behind all that ?


Remark: This question is in fact a follow-up to a recent question that was closed because the claimed property has counter-examples.

The previous question was :

Given $a_1,\ldots,a_n,x_1,\ldots,x_n\in\mathbf{R}$ with $\sum_{i=1}^{n}a_i=0$, prove that $$S=\sum_{i,j}a_ia_j\ln|x_i+x_j|\leq 0 \tag{$\star$}$$

Property $(\star)$ has counter-examples. For example, if $n=3$, taking

$$(a_1,a_2,a_3)=(1,0,-1) \ \ \text{and} \ \ (x_1,x_2,x_3)=(-1,-1,2)$$

gives $S > 0$.
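For the record, a short computation confirming this counter-example (sketch):

```python
import math

a = (1, 0, -1)
x = (-1, -1, 2)
S = sum(a[i] * a[j] * math.log(abs(x[i] + x[j]))
        for i in range(3) for j in range(3))
# only indices 1 and 3 contribute (a_2 = 0):
#   S = ln|-2| + 2*(1)(-1)*ln|1| + ln|4| = ln 8 > 0
assert abs(S - math.log(8)) < 1e-12
assert S > 0
```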

Jean Marie

3 Answers


Yes, $(\text{*})$ holds; here's a (very short) proof using a Frullani integral: $$\sum_{i,j=1}^n a_i a_j\ln(x_i+x_j)=\sum_{i,j=1}^n a_i a_j\int_0^\infty\frac{e^{-t}-e^{-(x_i+x_j)t}}{t}\,dt=-\int_0^\infty\left(\sum_{j=1}^n a_j e^{-x_j t}\right)^2\frac{dt}{t}\leqslant 0.$$
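A numerical sanity check of this identity (my own helper names; a pure-Python trapezoid quadrature after the substitution $t = e^u$, so that $dt/t = du$):

```python
import math

def S(a, x):
    n = len(a)
    return sum(a[i] * a[j] * math.log(x[i] + x[j])
               for i in range(n) for j in range(n))

def rhs(a, x, lo=-20.0, hi=8.0, steps=40000):
    """-int_0^inf (sum_j a_j e^{-x_j t})^2 dt/t via t = e^u (trapezoid rule)."""
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        t = math.exp(lo + k * h)
        g = sum(aj * math.exp(-xj * t) for aj, xj in zip(a, x))
        total += (0.5 if k in (0, steps) else 1.0) * g * g
    return -total * h

a = [1.0, -0.4, -0.6]          # sums to 0
x = [0.7, 1.3, 2.5]
assert abs(S(a, x) - rhs(a, x)) < 1e-5
assert S(a, x) < 0
```

The integrand behaves like $t^2$ near $0$ (since $\sum_j a_j = 0$) and decays exponentially, so truncating $u$ to $[-20, 8]$ is harmless.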

metamorphy
  • The idea to use Frullani is very good but I don't understand how you obtain the equality with the last integral (in particular how the fact that $\sum_{j=1}^n a_j=0$ is taken into account). – Jean Marie May 31 '22 at 06:58
  • @JeanMarie: Due to $\sum_{i,j=1}^n a_i a_j=\left(\sum_{j=1}^n a_j\right)^2=0$ the term with $e^{-t}$ vanishes. – metamorphy May 31 '22 at 07:07
  • That's evident! I should have guessed it! Thank you very much for this very short and elegant solution. – Jean Marie May 31 '22 at 07:13
  • I just saw an interesting solution on a similar issue using the Frullani integral, and discovered that it was ... yours :) – Jean Marie May 31 '22 at 07:26

The Laplace transform is your friend here.

Assume that $\phi\colon [0, \infty)\to \mathbb{R}$ is a function such that its derivative $\phi'(x)$ is the Laplace transform of a positive function:

$$\phi'(x) = \int_{0}^\infty e^{-x t} \mu(t) dt $$

Then for every $a_i$, $i=1,\ldots,n$ with $\sum a_i =0$ and $x_1$, $\ldots$, $x_n \in [0, \infty)$ we have $$\sum \phi(x_i+x_j) a_i a_j \le 0$$

Note that the function $\phi \colon x\mapsto \log x$ satisfies the condition: we have $$\phi'(x)= \frac{1}{x} = \int_{0}^{\infty} e^{-x t}\, dt$$ so $\mu(t) \equiv 1$ here.

Fix a value $\bar x\in [0, \infty)$. It is only of temporary use, and will be discarded at the end. For instance in the case of $\phi(x) = \log x$ we could take $\bar x = 1$.

Let's note that $$\sum \phi(x_i +x_j) a_i a_j = \sum_{i j}(\phi(x_i + x_j) - \phi(\bar x)) a_i a_j$$ since $\sum a_i a_j = (\sum a_i)^2 = 0$.

Now, for every $x$ we can write $$\phi(x) - \phi(\bar x) = \int_{\bar x}^ x \phi'(u) d u= \int_{\bar x}^ x \int_{0}^{\infty} e^{-u t} \mu(t) dt du = \int_{0}^{\infty} \int_{\bar x}^{x} e^{-u t} du \mu(t)dt $$

Now, we have $$\int_{\bar x}^x e^{-u t} du = \frac{e^{- \bar x t} - e^{- x t}}{t}$$ Put together we have $$\phi(x) -\phi(\bar x) = \int_0^{\infty} \frac{e^{- \bar x t} - e^{- x t}}{t} \mu(t) d t$$

This is the fundamental equality that will be used. We write $$\sum \phi(x_i + x_j) a_i a_j= \sum_{i j} ( \phi(x_i + x_j) - \phi(\bar x) ) a_i a_j = \\ \sum_{i j} \left(\int_0^{\infty} \frac{ e^{-\bar x t} - e^{-(x_i+x_j) t} }{t} \mu(t)\, dt \right) a_i a_j = \int_{0}^{\infty} \frac{ \sum_{i j} \left(e^{-\bar x t} - e^{-(x_i+x_j) t} \right)a_i a_j}{t}\,\mu(t)\, dt $$ Almost there: recall again that $\sum_{i j} a_i a_j = 0$. Hence for every $t$ the numerator in the above integral equals $-\sum_{i j} e^{-(x_i + x_j) t} a_i a_j = - \left(\sum_i e^{-x_i t} a_i\right)^2 \le 0$. Therefore we get $$\sum_{i j} \phi(x_i + x_j) a_i a_j = -\int_{0}^{\infty} \frac{\left(\sum_i e^{-x_i t} a_i\right)^2}{t}\, \mu(t)\, dt \le 0$$

Assume moreover that the $x_i$ are distinct and that the support of the measure $\mu$ is infinite. Then the functions $t\mapsto e^{-x_i t}$ on $[0, \infty)$ are linearly independent on the support of $\mu$. We conclude that the integrand is not identically $0$, and so the inequality is strict if the $a_i$ are not all $0$.

Assume (as above) that the inequality is strict. The quadratic form $a= (a_1, \ldots, a_n) \mapsto \sum_{ij} \phi(x_i + x_j)a_i a_j$ is negative definite when restricted to the subspace $\sum a_i = 0$. Therefore, the matrix $(\phi(x_i+x_j))$ has at least $n-1$ negative eigenvalues (by Cauchy interlacing). Now, if we knew that it also has a positive eigenvalue, we would get the signature $(+, -, -, \ldots, -)$. This is the case, for instance, when one diagonal entry $\phi(x_i + x_i) = \phi(2x_i)$ is $>0$.

Now, how do we come up with examples of functions like this? We need functions $\psi(x) = \phi'(x)$ that are Laplace transforms of positive functions. There is a whole theory of such functions $\psi$: they are called completely (or totally) monotone. For instance, $\psi(x) = x^{\alpha}$ is completely monotone if $\alpha<0$. From here we get the corresponding $\phi(x)= \frac{x^{\alpha+1}}{\alpha+1}$ ($\alpha \ne -1$), or $\phi(x) = \log x$ ($\alpha =-1$).
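As a sanity check on this family (a sketch; the helper `Q` and the sampling scheme are mine), one can test $\phi(s)=2\sqrt{s}$, whose derivative $\phi'(s)=s^{-1/2}$ is completely monotone, alongside $\phi=\log$:

```python
import math
import random

def Q(phi, a, x):
    """Quadratic form sum a_i a_j phi(x_i + x_j)."""
    n = len(a)
    return sum(a[i] * a[j] * phi(x[i] + x[j])
               for i in range(n) for j in range(n))

random.seed(4)
for _ in range(500):
    n = random.randint(2, 5)
    a = [random.uniform(-1.0, 1.0) for _ in range(n)]
    a[-1] -= sum(a)                      # enforce sum(a) = 0
    x = [random.uniform(0.05, 10.0) for _ in range(n)]
    # alpha = -1/2: phi(s) = s^{1/2}/(1/2) = 2*sqrt(s)
    assert Q(lambda s: 2.0 * math.sqrt(s), a, x) <= 1e-10
    # alpha = -1: phi = log
    assert Q(math.log, a, x) <= 1e-10
```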

Note: say $\phi(x) = x^{3/2}$. Its derivative $\psi(x) = \frac{3}{2} x^{1/2}$ is not completely monotone, but the second derivative $\phi''(x) = \frac{3}{4} x^{-1/2}$ is. Is there anything we can say about matrices of the form $(\phi(x_i+x_j))$? There is: we can show that they are positive definite on the subspace $\sum a_i = 0$, $\sum x_i a_i=0$. One simply does the same trick using $\phi''(x)$: express the value $\phi(x)$ using the Taylor formula of order $1$, with integral remainder, based at a fixed point $\bar x$.

$\bf{Added:}$

Another solution: we'll show that (writing $\lambda_i$ for $x_i$) for $0<\epsilon \le \min_i \lambda_i$ the matrix $$\left(\log\frac{1}{\lambda_i+ \lambda_j- \epsilon}\right)$$ is positive semidefinite on the subspace $\sum a_i =0$.

Indeed, we have $$\frac{1}{x+y-\epsilon} = \frac{\epsilon}{x y} \frac{1}{1- \frac{(x-\epsilon)(y-\epsilon)}{x y} } $$ so taking the $\log$ gets us $$\log \frac{1}{x+y-\epsilon} = \log \epsilon- \log x - \log y + \log \frac{1}{1- \frac{(x-\epsilon)(y-\epsilon)}{x y} } $$

Now recall that $\log \frac{1}{1-t} = t + \frac{t^2}{2} + \cdots$

Therefore, with $\beta_i := \frac{\lambda_i-\epsilon}{\lambda_i}$, the matrix
$$\left( \log \frac{1}{1- \frac{(\lambda_i-\epsilon)(\lambda_j-\epsilon)}{\lambda_i \lambda_j} } \right) = \left(\sum_{k\ge 1} \frac{\beta_i^k \beta_j^k}{k}\right)$$ is positive semidefinite, as an infinite sum of positive semidefinite matrices (recall that a matrix of the form $(\alpha_i \alpha_j)$ is positive semidefinite).

The other term is zero: $$\sum_{ij}( \log \epsilon - \log \lambda_i - \log\lambda_j) a_i a_j = 0$$

We get $$\sum_{ij} \log \frac{1}{\lambda_i + \lambda_j - \epsilon} a_i a_j \ge 0$$

Taking $\epsilon \to 0$ we get the desired inequality.

We are done.
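A quick numerical check of the $\epsilon$-perturbed claim (the sampling scheme is mine):

```python
import math
import random

random.seed(5)
for _ in range(500):
    n = random.randint(2, 5)
    lam = [random.uniform(0.5, 5.0) for _ in range(n)]
    eps = random.uniform(0.0, min(lam))      # 0 < eps <= min(lam)
    a = [random.uniform(-1.0, 1.0) for _ in range(n)]
    a[-1] -= sum(a)                          # enforce sum(a) = 0
    q = sum(a[i] * a[j] * math.log(1.0 / (lam[i] + lam[j] - eps))
            for i in range(n) for j in range(n))
    assert q >= -1e-10
```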

orangeskid
  • [+1] Very very nice use of a whole panel of results and tricks of analysis. At the end, one finds a common integral with the solution given by metamorphy using Frullani's integral, but your proof - though longer - gives deep motivation for arriving at that. – Jean Marie Jun 09 '22 at 20:14
  • @Jean Marie: Much appreciated! – orangeskid Jun 10 '22 at 15:55

Just some remarks :

A function $f$ is called exponentially convex on an interval $I$ if:

$$g\left(t\right)=e^{f\left(ta+\left(1-t\right)b\right)}-\left(1-t\right)e^{f\left(b\right)}-te^{f\left(a\right)}\leq 0,\qquad\forall a,b\in I,\ t\in[0,1]$$

We have the equivalent definition if $f(x)$ is continuous:

$$\sum_{i,j=1}^{n}a_{i}a_{j}f\left(\frac{x_{i}+x_{j}}{2}\right)\geq 0,\qquad \forall n\geq 1,\ \forall a_i\in \mathbb{R},\ x_i\in I$$

Now it's not hard to check that $f(x)=\ln(x)$ is both exponentially convex and exponentially concave: in the first definition, $e^{f(x)}=x$ is affine, so $g$ is identically zero.

So with the constraint $\sum_{i,j=1}^{n}a_{i}a_{j}=0$ (equivalently $\sum_{i} a_i = 0$, since $\sum_{i,j}a_ia_j=\left(\sum_i a_i\right)^2$) we have a similarity with your inequality.
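Indeed, on the subspace $\sum_i a_i = 0$ the midpoint form above and the sum in the question coincide for $f=\ln$, since the constant $\ln 2$ drops out; a quick check (sketch, my own names):

```python
import math
import random

random.seed(6)
for _ in range(300):
    n = random.randint(2, 5)
    a = [random.uniform(-1.0, 1.0) for _ in range(n)]
    a[-1] -= sum(a)                      # enforce sum(a) = 0
    x = [random.uniform(0.1, 5.0) for _ in range(n)]
    s_half = sum(a[i] * a[j] * math.log((x[i] + x[j]) / 2)
                 for i in range(n) for j in range(n))
    s_full = sum(a[i] * a[j] * math.log(x[i] + x[j])
                 for i in range(n) for j in range(n))
    # the constant ln 2 drops out because (sum a_i)^2 = 0
    assert abs(s_half - s_full) <= 1e-10
```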