4

How do we prove that binomial coefficients are log-concave? A sequence $a_0, \dots, a_n$ is log-concave if $a_k^2 \geq a_{k-1}a_{k+1}$ for all $1 \le k \le n-1$, so for the binomial coefficients this means

$$ \binom{n}{k}^2 \geq \binom{n}{k-1}\binom{n}{k+1} $$

If $n \gg k$, I suppose we get estimates related to the Poisson distribution: $\binom{n}{k} \approx \frac{n^k}{k!}$, and the inequality becomes

$$ \left(\frac{n^k}{k!}\right)^2 \geq \frac{n^{k-1}}{(k-1)!}\, \frac{n^{k+1}}{(k+1)!} \hspace{0.25in}\text{ or, equivalently, }\hspace{0.25in} (k!)^2 \leq (k-1)!\,(k+1)!$$

In my homework I have a product of three binomial coefficients, and I want to know whether I can bound it by the binomial coefficient at the average index:

$$ \binom{n}{\frac{a+b+c}{3}}^3 \geq \binom{n}{a}\binom{n}{b}\binom{n}{c} $$
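One can brute-force small cases (a Python sketch; it assumes Python 3.8+ for `math.comb`, and only triples whose average is an integer are tested):

```python
from itertools import product
from math import comb

# brute-force check of C(n, (a+b+c)/3)^3 >= C(n,a) C(n,b) C(n,c) for small n
for n in range(1, 13):
    for a, b, c in product(range(n + 1), repeat=3):
        if (a + b + c) % 3:
            continue  # only test triples whose average is an integer
        m = (a + b + c) // 3
        assert comb(n, m) ** 3 >= comb(n, a) * comb(n, b) * comb(n, c)
print("no counterexamples found for n <= 12")
```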

cactus314
  • 24,438
  • 5
In your initial inequality divide the left hand side by the right hand side and simplify. – WimC Jun 15 '14 at 12:06

7 Answers

11

Since $$ \binom nk = \left((n+1)\int_0^1 t^k (1-t)^{n-k}\,dt\right)^{-1} $$ (see beta function, and/or this question), the desired inequality is equivalent to $$ \left(\int_0^1 t^k (1-t)^{n-k} \,dt\right)^2 \le \left(\int_0^1 t^{k-1} (1-t)^{n-k+1} \,dt\right) \left(\int_0^1 t^{k+1} (1-t)^{n-k-1} \,dt\right) $$ which is an instance of Hölder's inequality.
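To make the Hölder step explicit (here it is the case $p=q=2$, i.e. the Cauchy–Schwarz inequality), write the integrand as a geometric mean,
$$ t^k (1-t)^{n-k} = \left(t^{k-1}(1-t)^{n-k+1}\right)^{1/2}\left(t^{k+1}(1-t)^{n-k-1}\right)^{1/2}, $$
and apply Cauchy–Schwarz to the two square-root factors.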

3

Hint: Replace them with their factorial expressions, $\displaystyle{a\choose b}=\frac{a!}{b!(a-b)!}$, and then use the basic properties of the factorial function, such as $m!=m(m-1)!$.
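For instance, a computer algebra system can carry out the factorial bookkeeping (a SymPy sketch; it assumes SymPy is installed, and the exact printed form may vary):

```python
from sympy import symbols, binomial, combsimp

n, k = symbols('n k', integer=True, positive=True)
ratio = binomial(n, k)**2 / (binomial(n, k - 1) * binomial(n, k + 1))
print(combsimp(ratio))  # expected to reduce to (k + 1)*(n - k + 1)/(k*(n - k)), which is >= 1
```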

Lucian
  • 48,334
  • 2
  • 83
  • 154
3

First part: Here is an argument with a slight combinatorial flavor. Rewrite the inequality as $$\frac{\binom{n}{k}}{\binom{n}{k-1}}\ge \frac{\binom{n}{k+1}}{\binom{n}{k}}\qquad \dots(A)$$ The LHS is the ratio of the number of $k$-sized subsets to the number of $(k-1)$-sized subsets. We ask: in how many ways can we extend a $(k-1)$-sized subset of $[n]:=\{1,2,\dots ,n\}$ into a $k$-sized subset? This can be done by including one more element which is not part of the $(k-1)$-sized set, that is, in $n-(k-1)$ ways. So each $(k-1)$-sized set gives rise to $n-k+1$ subsets of size $k$. But each subset of size $k$ is formed by extending $k$ different $(k-1)$-sized subsets. Hence the ratio of the number of $k$-sized subsets to the number of $(k-1)$-sized subsets is $\frac{n-k+1}{k}$. A similar argument for the RHS gives $\frac{n-k}{k+1}$.

Now, $$\frac{n-k+1}{k}\ge \frac{n-k}{k+1} \qquad \dots (B)$$ is obvious. Note that we can obtain (B) from (A) by simplifying using the usual definition of $\binom{n}{r}$.
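The double-counting step can be checked by brute force for small parameters (a Python sketch; the values $n=7$, $k=3$ are arbitrary):

```python
from itertools import combinations
from math import comb

n, k = 7, 3
universe = range(1, n + 1)

# count pairs (A, B) with A a (k-1)-sized subset, B a k-sized subset, and A contained in B
pairs = sum(1 for A in combinations(universe, k - 1)
              for B in combinations(universe, k)
              if set(A) <= set(B))
assert pairs == comb(n, k - 1) * (n - k + 1)  # each (k-1)-set extends in n-k+1 ways
assert pairs == comb(n, k) * k                # each k-set arises from k of its (k-1)-subsets
```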

talegari
  • 1,033
2

Here's an injection argument which fixes the error made by @Michael. Consider a pair $(S,T)\in{[n]\choose k-1}\times{[n]\choose k+1}$. Define $S_i:=S\cap[i]$ and similarly define $T_i$. Now consider the sequences $S_0,S_1,\dots,S_n$ and $T_0,T_1,\dots,T_n$. Since $|S_0|=|T_0|=0$, $|S_n|=k-1=|T_n|-2$, and each set increases by at most one in size at each step, there must be an index $I$ for which $|S_I|=|T_I|-1$ (if there are multiple, choose the smallest). Define $S'=T_I\cup(S\setminus S_I)$, $T'=S_I\cup(T\setminus T_I)$ and $f(S,T)=(S',T')$. First, it is clear that $|S'|=|T'|=k$: for example, $T_I\subseteq[I]$ while $S\setminus S_I\subseteq[I+1,n]$, so $|S'|=|T_I|+(|S|-|S_I|)=|S_I|+1+(k-1)-|S_I|=k$ (and similarly for $T'$). Therefore $f$ is in fact a map ${[n]\choose k-1}\times{[n]\choose k+1}\to{[n]\choose k}^2$. That $f$ is an injection follows quickly because the index $I$ can be recovered from $(S',T')$ (it is the smallest index $i$ with $|S'\cap[i]|=|T'\cap[i]|+1$), and from that one can reconstruct $S$ and $T$.
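Here is a small Python sketch of this map $f$ (the function and variable names are mine, for illustration), with a brute-force injectivity check for one small case:

```python
from itertools import combinations

def f(S, T, n):
    """The swap map: (S, T) with |S| = k-1, |T| = k+1 goes to a pair of k-subsets."""
    S, T = set(S), set(T)
    # smallest index I with |S_I| = |T_I| - 1, where S_I = S intersect [I] (it exists, as argued above)
    I = next(i for i in range(n + 1)
             if len(S & set(range(1, i + 1))) == len(T & set(range(1, i + 1))) - 1)
    prefix = set(range(1, I + 1))
    return frozenset((T & prefix) | (S - prefix)), frozenset((S & prefix) | (T - prefix))

# brute-force injectivity check for one small case
n, k = 6, 3
pairs = [(S, T) for S in combinations(range(1, n + 1), k - 1)
                for T in combinations(range(1, n + 1), k + 1)]
images = {f(S, T, n) for S, T in pairs}
assert len(images) == len(pairs)                      # f is injective
assert all(len(A) == k == len(B) for A, B in images)  # the image consists of pairs of k-subsets
```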

munchhausen
  • 1,248
1

Following up on Lucian's answer, start from the definition. We want to prove that

$$ A=\frac{{n\choose k}^2}{{n\choose k-1}{n\choose k+1}} \geq 1. $$

$$ A=\frac{(n!)^2}{((n-k)!)^2(k!)^2}\times\frac{(k-1)!(n-k+1)!}{n!} \times\frac{(k+1)!(n-k-1)!}{n!} $$

$$ =\frac{(n-k+1)!\,(n-k-1)!}{((n-k)!)^2}\times\frac{(k-1)!\,(k+1)!}{(k!)^2}=\frac{n-k+1}{n-k}\times\frac{k+1}{k}\geq 1, $$

which holds for $n\geq k+1$ and $k\geq 1$.
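The resulting closed form can be confirmed with exact integer arithmetic (a short Python check over a small range):

```python
from math import comb

# A = (n-k+1)(k+1) / ((n-k) k), with denominators cleared to keep everything in integers
for n in range(2, 40):
    for k in range(1, n):
        assert comb(n, k)**2 * k * (n - k) == comb(n, k - 1) * comb(n, k + 1) * (k + 1) * (n - k + 1)
```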

eymen
  • 58
1

A simple counting argument:

The left-hand side is the number of ways of choosing two $k$ subsets of the integers $\{1,\dots, n\}$ independently.

The right-hand side is the number of ways of choosing a $k-1$ subset and a $k+1$ subset independently. Such a choice can be converted into two $k$ subsets, since the $k+1$ subset has at least two elements that are not members of the $k-1$ subset: we move the least element of the $k+1$ subset that is not a member of the $k-1$ subset over to the $k-1$ subset. This conversion, as a mapping, is one-to-one: the inverse conversion takes, from such an ordered pair of $k$ subsets in which the second set has at least one element not in the first (and therefore vice versa), the least element of the first $k$ subset that is not a member of the second $k$ subset, and puts it back into the second.

This defines an injection from the product of the $k-1$ subsets and the $k+1$ subsets into the product of the $k$ subsets with itself. Hence the inequality may be deduced.

Michael E2
  • 1,569
0

Here is a very nice injection. I got this from the pre-print "Unimodality and the reflection principle," by Bruce Sagan. https://arxiv.org/abs/math/9712215

Given $n\in \mathbb N$, define a lattice walk of length $n$ to be a sequence of points $(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)\in \mathbb Z\times \mathbb Z$ such that, for each $i\in \{1,\dots,n\}$, we have $$ (x_i,y_i)-(x_{i-1},y_{i-1})\in \{(0,+1),(0,-1),(+1,0),(-1,0)\}. $$ That is, a lattice walk is a sequence of points in the plane traced out by taking $n$ steps, where each step is one unit north, south, east, or west.

Lemma: Let $n\in \mathbb N$, and let $x,y\in \mathbb Z$. As long as $n+x+y$ is even, the number of lattice walks from $(0,0)$ to $(x,y)$ with length $n$ is $$ \binom{n}{\frac12(n+x+y)}\times \binom n{\frac12(n+x-y)} $$ Proof: See https://math.stackexchange.com/a/4058039/.

Using this Lemma, we see that $\binom nk^2$ is the number of lattice walks of length $n$ from $(0,0)$ to $(2k-n,0)$, while $\binom n{k-1}\binom n{k+1}$ is the number of lattice walks of length $n$ from $(0,0)$ to $(2k-n,-2)$. Therefore, we just need an injection from the second family of walks into the first.

Given a lattice walk $L$ from $(0,0)$ to $(2k-n,-2)$ of length $n$, let $\{(x_i,y_i)\}_{i=0}^n$ be the sequence of points that $L$ visits. Since $L$ starts on the line $y=0$ and ends at $(2k-n,-2)$, at some point $L$ must touch the line $y=-1$. Therefore, we can define $j$ to be the smallest index such that $y_j=-1$. We then define $L^\text{refl}$, a reflected version of $L$, as follows.

The first $j$ steps of $L^\text{refl}$ are the same as the first $j$ steps of $L$. This means both paths will visit $(x_j,y_j)$. However, for the remaining $n-j$ steps, every step of $L^\text{refl}$ is a reflection through the $x$-axis of the corresponding step in $L$. That is, after point $(x_j,y_j)$, $L^\text{refl}$ goes down when $L$ goes up, and $L^\text{refl}$ goes up when $L$ goes down. If $L$ goes left or right after point $(x_j,y_j)$, then $L^\text{refl}$ goes the same direction as $L$. The result is that $L^\text{refl}$ will be a new lattice walk with $n$ steps, but while $L$ ended at $(2k-n,-2)$, $L^\text{refl}$ ends at $(2k-n,0)$.

You can prove that the correspondence $L\mapsto L^\text{refl}$ is injective (the index $j$ can be recovered from $L^\text{refl}$ as the first time it touches the line $y=-1$, and reflecting again recovers $L$), giving a combinatorial proof of the log-concavity of the binomial coefficients.
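Here is a small Python sketch of the reflection map (the names and the tiny parameters $n=5$, $k=3$ are mine, for illustration), which also checks the Lemma's counts for that case:

```python
from itertools import product
from math import comb

STEPS = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def visited(walk):
    """Points visited by a walk (a tuple of 'N'/'S'/'E'/'W' steps), starting at the origin."""
    x = y = 0
    pts = [(0, 0)]
    for s in walk:
        dx, dy = STEPS[s]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

def reflect(walk):
    """Flip the vertical steps taken after the walk's first visit to the line y = -1."""
    j = next(i for i, (_, y) in enumerate(visited(walk)) if y == -1)
    flip = {'N': 'S', 'S': 'N', 'E': 'E', 'W': 'W'}
    return walk[:j] + tuple(flip[s] for s in walk[j:])

n, k = 5, 3   # small example with 0 < k < n
low  = [w for w in product(STEPS, repeat=n) if visited(w)[-1] == (2*k - n, -2)]
high = {w for w in product(STEPS, repeat=n) if visited(w)[-1] == (2*k - n, 0)}
assert len(low) == comb(n, k - 1) * comb(n, k + 1)   # count from the Lemma
assert len(high) == comb(n, k) ** 2                  # count from the Lemma
images = {reflect(w) for w in low}
assert len(images) == len(low) and images <= high    # L -> L^refl is injective into the big family
```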

Mike Earnest
  • 75,930