
I came up with this while trying to solve a problem about a Markov chain with the transition matrix $$P=\begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & \cdots & 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \cdots & 0 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3}\\ \vdots & & & & & \vdots\\ \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} \end{bmatrix},$$ which asked me to find $$\lim\limits_{k \rightarrow +\infty}{P^k}\alpha,$$ where $\alpha=(0,1,\cdots,n-1)^\top$.

So I tried to diagonalize $P$ and was surprised to find that it has eigenvalues $1,-\frac{1}{2},\frac{1}{3},\cdots,(-1)^{n-1}\frac{1}{n}$ when $n\leq 7$. So I wonder whether this is true for all $n\in\mathbb{N}^+$ and, if so, how to calculate $$\lim\limits_{k \rightarrow +\infty}{P^k}\alpha.$$
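For concreteness, here is one way to check the conjectured spectrum numerically (a small NumPy sketch; `transition_matrix` is just an ad hoc helper name, not from any library):

```python
import numpy as np

def transition_matrix(n):
    """Row i (1-indexed) puts probability 1/i on each of the last i states."""
    P = np.zeros((n, n))
    for i in range(1, n + 1):
        P[i - 1, n - i:] = 1.0 / i
    return P

for n in range(2, 8):
    eigs = np.sort(np.linalg.eigvals(transition_matrix(n)).real)
    # conjectured spectrum: (-1)^(k-1)/k for k = 1, ..., n
    conjectured = sorted((-1) ** (k - 1) / k for k in range(1, n + 1))
    print(n, np.allclose(eigs, conjectured))
```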

Thanks!

asked by p sms (edited by Sayan Dutta)

2 Answers


$P$ is an irreducible stochastic matrix with $p_{nn}>0$, hence primitive. Therefore, by the Perron–Frobenius theorem, $\lim_{k\to\infty}P^k=\frac{vu^T}{u^Tv}$, where $u$ and $v$ are respectively a left eigenvector and a right eigenvector of $P$ corresponding to the eigenvalue $1$. By inspection, we see that up to scaling, $v=(1,1,\ldots,1)^T$ and $u=(1,2,\ldots,n)^T$. Thus $$ \lim_{k\to\infty}P^k=\frac{2}{n(n+1)}\pmatrix{1&2&\cdots&n\\ 1&2&\cdots&n\\ \vdots&\vdots&&\vdots\\ 1&2&\cdots&n}\tag{1} $$ and $\lim_{k\to\infty}P^k\alpha=\frac{2(n-1)}{3}(1,1,\ldots,1)^T$.
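For a quick numerical sanity check of $(1)$ and of the limit value (a sketch; $P$ is rebuilt ad hoc as in the question):

```python
import numpy as np

n = 6
P = np.zeros((n, n))
for i in range(1, n + 1):
    P[i - 1, n - i:] = 1.0 / i          # row i: uniform on the last i states

v = np.ones(n)                           # right eigenvector of P for eigenvalue 1
u = np.arange(1, n + 1, dtype=float)     # left eigenvector of P for eigenvalue 1
limit = np.outer(v, u) / (u @ v)         # v u^T / (u^T v), i.e. formula (1)

Pk = np.linalg.matrix_power(P, 200)      # P^k for large k
alpha = np.arange(n, dtype=float)        # (0, 1, ..., n-1)^T

print(np.allclose(Pk, limit))                    # True
print(np.allclose(Pk @ alpha, 2 * (n - 1) / 3))  # True: 2(n-1)/3 in every entry
```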

Without using the Perron–Frobenius theorem, one may consider the following matrix first: $$ M(t,n)=\pmatrix{ &&&&-(n-1)&t+n\\ &&&-(n-2)&t+(n-1)\\ &&\cdots&\cdots\\ &-2&t+3\\ -1&t+2\\ t+1}\in\mathbb R^{n\times n}. $$ Let $V\in\mathbb R^{n\times n}$ be the upper triangular matrix of ones. Then $V^{-1}$ is the bidiagonal matrix whose main diagonal entries are ones and whose superdiagonal entries are minus ones. By direct calculation, we get \begin{aligned} M(t,n)V &=\pmatrix{ &&&&-(n-1)&t+1\\ &&&-(n-2)&t+1&t+1\\ &&\cdots&\cdots&\cdots&\cdots\\ &-2&t+1&\cdots&\cdots&t+1\\ -1&t+1&\cdots&\cdots&\cdots&t+1\\ t+1&\cdots&\cdots&\cdots&\cdots&t+1},\\ V^{-1}M(t,n)V &=\left(\begin{array}{ccccc|c} &&&n-2&-(t+n)&0\\ &&n-3&-(t+n-1)&0&0\\ &\cdots&\cdots&\cdots&\cdots&\cdots\\ 1&-(t+3)&0&\cdots&\cdots&0\\ -(t+2)&0&\cdots&\cdots&\cdots&0\\ \hline t+1&\cdots&\cdots&\cdots&\cdots&t+1\end{array}\right)\\ &=\pmatrix{-M(t+1,n-1)&0\\ \ast&t+1}. \end{aligned}

So, recursively, we have \begin{aligned} \operatorname{spectrum}\left(M(t,n)\right) &=\{t+1\}\cup\operatorname{spectrum}\left(-M(t+1,n-1)\right)\\ &=\{t+1,-(t+2)\}\cup\operatorname{spectrum}\left(M(t+2,n-2)\right)\\ &=\{t+1,-(t+2),t+3\}\cup\operatorname{spectrum}\left(-M(t+3,n-3)\right)\\ &\vdots\\ &=\{t+1,\,-(t+2),\,t+3,\,\ldots,\,(-1)^{n-1}(t+n)\}.\\ \end{aligned}

Since a direct computation shows that $PM(0,n)=I$, i.e. $P=M(0,n)^{-1}$, it follows that \begin{aligned} \operatorname{spectrum}(P)=\operatorname{spectrum}\left(M(0,n)^{-1}\right) =\left\{1,\,-\frac12,\,\frac13,\,\ldots,\,\frac{(-1)^{n-1}}{n}\right\}. \end{aligned} Since $1$ is a simple eigenvalue and the moduli of all other eigenvalues are strictly smaller than $1$, we again conclude that $\lim_{k\to\infty}P^k=\frac{vu^T}{u^Tv}$, where $u$ and $v$ are respectively any left and right eigenvectors of $P$ corresponding to the eigenvalue $1$.
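If it helps, the two facts this argument hinges on — that $PM(0,n)=I$ and that $M(0,n)$ has spectrum $\{1,-2,3,\ldots,(-1)^{n-1}n\}$ — can also be checked numerically; the sketch below transcribes $M(t,n)$ ad hoc (the helper name `M` is mine, not standard):

```python
import numpy as np

def M(t, n):
    """M(t, n): row k holds -(n-k) in column n-k and t+(n-k+1) in column n-k+1 (1-indexed)."""
    A = np.zeros((n, n))
    for k in range(1, n):
        A[k - 1, n - k - 1] = -(n - k)
        A[k - 1, n - k] = t + (n - k + 1)
    A[n - 1, 0] = t + 1                  # last row: t+1 in the first column
    return A

n = 7
P = np.zeros((n, n))
for i in range(1, n + 1):
    P[i - 1, n - i:] = 1.0 / i

print(np.allclose(P @ M(0, n), np.eye(n)))   # P = M(0, n)^{-1}

eigs = np.sort(np.linalg.eigvals(M(0, n)).real)
expected = sorted((-1) ** (k - 1) * k for k in range(1, n + 1))
print(np.allclose(eigs, expected))           # spectrum {1, -2, 3, ..., (-1)^(n-1) n}
```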

user1551

About calculating $\lim_{k \to + \infty} P^k \alpha$:

Notice that your state space is finite (I'll take it to be $\{1, \dots, n\}$) and the corresponding Markov chain is irreducible (every state can transition to state $n$, and from $n$ one can transition to any state). So there exists a unique stationary distribution $\pi$ with $\pi P = \pi$.

By solving $\pi P = \pi$ explicitly for small $n$, one can guess that $\pi = (\frac{1}{m}, \frac{2}{m}, \dots, \frac{n}{m})$, where $m = \sum_{k = 1}^n k = \frac{n (n+1)}{2}$; the guess is easily verified, since $(\pi P)_j = \sum_{i = n-j+1}^{n} \frac{i}{m} \cdot \frac{1}{i} = \frac{j}{m} = \pi_j$ for every $j$.
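As a quick numerical confirmation of the guess (a sketch that rebuilds $P$ ad hoc as in the question):

```python
import numpy as np

n = 8
P = np.zeros((n, n))
for i in range(1, n + 1):
    P[i - 1, n - i:] = 1.0 / i     # state i jumps uniformly to the last i states

m = n * (n + 1) / 2
pi = np.arange(1, n + 1) / m       # guessed stationary distribution (1/m, ..., n/m)

print(np.isclose(pi.sum(), 1.0))   # pi is a probability vector
print(np.allclose(pi @ P, pi))     # pi P = pi
```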

Now also notice that $P$ is aperiodic since, for example, $p_{n,n} > 0$.

So, by the convergence theorem, $\lim_{k \to + \infty} P^k = \Pi$, where $\Pi$ is the matrix each of whose rows is $\pi$.

Hence, $\lim_{k \to + \infty} P^k \alpha = \Pi \alpha = c \cdot (1, \dots, 1 )^\top$, where $$c = \pi^\top \alpha = \frac{1}{m} \sum_{k = 1}^n k \cdot (k-1) = \frac{1}{m} \left( \sum_{k = 1}^n k^2 - \sum_{k = 1}^n k \right) = \frac{1}{m} \left(\frac{n (n+1)(2n+1)}{6} - m \right) = \frac{2(n-1)}{3}.$$
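The final arithmetic can be double-checked with exact rational arithmetic (a minimal sketch using Python's `fractions`; the value of `n` is arbitrary):

```python
from fractions import Fraction

n = 10
m = Fraction(n * (n + 1), 2)
# c = (1/m) * sum_{k=1}^{n} k (k - 1)
c = sum(Fraction(k * (k - 1)) for k in range(1, n + 1)) / m
print(c == Fraction(2 * (n - 1), 3))   # True
```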

MXXZ
  • Thank you! I do think this is the most suitable way to solve this problem, but this problem is set before the convergence theorem in my textbook, so I still wonder if it can be solved without that theorem (probably also because the conjecture looks beautiful). – p sms Sep 19 '21 at 11:50
  • Yeah, I understand. Your conjecture is pretty nice, although I am not sure how directly it can be applied here, unfortunately. If you have some knowledge of numerical linear algebra (namely power iteration), it should suffice to show that the largest eigenvalue is $1$ and that the second largest eigenvalue (in absolute value) is strictly smaller. But that's also not very elegant / probably not what you're looking for. – MXXZ Sep 19 '21 at 12:16
  • Yes, my aim is to prove that all the eigenvalues of $P$ are in $(-1,1]$ (the conjecture is actually too strong), so that the convergence theorem can then be proved in this situation. I admit that this is quite strange, but I can't find a better way to avoid using the convergence theorem directly. – p sms Sep 19 '21 at 13:16
  • This might be useful: https://math.stackexchange.com/questions/40320/proof-that-the-largest-eigenvalue-of-a-stochastic-matrix-is-1 – MXXZ Sep 19 '21 at 14:53
  • But it's still possible that $P$ has an eigenvalue $-1$, which would make $\lim\limits_{k\to+\infty}P^k$ diverge. – p sms Sep 19 '21 at 15:05