
So, a lot of the theory of symmetric random walks seems to concentrate on hitting/stopping times and things like that. So I started wondering...

How would I go about calculating the probability of never returning to a given state ($0$, say) in a random walk (starting at $0$, say) where you move $+1$ with probability $p > 1/2$ and $-1$ otherwise?

And then, could this be generalised to a whole interval, something like $[-10,10]$ for example?

Notes: I read somewhere on here that the probability of escaping zero with a random walk identical to mine is $2 - \frac{1}{p}$.

1 Answer


The abstract general recipe looks like this. Given a domain $D$, two sets $A, B \subset D$, and a Markov process with generator $L$, the probability of hitting $A$ before hitting $B$ starting from $x$ solves the following boundary-value problem:

$$\begin{aligned} (Lu)(x) = 0, & \quad x \not \in A \cup B \\ u(x) = 1, & \quad x \in A \\ u(x) = 0, & \quad x \in B. \end{aligned}$$
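For concreteness, here is one way this recipe could be sketched in Python for a finite chain. The dict-based encoding of the chain, the function name, and the fixed-point iteration $u \leftarrow Pu$ (boundary values held fixed) are my own illustration, not the only way to solve the system:

```python
def hitting_probability(P, A, B, tol=1e-12, max_iter=100_000):
    """Probability of hitting A before B for a finite-state Markov chain.

    P maps each state to a list of (next_state, probability) pairs.
    Solves (Lu)(x) = 0 off A and B, with u = 1 on A and u = 0 on B,
    by the fixed-point iteration u <- Pu (boundary values held fixed).
    """
    u = {x: (1.0 if x in A else 0.0) for x in P}
    for _ in range(max_iter):
        delta = 0.0
        for x in P:
            if x in A or x in B:
                continue
            new = sum(prob * u[y] for y, prob in P[x])
            delta = max(delta, abs(new - u[x]))
            u[x] = new
        if delta < tol:
            break
    return u

# Biased walk on {0, ..., n}: +1 w.p. p, -1 w.p. 1-p, absorbed at 0 and n.
p, n = 0.6, 30
P = {k: [(k + 1, p), (k - 1, 1 - p)] for k in range(1, n)}
P[0] = [(0, 1.0)]
P[n] = [(n, 1.0)]
u = hitting_probability(P, A={0}, B={n})
```

The iteration converges because the interior part of $P$ is strictly substochastic once the boundary is absorbing; for larger chains a direct linear solve would be preferable.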

As is, this is not the problem that you want to solve, because your problem has no $B$. To get around this, you can solve a sequence of problems in which you "move $B$ to infinity". For your random walk on $\mathbb{Z}$, where you are interested in hitting zero, you can take $A=\{ 0 \}$, $L=P-I$ where $P$ is the transition probability matrix, and $B=\{ n \}$, where $n$ is a large positive integer. (Since the walk is not symmetric, it matters whether you start to the left or to the right of $0$: the two cases behave differently.) The result is a linear system of equations, which for starting points in $(0, n)$ looks like

$$u_0 = 1 \\ (1-p)u_0 - u_1 + pu_2 = 0 \\ (1-p)u_1 - u_2 + pu_3 = 0 \\ \dots \\ (1-p)u_{n-2} - u_{n-1} + pu_n = 0 \\ u_n = 0.$$
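Since the system is tridiagonal, it can also be solved exactly in $O(n)$ time. A sketch in Python using the Thomas algorithm (the function name is mine, and the coefficient layout assumes exactly the system displayed above):

```python
def solve_hitting_system(p, n):
    """Solve u_0 = 1, u_n = 0, and (1-p) u_{k-1} - u_k + p u_{k+1} = 0
    for k = 1, ..., n-1 with the Thomas (tridiagonal) algorithm."""
    q = 1.0 - p
    m = n - 1                       # interior unknowns u_1, ..., u_{n-1}
    # Forward elimination: sub-diagonal q, diagonal -1, super-diagonal p.
    cp = [0.0] * m                  # modified super-diagonal coefficients
    dp = [0.0] * m                  # modified right-hand side
    cp[0] = p / (-1.0)
    dp[0] = (-q * 1.0) / (-1.0)     # known u_0 = 1 moved to the right-hand side
    for i in range(1, m):
        denom = -1.0 - q * cp[i - 1]
        cp[i] = p / denom
        dp[i] = (0.0 - q * dp[i - 1]) / denom
    # Back substitution.
    u = [0.0] * (n + 1)
    u[0] = 1.0
    u[m] = dp[m - 1]
    for i in range(m - 1, 0, -1):
        u[i] = dp[i - 1] - cp[i - 1] * u[i + 1]
    return u

# Probability of hitting 0 before n = 50, from each starting site, for p = 0.6.
u = solve_hitting_system(0.6, 50)
```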

This can be solved by recurrence relation techniques: for $p \neq 1/2$ the characteristic roots are $1$ and $r = (1-p)/p$, so the general solution is a combination of these exponentials, to which you adjoin the boundary conditions, giving $u_k = (r^k - r^n)/(1 - r^n)$. Sending $n \to \infty$ with $p > 1/2$ (so $r < 1$), you find that $u_k$ converges to $r^k$ for each $k > 0$: the walk started at $k$ hits $0$ with probability $r^k$ and escapes to $+\infty$ with probability $1 - r^k$. In particular, from $k = 1$ the escape probability is $1 - r = 2 - \frac{1}{p}$, which is the formula you quoted; conditioning on the first step from $0$ then gives a never-return probability of $p(1-r) = 2p - 1$. On the other hand, if you start on the left and keep the process biased towards moving to the right, then $u_k$ will converge to $1$ instead.

Note that a closely related recipe works for finding the expected time to hit $0$ starting from a point $x$.
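For expected hitting times the boundary-value problem becomes $(Lu)(x) = -1$ off $A$ with $u = 0$ on $A$. A sketch for the two-sided version with absorption at both $0$ and $n$ (note that the one-sided expected time to hit $0$ from the right is infinite when $p > 1/2$, since the walk escapes with positive probability); the function and the iteration scheme are mine:

```python
def expected_hitting_time(p, n, tol=1e-10, max_iter=1_000_000):
    """Expected number of steps until the walk (+1 w.p. p, -1 otherwise)
    started in {1, ..., n-1} is absorbed at 0 or n.  Solves (Lu)(k) = -1
    in the interior with u(0) = u(n) = 0 by fixed-point iteration."""
    q = 1.0 - p
    u = [0.0] * (n + 1)
    for _ in range(max_iter):
        delta = 0.0
        for k in range(1, n):
            new = 1.0 + p * u[k + 1] + q * u[k - 1]
            delta = max(delta, abs(new - u[k]))
            u[k] = new
        if delta < tol:
            break
    return u

# Sanity check against the symmetric case, where the answer is k * (n - k).
times = expected_hitting_time(0.5, 20)
```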

Ian
  • So from what I can gather, you are using the general trick that you would use for hitting times, where you condition on the first step to get a recurrence relation? – user142671 May 29 '15 at 19:54
  • @user142671 In the case of discrete time, it can be viewed that way. But you can handle the discrete and continuous time cases at the same time by operating in the framework I actually wrote down. I find it really beautiful that the result can be written in the same form in both cases. – Ian May 29 '15 at 19:57
  • @Ian: Neat framework. Can you point to a good source explaining this? – Vivek Bagaria Feb 16 '19 at 06:49
  • @VivekBagaria I unfortunately learned this out of a class that didn't use a book, but the general machinery is called renewal theory. A related formalism that solves the same sorts of equations with different intentions is called transition path theory. – Ian Feb 16 '19 at 14:03