
This is a slightly more advanced variant of this question: instead of getting 6 tails or heads from six consecutive coin throws, we want to hit all 4 different sides of a D4 tetrahedral die. We can assume each side has equal probability $p=1/4$ of coming up and that the results of consecutive throws are independent.

How many times do we need to throw the D4 to say, with 50% probability, that all 4 sides have come up in four consecutive throws (in any order) at least once?

I am interested in both theoretical approaches and practical methods.


Examples of allowed orders:

  • 1,1,2,1,2,3,4,3,2,...
  • 1,2,1,4,3,2,1,3,3,...
  • 4,3,3,3,2,4,1,2,1,...
mathreadler

2 Answers


The system is in state $S_k$, $1\leq k\leq4$, if there are $k$ useful last digits, i.e., if the last $k$ throws are pairwise distinct (and $k$ is maximal with this property). After the first throw we are with probability $1$ in state $S_1$. State $S_4$ is the terminal state. The Markov matrix for this system is $$A=\left[\matrix{{1\over4}&{1\over4}&{1\over4}&0\cr {3\over4}&{1\over4}&{1\over4}&0\cr 0&{1\over2}&{1\over4}&0\cr 0&0&{1\over4}&1\cr}\right]\ .$$ This should be read as follows: When we are in state $S_1$, with probability ${1\over4}$ the next throw leaves us in $S_1$, and with probability ${3\over4}$ the next throw brings us to $S_2$. When we are in state $S_2$, with probability ${1\over4}$ the next throw brings us back to $S_1$, with probability ${1\over4}$ the next throw leaves us in $S_2$, and with probability ${1\over2}$ the next throw brings us to $S_3$. Etcetera.

We now have to compute the sequence $$x^{(n)}:=A^{n-1}\left[\matrix{1\cr0\cr0\cr0\cr}\right]\qquad(n\geq1)\ ,$$ and find the first $n$ for which $x^{(n)}_4\geq50\%$. The computation shows that this is the case for $n=12$.
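
If you want to check this numerically, here is a minimal sketch (Python with NumPy is just my choice of tool; the matrix and the starting vector are exactly the ones above):

```python
import numpy as np

# Column-stochastic matrix A from above:
# entry (i, j) is P(next state = S_{i+1} | current state = S_{j+1}).
A = np.array([
    [1/4, 1/4, 1/4, 0],
    [3/4, 1/4, 1/4, 0],
    [0,   1/2, 1/4, 0],
    [0,   0,   1/4, 1],
])

x = np.array([1.0, 0.0, 0.0, 0.0])  # x^(1): after the first throw we are in S_1
n = 1
while x[3] < 0.5:   # x[3] is the probability of having reached the terminal state S_4
    x = A @ x       # x^(n+1) = A x^(n)
    n += 1
print(n, x[3])      # 12, approximately 0.525
```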

  • Yes this is very much what I was about to try. Nice. Exciting to see that something seeming so complicated can be solved with a rather small matrix. – mathreadler Apr 14 '18 at 12:56

We can do as in your answer in the linked thread.

Consider three consecutive throws. If they are all distinct, the probability that the next throw is a win is 1/4. Otherwise, we get another sequence of three consecutive throws, and we can try again.

At first, I'll consider the Markov chain whose state space is the set of sequences of three consecutive throws, plus a dummy state $\partial$ (corresponding to a win); a transition deletes the first throw of the triple and appends the new throw at the end (or wins).

For instance, $(1,2,3)$ can transition to $(2,3,1)$, $(2,3,2)$, $(2,3,3)$ or $\partial$, with probability $1/4$ each.

Now, let's do better. Since the specific throws do not matter ($(1,2,3)$ is as good as $(2,3,1)$), what matters is the number of distinct throws, and their order. We can thus factorize the Markov chain. Let's reduce the state space to:

$$\Sigma = \{(a,a,a), (a,a,b), (a,b,a), (b,a,a),(a,b,c), \partial\},$$

where for instance $(a,a,b)$ corresponds to $(1,1,2)$, $(1,1,3)$, $(1,1,4)$, $(2,1,1)$, etc. The transition matrix for the corresponding Markov chain is:

$$A := \left( \begin{array}{cccccc} 1/4 & 3/4 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/4 & 1/4 & 1/2 & 0 \\ 0 & 0 & 1/4 & 1/4 & 1/2 & 0 \\ 1/4 & 3/4 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)$$

Note that $\partial$ is an absorbing state: once we have won, we stay at the state "won".

Finally, let us begin the game. After the first three throws, we get a random state in $\Sigma$. Its probability distribution is $u := (1, 3, 3, 3, 6, 0)/16$: of the $4^3 = 64$ equally likely triples, $4$ have all three throws equal, $12$ match each of the patterns $(a,a,b)$, $(a,b,a)$, $(b,a,a)$, and $24$ have three distinct values. Hence, the probability distribution at time $n$ is:

$$uA^{n-3},$$

and the probability that we have won at time $\leq n$ is the probability that we land in the absorbing state $\partial$, that is,

$$(uA^{n-3})_6.$$

Now, you can just input everything into your favorite software, let it compute $(uA^{n-3})_6$ explicitly or approximately, and check when it is at least $1/2$.
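
For concreteness, a minimal sketch of that computation (Python/NumPy is my choice here; it uses nothing beyond the matrix $A$ and vector $u$ defined above):

```python
import numpy as np

# Row-stochastic transition matrix on the reduced state space
# ((a,a,a), (a,a,b), (a,b,a), (b,a,a), (a,b,c), win), as given above.
A = np.array([
    [1/4, 3/4, 0,   0,   0,   0  ],
    [0,   0,   1/4, 1/4, 1/2, 0  ],
    [0,   0,   1/4, 1/4, 1/2, 0  ],
    [1/4, 3/4, 0,   0,   0,   0  ],
    [0,   0,   1/4, 1/4, 1/4, 1/4],
    [0,   0,   0,   0,   0,   1  ],
])

u = np.array([1, 3, 3, 3, 6, 0]) / 16  # distribution after the first three throws

p = u.copy()
n = 3
while p[5] < 0.5:   # p[5] is the mass on the absorbing (winning) state
    p = p @ A       # one more throw: u A^(n-3) -> u A^(n-2)
    n += 1
print(n, p[5])      # gives n = 12, in agreement with the other answer
```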

D. Thomine
  • Wow that was fast, I won't have time to check it or test my version until today evening or so. Thanks for the contribution. – mathreadler Apr 14 '18 at 11:17