Gambler's Ruin and Random Walk

Suppose that two gamblers, Andrew and Bob, repeatedly place $1 bets on a certain game of chance. In each bet, Andrew wins with probability \(p\), while Bob wins with probability \(q = 1 - p\). The outcome of each round is independent, meaning the probability of a particular result in any round is unaffected by the previous rounds.

Let us also suppose that Andrew starts with a total of \(i\) dollars and Bob with \(N - i\) dollars, where \(N\) is a fixed positive integer. The total amount of money in play thus remains constant at \(N\), since every dollar lost by Andrew is gained by Bob, and vice versa.

This scenario can be visualized as a random walk on the integers between \(0\) and \(N\), where each step represents the outcome of a game. Starting at position \(i\), the random walk moves one step to the right with probability \(p\) or one step to the left with probability \(1-p\). The entire sequence of games ends when either Andrew or Bob is ruined, i.e., when the random walk reaches \(0\) (Bob takes all the money) or \(N\) (Andrew takes all the money). What would be the probability that Andrew walks away with all the money?

In this setting, notice that there is a recursive structure: after the first step, the next round of the game is identical, except that Andrew's current amount of money is now either \(i + 1\) or \(i - 1\). Let \(W\) denote the event that Andrew takes all the money, and let \(p_i\) represent the probability that Andrew wins the game given that he starts with \(i\) dollars. By the law of total probability, conditioning on the outcome of the first round, we have:

\(\begin{aligned}p_i &= P(W | \text{A starts at } i, \text{wins round 1}) \cdot p + P(W | \text{A starts at } i, \text{loses round 1}) \cdot q \\ &= P(W | \text{A starts at } i+1) \cdot p + P(W | \text{A starts at } i-1) \cdot q \\ &= p_{i+1} \cdot p + p_{i-1} \cdot q \end{aligned}\)

Here \(0 < p < 1\) and \(q = 1 - p\). This equation holds for all \(i = 1, 2, \ldots, N-1\), together with the boundary conditions \(p_0 = 0\) and \(p_N = 1\)[1].

Now, let's try to solve this. Since this is a linear recurrence, we guess a solution of the form \(p_i = x^i\). Plugging this into the above, we obtain:

\(\begin{aligned}x^i &= p \cdot x^{i+1} + q \cdot x^{i-1}\\ \Rightarrow & \;\; px^2 - x + q = 0 \end{aligned}\) 

This quadratic has roots \(1\) and \(q/p\).
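To see where these roots come from, note that the quadratic factors, using \(p + q = 1\) (and assuming \(p \neq 0\)), as

\(px^2 - x + q = p(x - 1)\left(x - \frac{q}{p}\right)\)

Since the recurrence is linear, any combination \(p_i = A \cdot 1^i + B\left(\frac{q}{p}\right)^i\) of the two root solutions also satisfies it, and the constants \(A\) and \(B\) are pinned down by the boundary conditions:

\(\begin{aligned} p_0 &= A + B = 0 \\ p_N &= A + B\left(\frac{q}{p}\right)^N = 1 \end{aligned}\)

so \(A = -B\) and \(B = 1\big/\left(\left(\frac{q}{p}\right)^N - 1\right)\).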

If this is not a fair game, i.e., \(p \neq 1/2\), these two roots are distinct (\(q/p \neq 1\)), and the solution is:

\(p_i = \frac{1 - \left(\frac{q}{p}\right)^i}{1 - \left(\frac{q}{p}\right)^N}\)

On the other hand, if this is a fair game, i.e., \(p = 1/2\), the two roots coincide at \(1\), and the general solution takes the form \(p_i = A + Bi\). Applying the same boundary conditions gives:

\(p_i = \frac{i}{N}\)

This means that in a fair game with an equal chance of winning, such as coin flips, the gambler's probability of winning equals the fraction of the total money he starts with, \(i/N\).
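The closed-form solutions above are easy to wrap in a short function. Here is a sketch in R (the function name `ruin_prob` is my own choice, not from the original post):

```r
# Probability that Andrew, starting with i dollars out of N total,
# ends up with all the money when he wins each round with probability p.
ruin_prob <- function(i, N, p) {
  if (p == 0.5) {
    i / N                 # fair game: p_i = i / N
  } else {
    r <- (1 - p) / p      # r = q / p
    (1 - r^i) / (1 - r^N)
  }
}

ruin_prob(5, 10, 0.5)   # fair game: 0.5
ruin_prob(5, 10, 0.49)  # even a slight house edge pushes this below 0.5
```

Note how sensitive the result is to \(p\): with equal starting fortunes, a 1% disadvantage per round already costs Andrew roughly 5 percentage points of winning probability.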

Random Walks

A random walk is a mathematical concept that describes a sequence of steps, where each step is determined by random choice. It is a simple yet powerful model used to represent various real-world phenomena, such as stock price movements, diffusion of particles, and decision-making processes. 

The simplest form of random walk is the 1-dimensional random walk, exemplified by the gambler's ruin problem that we've seen earlier. In this scenario, each step involves moving either one step to the left or one step to the right, with the direction chosen randomly. The position after several steps is also a random variable, determined by the number of steps and the directions taken.

The definition of a random walk uses the concept of independent random variables. For simplicity, think of these independent random variables as outcomes of a sequence of random experiments, where the result of one experiment does not influence the outcomes of the others.

Suppose that \(X_1, X_2, \ldots \) is a sequence of independent and identically distributed random variables. A random walk started at \(z\) is the sequence \((S_n)_{n \geq 0}\) where \(S_0 = z\) and

\(S_n = S_{n-1} + X_n, \quad n \geq 1\)

Here, the variable \(S_n\) marks the position of the walk at time \(n\), and \(X_n\) is the random step taken at that time. For example, in the gambler's ruin problem, the random variables \(X_n\) represent the outcome of each round: \(+1\) if Andrew wins the round and \(-1\) if he loses. The sequence \(S_n\) then tracks Andrew's fortune after \(n\) rounds.

Unrolling this recursion, we can rewrite the above as:

\(S_n = z + X_1 + \cdots + X_n\)

for each \(n \geq 1\). Note that while the steps \(X_1, X_2, ... \) are independent as random variables, the actual positions of the walk \(S_0, S_1, ... \) are not.
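The identity \(S_n = z + X_1 + \cdots + X_n\) translates directly into code via a cumulative sum. A minimal sketch in R (the starting point, step probability, and number of steps are illustrative choices):

```r
set.seed(42)   # reproducible randomness

z <- 0         # starting position S_0
n <- 100       # number of steps
p <- 0.5       # probability of a +1 step

# X_1, ..., X_n: i.i.d. steps of +1 (with prob p) or -1 (with prob 1 - p)
x <- sample(c(1, -1), size = n, replace = TRUE, prob = c(p, 1 - p))

# S_0, S_1, ..., S_n: positions of the walk
s <- c(z, z + cumsum(x))

plot(0:n, s, type = "s", xlab = "n", ylab = expression(S[n]))
```

Each entry of `s` differs from the previous one by exactly \(\pm 1\), and `s[1]` is the starting point \(z\), matching the definition above.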

R Simulation
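As a sketch, we can simulate many gambler's-ruin games and compare Andrew's empirical win rate with the formula for \(p_i\) derived earlier (the function name and parameter values here are my own illustrative choices):

```r
set.seed(1)

# Simulate one gambler's ruin game and return TRUE if Andrew wins.
# i: Andrew's starting fortune, N: total money, p: P(Andrew wins a round)
play_game <- function(i, N, p) {
  while (i > 0 && i < N) {
    i <- i + sample(c(1, -1), size = 1, prob = c(p, 1 - p))
  }
  i == N
}

i <- 5; N <- 10; p <- 0.45
n_games <- 10000
wins <- replicate(n_games, play_game(i, N, p))

mean(wins)                      # empirical estimate of p_i

r <- (1 - p) / p
(1 - r^i) / (1 - r^N)           # theoretical p_i from the formula above
```

With 10,000 simulated games, the empirical win rate should land within a percentage point or so of the theoretical value, illustrating that even a modest per-round disadvantage (\(p = 0.45\)) leaves Andrew winning well under half the time.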

 


[1] These boundary conditions make sense because the values \(0\) and \(N\) are the two absorbing states of the game. When \(i = N\), Andrew has already won, so his probability of winning is certain (i.e., \(p_N = 1\)). Similarly, when \(i = 0\), Bob has already won, and Andrew's probability of winning is \(0\) (i.e., \(p_0 = 0\)). Once the game reaches either of these states, it is over, making the probabilities of future wins or losses irrelevant.
