“Irrational” Coin Flips

Recently I came across this very interesting problem: how can you simulate an unbiased coin flip with a biased coin? And what about the other way round?

Although the problem is not new (to me either), this time I found myself with a good solution, which I thought would be nice to share here. I will provide my implementations (in Python 3) and also discuss some theoretical aspects of the more interesting problem: what is the minimum number of biased coin flips required to generate an unbiased flip? I will be referring to the paper [MU08] for this.

Unbiased Flips from a Biased Coin

So, let's talk about the easy case first. Given a biased coin, how do we simulate an unbiased flip? A slight formalization: let a coin flip be either 0 or 1, where Pr(1) = p and Pr(0) = q = 1-p for some p \in (0,1). A flip is said to be unbiased if p = 1/2, and biased otherwise.

We have two sub-cases here: first, when p is not known to us, and second, when it is. For the first case, we have to simulate an unbiased flip without making any assumption about p in our algorithm. An important motivation for this case is dealing with sources of randomness whose accuracy is not known. In this respect, I will briefly discuss the following question: how can we deal with biased coin flips on a bit-precision computer? In other words, on modern computers p is never really a perfect real number, due to finite bit precision; we can only make it as small as the smallest floating-point number that the computer can represent in its word size. So, how does our solution change in such a situation? It turns out that the solution for unknown p works here as well. Try reasoning this out yourself (or leave a comment otherwise) after reading the algorithm below.

Consider flipping the biased coin twice. Among the four possible outcomes, the events 01 and 10 are equally likely: each occurs with probability pq. Thus, the following algorithm immediately becomes a candidate solution to our problem.

def unbiasedFlipCase1():
    # Flip the biased coin twice; the outcomes 01 and 10 are equally likely.
    (x, y) = (flip(), flip())
    if x == y:
        # 00 or 11: no decision; try again.
        return unbiasedFlipCase1()
    else:
        return x

Here, \texttt{flip()} is a function that returns 1 with probability p and 0 otherwise. Recall that p is unknown to us, and we don't care about it either, as long as we are assured that every call to \texttt{flip()} is identically distributed and independent of all previous calls. To see why this solution works, it suffices to show that Pr(\texttt{unbiasedFlipCase1() = 1}) = 1/2. Since (p+q)^2 = 1 gives 1 - (p^2 + q^2) = 2pq, this follows from the geometric sum \sum_{k \geq 0} \left( p^2 + q^2 \right)^k pq = \frac{pq}{1 - \left( p^2 + q^2 \right)} = \frac{pq}{2pq} = \frac{1}{2}.
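To see this in action, here is a minimal sanity check; the body of \texttt{flip()} below and the particular bias 0.3 are assumptions for the demo only, since the algorithm itself never looks at p.

import random

P_HIDDEN = 0.3  # the "unknown" bias; used only to simulate the coin

def flip():
    # One flip of the biased coin: 1 with probability P_HIDDEN, else 0.
    return 1 if random.random() < P_HIDDEN else 0

# With unbiasedFlipCase1() defined as above:
trials = 100000
ones = sum(unbiasedFlipCase1() for _ in range(trials))
print(ones / trials)  # hovers around 0.5, regardless of P_HIDDEN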

Neat, right? So the next time someone flips a coin for you to make some decision and you are not sure you trust the coin, flip it twice, do as the above algorithm suggests, and you can always be sure of an unbiased outcome. One can also see that the expected number of recursive calls before this function returns something is \sum_{k \geq 0} 2k \left( p^2 + q^2 \right)^k pq = \frac{p^2 + q^2}{2pq} = O\left(\frac{1}{p(1-p)}\right). Can we do better than this? It turns out that if p is unknown, we probably cannot.

Let us now turn our focus to the known-p case. Wondering why I am discussing this after the seemingly harder case? The reason(s) will be apparent shortly. When we know p, we can take advantage of this knowledge to produce an unbiased bit in a smaller expected number of steps than the above algorithm, which of course works perfectly fine here as well. A little math before we proceed with this thought: let us look at the variance of the number of recursive calls of the above function. This is easily calculated to be \frac{p^2 + q^2}{4(pq)^2}. Hence, by the Chebyshev bound (see [AS72]), the probability that the number of recursive calls differs from the mean by more than k is at most \frac{p^2 + q^2}{4(pqk)^2}, which decreases quadratically in k. Hence, it is unlikely that this number deviates too much from the mean, implying that the solution is efficient. But is it the most efficient we can get? The answer is negative. We can do much better than this. Let's see how.
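As a quick numerical illustration (the numbers below are my own example), here are the mean, variance, and Chebyshev tail bound for a fairly skewed coin:

def recursionStats(p, k):
    # Each round of two flips settles the bit with probability 2pq,
    # so the number of recursive calls is geometrically distributed.
    q = 1 - p
    mean = (p**2 + q**2) / (2 * p * q)            # expected recursive calls
    var = (p**2 + q**2) / (4 * (p * q)**2)        # variance of that count
    tail = (p**2 + q**2) / (4 * (p * q * k)**2)   # Pr(|X - mean| >= k) bound
    return mean, var, tail

print(recursionStats(0.9, 10))
# approximately (4.56, 25.31, 0.25): even for p = 0.9, deviating from the
# mean by 10 or more calls happens with probability at most about 1/4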

Recall the concept of entropy and information from any of your undergraduate probability theory classes. The entropy of a biased coin flip is given by H(p) = -p\log p - (1-p)\log (1-p), which is the average information gained from one flip of this biased coin. You can now see where I am going with this. [MU08] shows that the most information we can extract from a biased coin is exactly this much, which directly implies that, on average, \left\lceil \frac{1}{H(p)} \right\rceil flips of the biased coin produce one unbiased flip. (Note that the author of [MU08] was not the first to prove this result; however, I find the discussion in this report very approachable.)

For p = \frac{1}{2}, H(p) attains its maximum value of 1, which gives \left\lceil \frac{1}{H(\frac{1}{2})} \right\rceil = 1. Hence, one coin flip suffices, which agrees well with intuition. As p deviates in either direction, H(p) decreases and, hence, the number of biased coin flips increases. Once we have these flip outcomes, a decision is made to return 0 or 1 using the Advanced Multilevel algorithm from [MU08]. Comparing \left\lceil \frac{1}{H(p)} \right\rceil with \left\lceil \frac{p^2 + q^2}{2pq} \right\rceil = \left\lceil \frac{2p^2 - 2p + 1}{2p(1-p)} \right\rceil, here is the plot I obtained.

[Figure 1: \left\lceil \frac{1}{H(p)} \right\rceil versus \left\lceil \frac{p^2 + q^2}{2pq} \right\rceil as p varies over (0,1).]

As you can see, the entropy-based solution always requires fewer biased coin flips. Here, I have only varied p in increments of \frac{1}{100}, but the graph remains similar at finer resolutions as well. However, the entropy-based algorithm is quite complicated to understand (at least for me). If you have any interest in coding theory or information theory, help me understand it!
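If you want to reproduce the comparison yourself, here is a minimal sketch of how I would plot it (assuming matplotlib is available; the labels are my own):

import math
import matplotlib.pyplot as plt

def H(p):
    # Binary entropy of a coin with bias p, in bits.
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

ps = [i / 100 for i in range(1, 100)]
entropy_flips = [math.ceil(1 / H(p)) for p in ps]
two_flip_flips = [math.ceil((p**2 + (1 - p)**2) / (2 * p * (1 - p))) for p in ps]

plt.plot(ps, entropy_flips, label="entropy-based bound")
plt.plot(ps, two_flip_flips, label="two-flip algorithm")
plt.xlabel("p")
plt.ylabel("biased flips per unbiased flip")
plt.legend()
plt.show()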

Biased Flips from an Unbiased Coin

Let us now look at the case of simulating a biased coin flip from an unbiased one. On the face of it, this problem seems to be more of a mathematical puzzle than one having any perceivable real-world application, but that's not even remotely true. A ton of applications require us to make decisions that are not equally favorable to all possible outcomes. A very simple example is the following: suppose we want to simulate the sum of a pair of six-sided dice. Clearly, all outcomes here are not equally likely, and it becomes increasingly impractical to throw two dice every time we want a sample. If only we had a way around!

Unbiased coin flips to the rescue! It turns out that any biased coin flip can be simulated by an appropriate number of unbiased coin flips. The only caveat is that the unbiased coin flips must be perfectly random, or else we land in the territory of another very interesting question: given a biased coin with Pr(1) = p_1, what is the minimum number of coin flips required to generate a biased flip with Pr(1) = p_2 for some p_2 \neq p_1? Although I will not spend much time on this question in this post, and will assume that perfect unbiased coin flips are available to me, a brief answer is to consider the maximum amount of information about the coin to be simulated that is obtained from a single flip of the coin used in the simulation. The problem then becomes very similar to the technique from [MU08] described above.
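To make the two-dice motivation concrete, here is the (non-uniform) target distribution, computed by brute-force enumeration; a yes/no event such as "the sum is 7" can then be sampled with a single biased flip of bias 1/6, with no dice thrown at all.

from collections import Counter

# Probability of each possible sum of two fair six-sided dice.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {s: c / 36 for s, c in sorted(counts.items())}
print(probs)  # e.g. Pr(sum = 7) = 6/36 = 1/6, Pr(sum = 2) = 1/36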

So, how should we pray to the perfection of our unbiased flip in order to be bestowed with a biased gold coin? Let us begin with an easy-to-understand algorithm. Recall that if we were instead given a uniform random number generator (RNG), say \texttt{rand()}, which produces a real number in the range (0,1) uniformly at random, we could easily simulate a biased flip with the following algorithm.

def biasedFlipFromRNG(p):
    # Return 1 with probability p, given a uniform sample from (0,1).
    u = rand()
    if u <= p:
        return 1
    else:
        return 0

This works because the probability that a uniformly generated random number is at most p is exactly p. However, the world is not so generous as to bestow such a Godly RNG upon us. Lose no hope! There is an easy fix.

We can simulate the algorithm above exactly by treating the sequence of 0's and 1's generated by repeatedly flipping our unbiased coin as the binary representation of a special number, which I call \tilde{u}. I use the tilde sign to indicate that it is just an estimate of the actual u in the algorithm above. With this approach, all we have to do is flip the unbiased coin, say n times (starting with n=1, of course), record the outcomes in an ordered tuple (u_1, \dots, u_n), and obtain our estimate at the end of the n^{th} iteration as \tilde{u}_n = \sum_{i = 1}^n u_i 2^{-i}. Now comes the trick! Let p_n \leq p be the number whose binary expansion matches that of p up to n bits and is all zeros afterwards. For example, if p = 0.125 and n = 2, then p_n = 0, since the first two bits in the binary expansion of p are zero. Compare \tilde{u}_n with p_n. If it so happens that \tilde{u}_n = p_n, then repeat this process with n+1. Otherwise, return 1 if \tilde{u}_n < p_n and 0 if not. The rationale behind this approach is that once \tilde{u}_n becomes less than p_n, no further addition of bits can bring \tilde{u} back up to p, since \tilde{u} < \tilde{u}_n + 2^{-n} \leq p_n \leq p; similarly, \tilde{u} stays above p once \tilde{u}_n exceeds p_n. Only when the two are equal can we make no decision. The algorithm to do the above is summarized in the code below.

def getBinDigit(i, p, q):
    # The i-th binary digit of p, given that q holds the value of the
    # first i-1 digits: the digit is 1 iff q + 2**(-i) <= p.
    if q + 2**(-i) <= p:
        return 1
    else:
        return 0

def biasedCoinFlip(p):
    flip = 1     # index of the current bit
    value = 0    # value of the first flip-1 binary digits of p
    while True:
        unbiasedFlip = unbiasedCoinFlip()          # next bit of u-tilde
        binaryDigit = getBinDigit(flip, p, value)  # next bit of p
        if binaryDigit:
            value = value + 2**(-flip)
        if unbiasedFlip < binaryDigit:
            return 1  # u-tilde has fallen below p
        if unbiasedFlip > binaryDigit:
            return 0  # u-tilde has risen above p
        flip = flip + 1  # bits agree so far; keep going

Here, the function \texttt{unbiasedCoinFlip()} simulates the unbiased coin for us. Note that in the code above, I do not produce the entire tuple of bits for p_n up front; instead, I generate bits only when I require them. A natural question that arises here is the expected number of iterations of the \texttt{while} loop before a bit is returned by the function. But first, let us convince ourselves that the probability that this function returns 1 is indeed p.
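For instance, with \texttt{unbiasedCoinFlip()} implemented via Python's random module (my choice for the demo), an empirical check looks like this:

import random

def unbiasedCoinFlip():
    # A perfectly fair bit: random.getrandbits(1) returns 0 or 1.
    return random.getrandbits(1)

trials = 100000
ones = sum(biasedCoinFlip(0.125) for _ in range(trials))
print(ones / trials)  # hovers around 0.125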

To prove this, let the binary expansion of p have bits b_1, b_2, \dots. Note that we only consider the bits after the (binary) point, since p < 1. Now the question becomes the following: given some n \geq 1 and a sequence of unbiased bits u_1, u_2, \dots, u_n, what is the probability of the event \{ u_i = b_i \quad \forall\, 1 \leq i \leq n \}? Clearly, this is 2^{-n}, since p is fixed. Hence, with probability 2^{-n}, we enter the (n+1)^{st} iteration of the \texttt{while} loop, where we return 1 if \sum_{i=1}^{n+1} u_i 2^{-i} < \sum_{i=1}^{n+1} b_i 2^{-i}, which, given that the first n bits agree, is the same as u_{n+1} < b_{n+1}. This happens with probability 0 if b_{n+1} = 0 and probability \frac{1}{2} otherwise. Thus, the probability that we return 1 after k \geq 0 full iterations of the \texttt{while} loop is \frac{2^{-k} b_{k+1}}{2} = b_{k+1} 2^{-(k+1)}.

We are almost there! All that remains is to sum the above expression over k \geq 0, which gives \sum_{k \geq 0} b_{k+1} 2^{-(k+1)} = p, by definition of the binary expansion. Hence, the algorithm is correct. Phew! One last thing before I wrap up: how efficient is this algorithm? Using the expression above, the expected number of loop iterations before returning 1 is \sum_{k \geq 0} k\, b_{k+1} 2^{-(k+1)}. Wait! This seems like a tricky sum. However, we can compute an upper bound for it. Writing k = (k+1) - 1 and using b_{k+1} \leq 1 for all k \geq 0, we get that the average number of loop iterations before the algorithm returns a bit is no more than 2 - p. This may not be tight, but at least it tells us that, on average, the algorithm returns a bit in fewer than two iterations, which is pretty fast if you ask me! A tighter analysis may be non-trivial, since I am not aware of any closed-form expression for b_{k+1} in terms of k and p. If you know of one, please let me know!
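As a quick numerical check (my own sketch, not from [MU08]), we can truncate the sum \sum_{k \geq 0} k\, b_{k+1} 2^{-(k+1)} and compare it against the 2 - p bound:

def bitsOf(p, n):
    # First n bits of the binary expansion of p in (0,1),
    # computed greedily just like getBinDigit above.
    bits, value = [], 0.0
    for i in range(1, n + 1):
        bit = 1 if value + 2**(-i) <= p else 0
        value += bit * 2**(-i)
        bits.append(bit)
    return bits

def truncatedSum(p, n=60):
    # Truncation of sum_{k >= 0} k * b_{k+1} * 2^{-(k+1)}.
    bits = bitsOf(p, n)
    return sum(k * bits[k] * 2**(-(k + 1)) for k in range(n))

for p in (0.125, 1/3, 0.9):
    print(p, truncatedSum(p), "<=", 2 - p)
# e.g. for p = 1/3 the sum is about 0.556, comfortably below 2 - 1/3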

So people! I hope you enjoyed reading this post as much as I enjoyed writing it. Leave your comments below and I will try my best to address them. I have of course left out a lot of details and other cool algorithms for the problems above, but I confess that what I presented here is what I understand best. I would love to hear about more techniques. Do stay tuned for my next post, in which I will talk about another of my favorite topics: dynamic programming. Until then, ciao!

References:

[MU08] Mitzenmacher, Michael. “Tossing a biased coin.” (2008).

[AS72] Abramowitz, M. and Stegun, I. A. (Eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, p. 11, 1972.
