It has been some time since I posted here. I was struggling to find an interesting topic to write about, and what more interesting topic could one find than blockchains! The technology has already conquered the news, the business world, and the research community, and very soon, who knows, it may even be the new

So, as part of my internship with Visa Research, I get to meet a lot of interesting people who are doing cutting-edge work in the field of cryptocurrencies and using blockchains for a multitude of applications. During one of these conversations, I got curious about how people are trying to apply blockchain technology to the field of medical science. However, most of the links I could find online were concerned with securing medical data and making it globally available, while preserving the privacy of the people who helped in the collection of that data. One particular web page does a nice job of listing different ways in which the healthcare industry is benefitting, or can benefit, from blockchain technology. Some of these applications include contract verification between health plan providers and patients, aggregation and availability of patient databases to a large population, trust management between record keepers and record users, and real-time decentralized movement and update of records, among many other ways in which the technology can help bring a unified and secure view of health care across the globe.

Although these applications do seem to be obvious beneficiaries of the technology, I was surprised to find almost no content on the web about the use of blockchains for providing global medical services. One of the key things I have learned since starting to look more deeply into blockchains is that one can treat them as bulletin boards, where information once recorded stays forever, and any addition of new information must be agreed upon for validity by the people *using* the blockchain. With this view, imagine a global clinic where **Dr. Block**, a blockchain disguised as a doctor, treats her patients all around the world. She learns how to perform diagnosis through the experts in the industry and through treating her patients over time. Every time she treats someone or some

Let us say that Dr. Block is an expert in the diagnosis of a disease D. Initially, some highly experienced doctor somewhere in the world, say Dr. Expert_1, who works in a top hospital or a medical research center, trains a model (with the help of his team or students) that helps him classify patients as having disease D or not, based on their symptoms. This model can be trained on patient data in a way that preserves the privacy of these patients. For example, if the model is based on a neural network, then once the training is complete, the weights of the network reveal no information about the data that was used to train it. This doctor is altruistic by nature, so when he observes that his model provides reasonably good accuracy in diagnosing the disease, he wants people all around the world to be able to use it to determine whether they have D or not. He decides to contact Dr. Block about this.

Dr. Block suggests that Dr. Expert_1 upload his trained model (or some form of its hash) on the blockchain. (Let's not get sidetracked by focusing too much on efficiency here; there are many ways of making this model available on the blockchain so that it is easy for people to verify and access it later.) Since Dr. Expert_1 is still human and wants some credit for his work, he charges a fee to anyone who wants to use the model to diagnose their patients, and rewards people who contribute to the model. For this purpose, he also devises a cryptocurrency, say DOCT, and accepts/provides payments in this currency. So once the model (or its hash) is available on the blockchain, anyone in the world can use it in exchange for some DOCTs and get rewarded in DOCTs for contributing to the model.

Say I am Dr. Expert_2 and I reside in a remote village in India where health care is beyond the reach of the people who live here. I come to know that Dr. Block is very accurate in the diagnosis of disease D, and I want to use her expertise to help my patients. I apply for access by paying the required DOCTs (over the same blockchain) and download the trained model. This payment works much like payments in today's cryptocurrencies. Once I download the model, I use it on my patients and help them take appropriate measures. For some extra DOCTs, I use my technical expertise (which I gained as a hobby) to provide this diagnosis as a service through phones and self-service booths around my village. Since no network connectivity is required to use the downloaded model, I can do this in the remotest parts of the village.

Over time, I discover that some patients were diagnosed correctly while others were not. I am sad about the false negatives, but I am happy I was able to take preventive measures before things went out of hand for them. However, I do not want future patients to be falsely diagnosed with the disease D. So, I train the model further using my patients' data and outcomes (incremental training) and propose the improved model to Dr. Block. The consensus algorithm that allows blocks to be added to the blockchain now involves experienced doctors from around the world, who check whether the new model I am proposing provides better accuracy on their own test datasets. If a consensus is reached in favor of my model, Dr. Block accepts it and Dr. Expert_1 pays me my reward in the form of some DOCTs, which I can use later to upgrade my model, if necessary. However, if the experts feel that the new model is not good, they will not include it. This way, it is almost as if all the doctors in the world have joined hands in curing the disease D.
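The consensus step described above can be sketched in a few lines. Everything here is a hypothetical illustration (the accuracy-based voting rule, the quorum, and all the numbers are made up for this post, not part of any existing system): each validator doctor votes YES only if the proposed model is at least as accurate as the current one on that validator's own held-out test set.

```python
# Hypothetical sketch of the consensus check: validator doctors accept a
# proposed model only if it does at least as well as the current model on
# their own test data. All names, thresholds, and numbers are illustrative.

def validator_vote(current_acc, proposed_acc, margin=0.0):
    """One expert's vote: YES iff the proposed model matches or beats the
    current model's accuracy (on that expert's local test set)."""
    return proposed_acc >= current_acc + margin

def consensus(current_accs, proposed_accs, quorum=0.5):
    """Accept the proposed model if more than `quorum` of validators vote YES."""
    votes = [validator_vote(c, p) for c, p in zip(current_accs, proposed_accs)]
    return sum(votes) / len(votes) > quorum

# Example: five experts evaluate both models on their local test sets.
current = [0.81, 0.78, 0.80, 0.83, 0.79]
proposed = [0.85, 0.77, 0.84, 0.86, 0.82]
print(consensus(current, proposed))  # 4 of 5 vote YES -> True
```

In a real deployment the votes themselves would be transactions on the chain, but that machinery is beyond the scope of this sketch.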

Of course, the discussion above is very high level and, in some sense, assumes altruism and goodwill on the part of the people. The typical questions of adversarial environments and malicious tampering that arise with modern cryptocurrencies also map to this setting, but I leave those for people who actually get interested in this idea and want to implement it. In a nutshell, I wanted to convey that blockchains are a little more powerful than they are currently given credit for: one should really think of them as a global service provider that is secure and tamper-evident and that, by its very design, provides a payment model for the services as well.

Hope you liked this post and will give it a thought. I am very open to actually bringing this idea to life so if any of you want to join hands with me, let’s collaborate and try to bring the best of technology and health care in making this world a safer and a healthier place! Until then, ciao!



This post is about an experiment I performed during the Fall 2015 semester under the supervision of Prof. Stephanie Forrest, as part of her course on complex adaptive systems here at UNM. The aim was to evolve a population of *seemingly* random binary sequences into sequences that are provably random with respect to a given set of tests of randomness and independence. My motivation was to explore the possibility of using a genetic algorithm as a wrapper around RNG (random number generator) code. This may be useful in situations where we require that the user not be able to regenerate the sequence(s) given the initial RNG parameters.

For the experiment, the initial population of sequences was generated using the Mersenne Twister inside Python 2.7.10's random.randint(0,1) function and then evolved using a genetic algorithm (GA). The idea was to score these sequences by the p-values of five statistical tests of randomness and independence performed on each of them, and to apply selection, crossover, and mutation to evolve this population into one with a majority of high-fitness individuals.
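As a rough illustration of this kind of fitness function, here is a sketch in Python 3 (the original experiment used Python 2.7.10 and five tests; here the NIST SP 800-22 monobit frequency test stands in for the whole battery, and the fitness is the product of the tests' p-values):

```python
import math
import random

# Sketch of the fitness evaluation: score a binary sequence by the product
# of p-values of a battery of randomness tests. Only the NIST monobit
# (frequency) test is implemented here as a stand-in for the five tests used.

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency test: p-value via the complementary error
    function; near 1 for balanced sequences, near 0 for heavily biased ones."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)  # map 0/1 -> -1/+1 and sum
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

def fitness(bits, tests=(monobit_pvalue,)):
    """Product of p-values over the chosen battery of tests."""
    f = 1.0
    for t in tests:
        f *= t(bits)
    return f

random.seed(0)
seq = [random.randint(0, 1) for _ in range(1000)]
print(0.0 <= fitness(seq) <= 1.0)    # True: a p-value product lies in [0, 1]
print(fitness([1] * 1000) < 1e-6)    # True: an all-ones sequence scores ~0
```

A GA would then rank the population by this score before applying selection, crossover, and mutation.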

The sequences in the initial population were generated using different seeds. This defined the original aim of the experiment: *to evolve a set of sequences that are random enough and show negligible dependence on the seeds used for the random number generator (RNG) that created them.* Thus, the GA was halted as soon as the population evolved to contain at least some given fraction of high-fitness individuals.

An interesting observation was made when the results were obtained. The algorithm showed a strong aversion to mutation (when one would expect that mutation would actually help). Even with a very low per-bit mutation probability, the population did not *seem to* converge to contain high-fitness individuals up to the fraction we desired. This suggested that the space of all *random* sequences (as generated by this RNG) contained very small (maybe even singleton) neutral regions with respect to the fitness function used, and that there was perhaps a high correlation between the bits in the sequence and their positions in it.

The plot below shows the results obtained, where the X-axis represents the number of generations and the Y-axis represents fitness. Mutation was applied per bit with a small probability, and the crossover probability was kept fixed. As can be seen, the maximum fitness in the population decreases with successive generations.

Even more interesting is the fact that a high-probability single-point crossover operation supported evolution in our favor and produced a population of distinct sequences having high fitness values. If it were indeed the case that the neutral regions are small, one would expect crossover not to do so well either. To verify this, I ran some simulations with low crossover and mutation rates and observed that the population hardly evolved. This behavior has made me sleepless since.

Some questions and extensions I am considering for this project are as follows:

- Can p-values be replaced by some other measure of goodness on the tests of randomness/independence?
- Does the order of applying the tests matter? In other words, given a sequence of tests, when does there exist (or not) a sequence that is **not** random with respect to every proper prefix of the tests, but is random with respect to the whole set?
- How about another design of the fitness function, other than just the product of the p-values?
- Does the nature of crossover matter in this setting?
- Is there an analytical explanation to the small sized neutral regions?
- Define a measure for goodness of evolution and prepare plots for this goodness against crossover rates and mutation rates.

I plan to update this post with the actual algorithm I used and plots of the results, along with a more analytical explanation of the situation and possibly of the questions above, so that some of you can suggest edits and help me solve the mystery. Stay tuned!



On a more serious note, this post is about an interesting problem from Peter Winkler's book of mathematical puzzles [PW], brought to my attention by my advisor Jared during our weekly group meeting two weeks back. It highlights the power of majority voting schemes in driving all the wealth of an economy into the hands of only one person. Although the practical scenario may be largely different, the puzzle below does demonstrate a potential scheme, which Jared referred to as a *bug* (and I agree), that exploits democracy in favor of the ruler. Hope you find it an interesting read!

Consider a democracy with n people, one of whom is the leader (elected or otherwise). At the time of his election, each of the n people possesses some initial amount of money. The aim of the puzzle is for the leader to propose a series of money redistributions among the people so that, at the end of the process, he is able to collect as much money from the people as possible. The only caveat is that for whatever change he proposes, he needs a majority of the votes cast to be in favor of the proposal for it to pass. We assume that every person (except the leader) votes YES to any scheme that increases the money he currently possesses, votes NO to any scheme that proposes a smaller amount than what he currently owns, and ABSTAINS otherwise. However, remember that since the leader is one of the people, he has a say in the voting as well.

With respect to the discussion we had in our group meeting and acknowledging all other members who contributed towards coming up with this solution, here is what we think the leader can do best.

**Theorem:** *For a democracy with n people that starts with a total of T units of money distributed among the people, a pseudo-greedy leader is always able to gather T − 1 units of the money to himself after a finite number of rounds of voting.*

Let us first look at an example before we attempt to prove the theorem above. Suppose there are five people in the democracy, each starting with a dollar. For the sake of representation, let this fact be represented by {A:1, B:1, C:1, D:1, E:1}, where, without loss of generality, we assume that A represents the leader.

To begin, the leader suggests the following redistribution of money: {A:1, B:2, C:2, D:0, E:0}. When the voting begins on this scheme, D and E vote NO but B and C vote YES. Then, it is up to the leader to break the tie, and he does so by voting YES since he foresees this scheme helping him in the future. The scheme is then passed, and the money is redistributed as proposed. The next scheme the leader proposes is {A:1, B:4, C:0, D:0, E:0}, on which B votes YES, C votes NO, and D and E ABSTAIN. Again, to break the tie, the leader votes YES and the scheme passes with this redistribution of the money.

In the last step, the leader proposes {A:4, B:0, C:1, D:0, E:0}. For this scheme, B votes NO, C votes YES, and the others ABSTAIN. Hence, the leader votes YES and the scheme passes with the majority vote. Note that the leader has been able to grab $4 out of the total of $5 in the population in just 3 steps, by being non-greedy sometimes. Also note that the remaining $1 cannot go to the leader, since he will never have a majority in favor of any proposition that does this. Thus, the leader ends up taking $4 in just 3 steps, taking full advantage of the majority voting scheme. In fact, since the first two schemes he proposed did not increase his own dollars, he must have come across as generous to some people in the process, building confidence in his leadership, which he would exploit later. Hence the term *pseudo-greedy* in the theorem statement. Sounds like a serious bug to me!
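The three steps above can be checked with a small simulation. The voting rule follows the puzzle's assumptions (YES if the proposal raises your balance, NO if it lowers it, abstain otherwise, with the leader voting YES on his own proposals):

```python
# Simulation of the three-step example: each citizen votes YES if the
# proposal raises their balance, NO if it lowers it, and abstains
# otherwise; the leader (A) always votes YES on his own proposals.

def passes(current, proposal, leader="A"):
    yes = sum(1 for p in current if p != leader and proposal[p] > current[p])
    no = sum(1 for p in current if p != leader and proposal[p] < current[p])
    return yes + 1 > no  # the leader's own YES vote is included

state = {"A": 1, "B": 1, "C": 1, "D": 1, "E": 1}
schemes = [
    {"A": 1, "B": 2, "C": 2, "D": 0, "E": 0},
    {"A": 1, "B": 4, "C": 0, "D": 0, "E": 0},
    {"A": 4, "B": 0, "C": 1, "D": 0, "E": 0},
]
for s in schemes:
    assert passes(state, s)  # every scheme wins its vote
    state = s
print(state)  # {'A': 4, 'B': 0, 'C': 1, 'D': 0, 'E': 0}
```

All three schemes pass, and the leader ends with $4 of the $5 in circulation, exactly as described above.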

Now that we have seen an example of how the leader can drive all but one dollar to himself, the theorem statement above can be easily proved. For the sake of brevity, I will not present a formal proof of the same, but rather give an informal idea on how one can devise a set of schemes which end in a distribution in favor of the leader.

The main idea is for the leader to always keep the decision of voting YES on the schemes he wants to pass with himself. The leader can do this by forcing fewer than half of the people to vote NO (since some NO votes are unavoidable) and by keeping at least as many people voting YES (to obtain a tie, which he then breaks in his favor). The latter requires him to be non-greedy sometimes, but that's OK, since he knows that in the long run this will benefit him. Typical political hypocrisy!

Thus, as demonstrated above, the leader takes money from fewer than half of the people (as much as he can) and transfers that money to the other half, which will vote YES on the redistribution. Once all the money (except the leader's own dollars) reaches one person (who is not the leader), the trick is to give one dollar to a person who doesn't have any money and move the remaining money to the leader. This scheme will always pass because the poor person who gets this one dollar will vote YES, and the majority is achieved when the leader votes. It is always easy to lure the most suffering into voting YES for anything that even remotely seems favorable to them.

Hence, after these steps, the leader has collected all but one dollar from the people in the democracy. An important point to note here is that the puzzle doesn't require any person to know the full distribution of money he is voting for in any step. In other words, as long as a person sees that the leader has proposed something in which his own balance will increase, he votes YES, regardless of what others gain or lose. This is again very representative of real life, in which hardly anyone looks at the full budget proposed by the government and checks the complete money flow before voting for it. Ignorance may be bliss for some, but it is probably driving our money away from us in this scenario.

Well, I hope this puzzle was an interesting mathematical insight into what can go (or is going) wrong with democracies all around. A simple majority voting scheme can be adversarially designed to capture all the money or resources, and numerous such algorithms may already be in place. I would also like to say that this post is not aimed at anyone in particular; it just presents a mathematical puzzle from a curious point of view.

Moral of the story: DO NOT ABSTAIN. Exercise your voting rights even if the leaders propose something that may neither favor nor harm you. As can be seen, if the people in the democracy above had not abstained, the leader would never have been able to gather all the money to himself.

Until next time, have fun and stay tuned!

**References:**

[PW] Winkler, Peter. *Mathematical Puzzles: A Connoisseur's Collection*. AK Peters, 2004.


This week’s post is about an interesting relation between counting and sampling, motivated by an enlightening paper [MD02] by Prof. Martin Dyer. More specifically, the paper introduces dynamic programming as a technique to aid in approximate counting and uniform sampling using a *dart throwing* approach; however, this blog post is only about the example presented at the beginning of the paper, where Prof. Dyer uses the count of the solutions to the 0-1 knapsack problem to sample from the set of these solutions uniformly at random in expected polynomial time (in the number of variables). You are encouraged to read [WK16] to learn more about the knapsack problem. I hope you like this post.

So, the problem at hand is to produce a solution to a given 0-1 knapsack problem, sampled uniformly at random from the set of all such solutions. Why would one want to do so? Well, among the many applications of the ability to sample uniformly at random from combinatorial sets, the ones I encounter the most are in computer simulations and in producing approximations. It is well known that the general knapsack problem is hard to solve, so counting the number of solutions will only be harder, let alone sampling one of these solutions uniformly. Hence, if we can approximate the count using some efficient technique, (almost) uniform sampling becomes readily available. How?

Well, to answer this question, let’s first formalize the problem a little bit. The 0-1 knapsack problem can be written as the inequality a_1 x_1 + a_2 x_2 + … + a_n x_n ≤ b, with x_i ∈ {0, 1} for all i. Here, we assume that the a_i’s and b are all non-negative integers. Note that any single linear inequality in 0-1 variables with rational coefficients can be written in this form [MD02, WL75]. Denote by S the set of all solutions to this inequality. Then, the problem at hand is to sample uniformly at random from S. So, back to the question: what if we don’t know |S|?

One way of sampling from S without any knowledge of its size is to exploit the fact that we can efficiently check whether a given assignment of 0’s and 1’s to our variables satisfies the inequality above, i.e., whether a given 0-1 assignment is in S or not. This is crucial, since we will rely on this capability to accept only those assignments that lie in S and reject the others, in a technique popularly known as *rejection sampling*: repeatedly produce a uniformly random 0-1 assignment of the variables and accept it as soon as it satisfies the inequality. Simple, but not always efficient. Why?
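A minimal sketch of this rejection sampler, on a small illustrative instance (the coefficients below are made up for the example):

```python
import random

# Rejection sampling for 0-1 knapsack solutions: draw uniform 0-1
# assignments and keep the first one satisfying a.x <= b.

def rejection_sample(a, b):
    n = len(a)
    while True:
        x = [random.randint(0, 1) for _ in range(n)]
        if sum(ai * xi for ai, xi in zip(a, x)) <= b:
            return x

random.seed(1)
a, b = [3, 5, 7, 2], 9
x = rejection_sample(a, b)
print(sum(ai * xi for ai, xi in zip(a, x)) <= b)  # True: accepted x satisfies the inequality
```

Every accepted assignment is uniform over S, but the expected number of iterations is 2^n/|S|, which is the efficiency problem discussed next.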

Suppose |S| is very small compared to the total number of assignments. Then, the probability of acceptance in one iteration when we have n variables is |S|/2^n, which can be exponentially small in n. Hence, in expectation, exponentially many rejections happen before the first assignment is sampled, making the algorithm extremely slow. Can we do better? Well, let’s try Prof. Dyer’s approach.

The main idea behind the technique I am now going to talk about is to eliminate the rejections above and directly sample a solution to the problem. Since each solution must be sampled with probability 1/|S|, it will be good to know |S|. Let us say we do. Now, fix a variable, say x_n. What is the probability that x_n = 1 in a satisfying assignment? Well, since we are sampling uniformly at random from S, this probability is equal to the ratio of the number of solutions in which x_n = 1 to |S|. Hence, it will be good to know the number of solutions in which x_n = 1, after which the problem becomes trivial. So, how do we compute this number?

[Enter dynamic programming.]

Let C(k, m) be the number of solutions to the inequality a_1 x_1 + … + a_k x_k ≤ m, where the a_i’s and x_i’s are the same as above. Clearly, we can write C(1, m) = 1 if m < a_1 and 2 otherwise. Also, observe that C(n, b) = |S|. To recursively compute C(k, m), note that if we set x_k = 0, then we are left with the inequality a_1 x_1 + … + a_{k−1} x_{k−1} ≤ m, whose number of solutions is C(k − 1, m). However, if we set x_k = 1, then we are left with the inequality a_1 x_1 + … + a_{k−1} x_{k−1} ≤ m − a_k, whose number of solutions is C(k − 1, m − a_k). Thus, we can write C(k, m) = C(k − 1, m) + C(k − 1, m − a_k) and solve the dynamic program recursively in O(nb) time.
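The recurrence can be sketched directly in Python with memoization; the instance is illustrative:

```python
from functools import lru_cache

# Dynamic program for counting 0-1 knapsack solutions:
# C(k, m) = number of 0-1 solutions of a_1 x_1 + ... + a_k x_k <= m.

def make_counter(a):
    @lru_cache(maxsize=None)
    def C(k, m):
        if m < 0:
            return 0          # no budget left: no solutions
        if k == 0:
            return 1          # the empty assignment satisfies 0 <= m
        # x_k = 0 contributes C(k-1, m); x_k = 1 contributes C(k-1, m - a_k)
        return C(k - 1, m) + C(k - 1, m - a[k - 1])
    return C

a, b = [3, 5, 7, 2], 9
C = make_counter(a)
print(C(len(a), b))  # 9 = |S| for this instance
```

(The base case k = 0 replaces the k = 1 base case in the text; the two formulations agree, since C(1, m) = C(0, m) + C(0, m − a_1).)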

We are not quite done yet. We still do not know the number of solutions in which x_n = 1. At least not directly. However, this is where the genius in [MD02] shines. We sample an assignment from S as follows. With probability C(n − 1, b − a_n)/C(n, b), set x_n = 1, and with probability C(n − 1, b)/C(n, b), set x_n = 0. Once this is done, use the dynamic program above to recursively set the values of the other variables until we are done. The claim is that the resulting assignment is a valid solution to the inequality and is sampled uniformly at random from S. How? Here is why.

To see why the resulting assignment is in S is easy. Once we assign a value to a variable, the dynamic program lets us sample the value of the next variable (or the previous one, whichever way you see it) based on the number of solutions in which that variable is assigned a particular value. In other words, say we just assigned the value 1 to x_k with remaining budget m. This happens with probability proportional to C(k − 1, m − a_k), so if no valid solution extends this partial assignment, that count is zero and the assignment is never made: the dynamic program takes care of it by itself.

To see why the resulting assignment is uniformly random, let us compute the probability that a solution x = σ is produced, for some σ ∈ S. At each step, the probability of assigning σ_k to x_k, given the remaining budget m_k, is C(k − 1, m_k − σ_k a_k)/C(k, m_k), and the numerator of this factor is exactly the denominator of the next (since the remaining budget becomes m_k − σ_k a_k). Hence, the product over all steps telescopes: every numerator cancels with the denominator that follows, leaving Pr[x = σ] = 1/C(n, b) = 1/|S|, which is what we wanted to prove.
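Putting the count and the sampler together, here is a sketch that draws many samples from the illustrative instance used earlier and checks that every one of its solutions shows up:

```python
import random
from collections import Counter
from functools import lru_cache

# Exact sampler for 0-1 knapsack solutions: fix variables one at a time,
# choosing x_k = 1 with probability C(k-1, m - a_k) / C(k, m).

def make_counter(a):
    @lru_cache(maxsize=None)
    def C(k, m):
        if m < 0:
            return 0
        if k == 0:
            return 1
        return C(k - 1, m) + C(k - 1, m - a[k - 1])
    return C

def sample_solution(a, b):
    C = make_counter(a)
    x, m = [0] * len(a), b
    for k in range(len(a), 0, -1):
        # set x_k = 1 with probability (#solutions with x_k = 1) / (#solutions)
        if random.random() < C(k - 1, m - a[k - 1]) / C(k, m):
            x[k - 1], m = 1, m - a[k - 1]
    return tuple(x)

random.seed(0)
a, b = [3, 5, 7, 2], 9
counts = Counter(sample_solution(a, b) for _ in range(9000))
print(len(counts))  # 9: every solution of this instance appears, each ~1000 times
```

No sample is ever rejected, and each of the 9 solutions of this instance is drawn with probability exactly 1/9.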

Hence, we just saw a technique, using counting and dynamic programming, that allows us to sample exactly from a set of objects that would be difficult to sample from in general. This technique extends to many other combinatorial problems, although the dynamic program for counting may not always be straightforward. It should now also be clear that if exact counts are replaced by approximate counts (obtained through some technique), then uniform sampling becomes almost uniform sampling. An important application of this is approximately counting the perfect matchings in a given graph and then sampling one of these matchings (almost) uniformly at random.

The reverse direction is, however, pretty straightforward. If we know how to sample, we can count easily. An exact count may not be possible using this approach, but we can get arbitrarily close to the exact count by using a technique similar to rejection sampling.

I hope this post was helpful in understanding this fascinating connection between sampling and counting of combinatorial objects. It is a huge area of research and lots of interesting applications have been explored. I am trying to learn more in this area and hope to come up with more fascinating examples in the future posts. Until then, ciao!

**References**

[MD02] Dyer, Martin. “Approximate counting by dynamic programming.” In *Proceedings of the thirty-fifth annual ACM symposium on Theory of computing*, pp. 693-699. ACM, 2003.

[WK16] https://en.wikipedia.org/wiki/Knapsack_problem

[WL75] Wolsey, Laurence A. “Faces for a linear inequality in 0–1 variables.” *Mathematical Programming* 8.1 (1975): 165-178.


Today, I will be talking about a classic puzzle that uses linear algebra to attack a combinatorics problem. I first learned of this puzzle during our research group seminar here at UNM, where Jared presented it to us as an exercise. Later on, I read through a couple of formalizations of it, and I will now present my take on the problem, taking helpful references from [MIT307], [UC726], and [UC07]. My interest in this problem originates from the fact that, upon first hearing the statement, it doesn’t strike one as something that can be made easier using linear algebra; the approach baffles my mind every time I think about it. Let’s dive in.

The problem, which is referred to as the Elwyn Berlekamp Theorem in [UC726], is as follows: *In Oddtown, there are n citizens and m clubs satisfying the rules that each club has an odd number of members and each pair of clubs shares an even number of members. We then need to show that m ≤ n, i.e., the number of clubs cannot be more than the number of citizens.*

Before proceeding to the proof, I must point out that this problem has a close connection to Fisher’s inequality [MIT307], which corresponds to a slight modification of the original problem. Apart from this, it is related to the study of designs, set systems with special intersection patterns. [MIT307] shows how such a system can be used to construct a graph that does not have any large clique or independent set. I will briefly talk about these results towards the end of this post.

Let us now focus on proving that the number of clubs in Oddtown cannot exceed the number of citizens. Formally, let C_1, …, C_m be the clubs in Oddtown and let there be n citizens. We start by paying attention to the fact that no two clubs are allowed to share an odd number of members. The entire problem is full of even-odd constraints, which suggests that we should attack it using some form of parity checking. The easiest such setting is to work in the field of characteristic 2. In other words, we will perform all addition and multiplication operations modulo 2 from now on.

Within this setting, for each club C_j, define its *membership vector* v_j ∈ {0, 1}^n, where v_j(i) = 1 if citizen i is a member of club C_j, and 0 otherwise. Intuitively, this is similar to each club maintaining a ledger in which all the citizen names are listed and only the names of that club’s members are marked. These ledgers must satisfy the property that for any pair of clubs, if we compare their ledgers, the number of common members must be even (which allows no common members as well). Mathematically, we can represent this constraint using the dot product of the membership vectors of the clubs.

Notice that if the number of citizens common to any given pair of clubs is even, then, in our field, the dot product of the membership vectors of these clubs is zero. Hence, in the magical world of modulo 2, all clubs have membership vectors that are orthogonal to each other. More importantly, note that none of these vectors is orthogonal to itself: since the number of members in each club is odd, v_j · v_j = 1 for every club. Hence, the puzzle now reduces to asking for the maximum number of such vectors that can exist in F_2^n. This is where our friend, *linear algebra*, kicks in.

Form a matrix whose columns are the membership vectors of the m clubs. These columns are linearly independent: if some linear combination of them were zero, taking the dot product of that combination with any one vector v_j would leave only the coefficient of v_j (all other terms vanish by orthogonality, and v_j · v_j = 1), forcing that coefficient to be zero. Hence the rank of this matrix is exactly m. However, the rank of any matrix can never exceed the number of rows or the number of columns. Voila! We’re done. The number of rows in this matrix is n, which immediately proves that m ≤ n. Try holding your right ear using your left hand while swirling your arm around the back of your head. I felt exactly like that my first time with this puzzle!
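Here is a quick numerical sanity check of this argument, using Gaussian elimination over F_2 on an illustrative club system (n singleton clubs, which satisfy the Oddtown rules with odd sizes and empty pairwise intersections):

```python
# Sanity check of the Oddtown argument: membership vectors of clubs
# satisfying the rules are linearly independent over F_2, so their
# number cannot exceed n. Rank is computed by Gaussian elimination mod 2.

def rank_mod2(rows):
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

n = 6
clubs = [[1 if i == j else 0 for i in range(n)] for j in range(n)]  # n singleton clubs
assert all(sum(c) % 2 == 1 for c in clubs)  # odd club sizes (pairwise intersections are empty)
print(rank_mod2(clubs))  # 6: all n membership vectors are independent
```

Any club system obeying the Oddtown rules would pass the same check, with rank equal to the number of clubs.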

Another way to see this without forming the matrix is to notice that the number of linearly independent vectors in F_2^n cannot be more than the dimension of F_2^n, which is n. Hence, the same result as above. No matter how we prove it, we obtain the bound m ≤ n that we wanted. See! That’s why we must befriend linear algebra.

A small technicality. Can we have exactly n clubs, if not more? The answer is, trivially, yes. Since the number of common members of any pair of clubs must be even, set it to zero! Have a dedicated single-member club for every citizen and you’re done. If you call this cheating, think about this: how cool would it be to have your own clubhouse?!

**Maximality**

Given the construction above, we can prove some more exciting results about the fate of people in Oddtown. Specifically, we can prove that for every n, we can always save the Mayor’s money by building no more than two clubs that satisfy all the constraints. Furthermore, we can prove that this system is maximal, in the sense that no more clubs can be added without violating at least one of the conditions. Let’s see how.

So the task at hand is to divide the citizens into two clubs, say C_1 and C_2, such that (1) the number of citizens common to both clubs is even, (2) the number of citizens in each club is odd, and (3) adding even one more club will violate (1) or (2). If n is even, then one way to divide the citizens is to allocate one citizen to the first club and the remaining n − 1 citizens to the second club. This satisfies (1) and (2). To see that it satisfies (3), add another club, say C_3, to the town. Since this club must have an even number of common members with the single-member club C_1, it cannot contain that citizen, so all the members of C_3 must belong to C_2. This immediately gives a contradiction: (2) requires the number of members of C_3 to be odd, while (1) requires its intersection with C_2, which is all of C_3, to be even. Hence, this distribution of citizens into two clubs is maximal.

What happens when n is odd? Trivially, put all the citizens into one club and the problem is solved. The second club is not needed at all, and the addition of any more clubs is impossible by the same argument as above. Hence, one club is maximal in this case. Thus, in both cases, no more than two clubs are required for a maximal membership of citizens.

**Fisher’s inequality**

We now discuss a result that is closely related to a slight modification of the rules in Oddtown. We remove the restriction on the size of the clubs and require instead that every two clubs share a fixed number, say k, of members. We assume that if two clubs have exactly the same members, then they are the same club. Fisher’s inequality then states that the number of non-empty clubs is at most n, similar to the result above. The proof of this inequality is slightly involved, although the basic principle is the same. We consider the membership vectors of the clubs and prove them to be linearly independent in some field, which in this case will be the field of real numbers R.

To see how this proof works, an important observation needs to be made: *there is at most one club with exactly k members.* Wondering why? Well, let’s assume otherwise and try to get a contradiction. Let there be at least two clubs with exactly k members each. Since they share exactly k members, each of these clubs must consist precisely of those common members, i.e., they have the same members. This contradicts the fact that these clubs are distinct (because of our assumption), and hence we have proved that at most one club can have exactly k members.

Now, with this critical observation in hand, we proceed with the proof as follows. Let $C_1, \dots, C_m$ be the clubs, of sizes $s_1, \dots, s_m$, respectively (assuming each of these is non-zero). The size here refers to the number of members in the club. Represent by $M_{ij}$ the set of common members of clubs $C_i$ and $C_j$. Then, according to the given problem, $|M_{ij}| = \lambda$ for each pair $(i, j)$ with $i \neq j$. We define the membership vectors of the clubs similarly to the proof above as $v_i \in \{0, 1\}^n$, where $v_{ij} = 1$ if citizen $j$ belongs to club $C_i$ and $0$ otherwise. Clearly, in $\mathbb{R}^n$, the dot product of any $v_i$ and $v_j$ is $\lambda$ if $i \neq j$ (and $s_i$ if $i = j$). All we have to do now is to prove that these membership vectors are linearly independent in $\mathbb{R}^n$.

To see this, assume they are not. Then, using standard practice in such proofs, assume there exist real numbers $a_1, \dots, a_m$, not all zero, such that $\sum_{i=1}^m a_i v_i = \mathbf{0}$, where $\mathbf{0}$ is the zero vector in $\mathbb{R}^n$. Then, we must also have $\left\| \sum_{i=1}^m a_i v_i \right\|^2 = 0$, where $\|\cdot\|$ denotes the 2-norm of the vector. Hence, we have $0 = \sum_{i,j} a_i a_j (v_i \cdot v_j)$, since the 2-norm can be written as the dot product of the vector with itself. We can rewrite this sum as $\sum_{i=1}^m a_i^2 (s_i - \lambda) + \lambda \left( \sum_{i=1}^m a_i \right)^2$. Using our critical observation now, each of the two terms on the right of this equation is non-negative (note that $s_i \geq \lambda$ for every club), which implies that they are both identically zero.

Hence, we have (1) $\lambda \left( \sum_{i=1}^m a_i \right)^2 = 0$, which implies $\sum_{i=1}^m a_i = 0$ when $\lambda \neq 0$, and (2) $\sum_{i=1}^m a_i^2 (s_i - \lambda) = 0$. From (2), since at most one club, say $C_1$, can have exactly $\lambda$ members, we get $a_i = 0$ whenever $i \neq 1$. But then, from (1), $a_1 = 0$ as well, which contradicts the fact that not all the $a_i$ are zero. (When $\lambda = 0$, the clubs are pairwise disjoint and non-empty, so linear independence is immediate.) Hence, Fisher's inequality holds.
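
To see the argument in action on a tiny example, here is a sketch (my own, not part of the proof) that builds membership vectors for a family in which every two clubs share exactly one member, and checks that their Gram matrix has a non-zero determinant, so the vectors are linearly independent:

```python
def det(M):
    # determinant via Laplace expansion; fine for the tiny matrices used here
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

n = 4
clubs = [{1, 2}, {1, 3}, {1, 4}]  # every two clubs share exactly one member
vectors = [[1 if c in club else 0 for c in range(1, n + 1)] for club in clubs]
# Gram matrix: the diagonal holds the club sizes, the off-diagonal the shared count
gram = [[sum(a * b for a, b in zip(u, w)) for w in vectors] for u in vectors]
print(gram)       # [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
print(det(gram))  # 4, which is non-zero, so the vectors are independent
```

A non-singular Gram matrix is equivalent to linear independence of the vectors, which is exactly what the proof establishes in general.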

Fisher’s inequality has a number of cool applications. Two interesting examples are presented in [MIT307], where the following are proved: (1) For a fixed $k$, let $G$ be a graph whose vertices are the triples $T \subseteq \{1, \dots, k\}$, with an edge between $T_1$ and $T_2$ if $|T_1 \cap T_2| = 1$. Then $G$ does not contain any clique or independent set of size more than $k$. (2) Suppose $P$ is a set of $n$ points in the plane, not all on one line. Then the pairs of points from $P$ define at least $n$ distinct lines.

I hope you liked this post of mine. My next post will likely be on another interesting puzzle, maybe from one of the CACM journals. Until then, stay tuned. Ciao!

**References**

[MIT307] http://math.mit.edu/~fox/MAT307-lecture15.pdf

[UC726] http://ttic.uchicago.edu/~madhurt/courses/reu2013/class726.pdf

[UC07] http://people.cs.uchicago.edu/~laci/REU07/appuzzles.pdf

]]>

You and an opponent are playing a game using a row of $n$ coins of values $v_1, \dots, v_n$, where $n$ is even. Players take turns selecting either the first or last coin from the row, removing it from the row, and receiving the value of the coin. Assume you play first. Following are some examples, assuming optimal play for both players:

- 2, 4, 8, 10 : You can collect maximum value 14 (10 + 4).
- 8, 20, 3, 2 : You can collect maximum value 22 (2 + 20).

Assume that your opponent does not always play optimally. In particular, if $i$ coins remain, then they choose the optimal move with probability $p_i$ (for example, $p_i$ may decrease as $i$ grows). Describe an algorithm to optimize your expected winnings.

Sounds challenging, right! This was one of the bonus problems in the final exam, which still gives me chills every time I look at it. However, with a very clever use of dynamic programming, this problem can be solved in a nice way. I will first focus on the easy case, where $p_i = 1$ for all $i$. This means the opponent always plays optimally. For this easy case, before jumping to the dynamic programming solution, let us first try to convince ourselves why a greedy approach won’t work (this is standard practice with dynamic programming problems).

**Greedy (non-optimal) solution for $p_i = 1$**

To see why a greedy solution won’t work for this problem, we have to find an example in which the greedy solution fails to provide an optimal solution. I will formalize my proof once we understand this example.

So, the problem at hand is to devise a sequence of coin values on which the greedy approach fails. Recall that the greedy approach picks the coin of highest value from either end of the sequence, alternating between the players. Assuming I go first, consider the sequence $8, 20, 3, 2$, the same sequence as in the example above. The greedy approach makes me pick coins with values 8 and 3, making my total value 11. However, as pointed out above, the optimal value is 22, which is significantly higher. KMN!

So why does the greedy approach not work? To see this, we use the trick of constructing a sequence of coins which, given a value $v$, gives me a total value of $2v$ if I use the greedy approach, but a value of $\frac{5v}{2} - 1$ in the optimal play. This would prove that we need to dig a bit deeper to solve this problem. Here is how this construction works.

Let $v > 2$ be given as the value. (Note: the lower bound on $v$ is just a technicality. The solution can be generalized easily, although I will spare that discussion here. Also, I am assuming that coin values can be arbitrary rational numbers and not necessarily integers. However, I again claim that extending the solution to integer coin values is fairly trivial and hence, I leave that discussion out of this post.)

Consider this sequence: $v+1, \frac{3v}{2}, v-1, v-1$. For example, given $v = 20$, the sequence we are constructing is $21, 30, 19, 19$. Using the greedy solution, the total value I obtain is 21 + 19 = 40, as required. However, an optimal strategy will yield me a value of 19 + 30 = 49. Generalizing this, the greedy strategy yields a total value of $2v$, while the optimal play yields $\frac{5v}{2} - 1$, which is more than $2v$ when $v > 2$. So, what on Earth is this optimal strategy!!
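
As a sanity check of the greedy half of this claim, here is a small sketch (the function names are my own) that simulates greedy play by both players on the constructed sequence:

```python
def greedy_first_player_total(coins):
    # Both players always grab the larger end coin; ties go to the front.
    coins = list(coins)
    totals = [0, 0]
    turn = 0
    while coins:
        if coins[0] >= coins[-1]:
            totals[turn] += coins.pop(0)
        else:
            totals[turn] += coins.pop()
        turn = 1 - turn
    return totals[0]

def counterexample(v):
    # the construction from the text: (v+1, 3v/2, v-1, v-1)
    return [v + 1, 3 * v / 2, v - 1, v - 1]

print(greedy_first_player_total(counterexample(20)))  # 40, i.e. 2v
print(greedy_first_player_total([8, 20, 3, 2]))       # 11, as in the example
```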

[Enter dynamic programming.]

**Dynamic programming solution for $p_i = 1$**

In cases of optimization like these, if greedy doesn’t work, dynamic programming has to. Of course, this comes at the cost of efficiency and the requirement of precomputing the entire solution in the beginning, but hey! correctness is way more important than efficiency here. So, let’s take a deep breath and dive in.

Let $n$ be even and $v_1, \dots, v_n$ be a sequence of coin values. Assuming I play first, define $V(i, j)$ to be the maximum value of the coins I can obtain using an optimal play on the sequence $v_i, \dots, v_j$ only. Clearly, if $i > j$, then $V(i, j) = 0$, and when $i = j$, then $V(i, j) = v_i$. Another easy case is $V(i, i+1) = \max(v_i, v_{i+1})$. We are interested in the case when $j > i + 1$. Let me first state the solution and then justify it:

$V(i, j) = \max\big( v_i + \min(V(i+2, j),\ V(i+1, j-1)),\ \ v_j + \min(V(i+1, j-1),\ V(i, j-2)) \big)$

High five if you just rolled your eyes! Understanding this solution isn’t hard if you keep in mind that both players are playing optimally. This means both of them are trying to maximize their total values. Hence, when I am presented with a choice of two coins, I need to look ahead into what possible strategies my opponent can play next, and then choose a move that maximizes my gain over all those strategies. The *max* in front of the solution captures this fact. I can either choose coin $v_i$ to gain value $v_i$, or I can choose coin $v_j$ to gain value $v_j$. This choice will depend on what my opponent may do next. Let us discuss these two cases separately.

Case I : If I choose coin $v_i$, my opponent has to choose a coin from the values $v_{i+1}, \dots, v_j$. If he chooses $v_{i+1}$, I will then be making my choice from the values $v_{i+2}, \dots, v_j$, in which case I gain $V(i+2, j)$, by definition. Else, if my opponent chooses $v_j$, I will have to make my choice from the values $v_{i+1}, \dots, v_{j-1}$, in which case I gain $V(i+1, j-1)$. Since the play is optimal for both of us, I will assume the worst case for his choice and minimize over these two cases. Hence, the first term in the equation above.

Case II : Works similarly.

Clever, right! The beauty of this solution lies in the fact that even without knowing what my opponent will choose, just knowing his strategy lets me exactly compute what he will go for. Note that using techniques like memoization, I can precompute my optimal value in just $O(n^2)$ time, since there are $O(n^2)$ subproblems $V(i, j)$, each solvable in constant time from smaller ones. Once I have this precomputed solution, all I need to do is make my evil laugh! Bwahahaha!!!
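
The recurrence above can be sketched with memoization in a few lines (my own rendering; `functools.lru_cache` does the memoization for us):

```python
from functools import lru_cache

def optimal_total(coins):
    """V(i, j): the mover picks the end coin maximizing their gain, assuming the
    opponent then replies so as to leave the mover the smaller continuation."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def V(i, j):
        if i > j:
            return 0
        if i == j:
            return coins[i]
        if j == i + 1:
            return max(coins[i], coins[j])
        take_left = coins[i] + min(V(i + 2, j), V(i + 1, j - 1))
        take_right = coins[j] + min(V(i + 1, j - 1), V(i, j - 2))
        return max(take_left, take_right)

    return V(0, len(coins) - 1)

print(optimal_total([2, 4, 8, 10]))     # 14
print(optimal_total([8, 20, 3, 2]))     # 22
print(optimal_total([21, 30, 19, 19]))  # 49, the counterexample sequence for v = 20
```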

**Not so optimal opponent**

All good so far, but beware! Common sense is not so common in this world. Finding such an intelligent player may be rarer than we want. So, why not give the poor opponent some slack and allow him to deviate from optimality with some probability? More formally, let $p_i$ be the probability that, when $i$ coins remain, the opponent chooses to play optimally; with probability $1 - p_i$ he does not.

An important assumption needed to compute my expected gain here is that I know $p_i$ ahead of time for all $i$. If not, I may be in trouble. We can argue about how practical it is to assume this, but we can always *engineer* the game to make this assumption hold. I will simply let my opponent flip a coin in which the probability of heads is $p_i$ when he has to choose a coin from $i$ remaining coins. Refer to my previous post to see that I can always have such a *coin* with me.

Now, assuming that I have found a friend who is ready to bow to my rules of this game, how can I compute my expected value at the end of the game? Well, not so tricky this time! The *min* in the equation will just be replaced by a mixture of *min* and *max*, since with some probability the opponent does not play optimally. Voila! Here we go. Thus, our final solution is that $E(i, j) = 0$ when $i > j$, it is $v_i$ when $i = j$, and it is $\max(v_i, v_{i+1})$ if $j = i + 1$. Remember that I am still playing optimally. For $j > i + 1$, we will now have the following. (Note: here $E(i, j)$ stands for the expected value I obtain from the sequence $v_i, \dots, v_j$, and $j - i$ coins remain when my opponent replies.)

$E(i, j) = \max\big( v_i + p_{j-i} \min(E(i+2, j), E(i+1, j-1)) + (1 - p_{j-i}) \max(E(i+2, j), E(i+1, j-1)),$
$\qquad\qquad v_j + p_{j-i} \min(E(i+1, j-1), E(i, j-2)) + (1 - p_{j-i}) \max(E(i+1, j-1), E(i, j-2)) \big)$
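
A minimal sketch of this expected-value recurrence follows; the interface (probabilities supplied as a map from the number of remaining coins to $p_i$) and all names are my own assumptions:

```python
from functools import lru_cache

def expected_total(coins, p):
    """E(i, j): my expected winnings on coins[i..j] when I play optimally and the
    opponent, facing k coins, replies optimally (min) with probability p[k] and
    non-optimally (max) otherwise.  The map p is assumed known in advance."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def E(i, j):
        if i > j:
            return 0.0
        if i == j:
            return float(coins[i])
        if j == i + 1:
            return float(max(coins[i], coins[j]))
        k = j - i  # coins remaining when the opponent moves

        def reply(a, b):
            # opponent leaves me the worse continuation w.p. p[k], else the better one
            return p[k] * min(a, b) + (1 - p[k]) * max(a, b)

        take_left = coins[i] + reply(E(i + 2, j), E(i + 1, j - 1))
        take_right = coins[j] + reply(E(i + 1, j - 1), E(i, j - 2))
        return max(take_left, take_right)

    return E(0, len(coins) - 1)

# With p[k] = 1 for all k this collapses to the optimal-opponent solution.
always_optimal = {k: 1.0 for k in range(1, 5)}
print(expected_total([8, 20, 3, 2], always_optimal))  # 22.0
```

With an opponent who always blunders (`p[k] = 0`), the same sequence yields an expected value of 28.0, strictly more than the guaranteed 22.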

Double high five if you rolled your eyes again! One more crazy step? Why assume that I play optimally? I have to be realistic, right? Similar to what I am assuming for my opponent, let me assume that with probability $q_i$, I play optimally when given $i$ coins, and not otherwise. I will then use a similar trick, where I will choose the *max* in front of the equation above with probability $q_{j-i+1}$ (the number of coins I face), and I will choose *min* otherwise.

We’re done! So much for choosing coins! As you saw, dynamic programming has the potential to make us rich, of course at the cost of going through the math. An interesting question before I close this post: what if I do not know $p_i$ and/or $q_i$? As stated earlier, I can *engineer my opponent* to play such that I know the probability of optimal play every time he makes a choice, but to be more considerate of his feelings, I need to make myself more adaptive to his moves. Every time I realize that he is not picking the coin which the optimal strategy suggests, I will have to revise my estimate of his probability of erring on the side of non-optimality. We have to agree that such a strategy will only work when the number of coins is large to begin with, in which case I can hope to come close to guessing his exact probability of erring. In any case, realism is not a friend of mathematics, especially when it comes to modeling human behavior.

Thanks for reading this post. Stay tuned for my next post, which is likely to be on some cool application of linear algebra in a puzzle type setting. Until then, ciao!

]]>

Although the problem is not new (to me as well), this time I found myself with a good solution for it, which I thought would be nice to share here. I will try to provide my implementation (in Python 3) of the problems and also discuss some theoretical aspects of the more interesting problem: *What is the minimum number of biased coin flips required to generate an unbiased flip?* I will be referring to the paper [MU08] for this.

**Unbiased Flips from biased coins**

So, let’s talk about the easy case first: given a biased coin, how do I simulate an unbiased flip? Slight formalization: let a coin flip be either $H$ or $T$, where $\Pr[H] = p$ and $\Pr[T] = 1 - p$ for some $p \in (0, 1)$. Thus, a flip is said to be unbiased if $p = \frac{1}{2}$ and biased otherwise.

We have two sub-cases here: first, when $p$ is not known to us, and second, when it is. For the first case, we have to find a way to simulate an unbiased flip without making any assumption about $p$ in our algorithm. An important motivation for dealing with such a case is when dealing with sources of randomness whose *accuracy* is not known. In this respect, I will briefly discuss the following question: *How can we deal with biased coin flips on a bit-precision computer?* In other words, $p$ is never really a *perfect* real number when working on modern computers, due to finite bit precision: we can only make the bias $|p - \frac{1}{2}|$ as small as the smallest floating-point number that the computer can represent in its word size. So, how does our solution change in such a situation? It turns out that the solution for unknown $p$ works here as well. Try reasoning this out yourself (or leave a comment otherwise) after reading the algorithm below.

Consider flipping the biased coin twice. Note that among the four possible outcomes, the events $HT$ and $TH$ are equally likely: each occurs with probability $p(1-p)$. Thus, the following algorithm immediately becomes a candidate solution to our problem.

```python
def unbiasedFlipCase1():
    (x, y) = (flip(), flip())
    if x == y:
        return unbiasedFlipCase1()  # HH or TT: discard and try again
    else:
        return x
```

Here, `flip()` is a function that returns $H$ with probability $p$ and $T$ otherwise. Recall that $p$ is unknown to us, and we don’t care about it either, as long as we are assured that all calls to `flip()` are identical and independent of all previous calls. To see why this solution works, it suffices to show that $\Pr[\text{return } H] = \frac{1}{2}$. This can be easily seen through the geometric sum $\sum_{k \geq 0} \left( p^2 + (1-p)^2 \right)^k p(1-p) = \frac{p(1-p)}{1 - p^2 - (1-p)^2} = \frac{p(1-p)}{2p(1-p)} = \frac{1}{2}$.

Neat, right! So the next time someone flips a coin for you to make some decision and you are not sure if you trust the coin, flip it twice, do as the above algorithm suggests, and you can always be sure of an unbiased outcome. One can also see that the expected number of recursive calls before this function returns something is $\frac{1}{2p(1-p)}$, since each round succeeds with probability $2p(1-p)$. Can we do better than this? It turns out that if $p$ is unknown, we probably cannot.
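
To convince yourself empirically, here is a quick simulation (an iterative rendering of the recursive function above, with Python's `random` module standing in for the biased coin):

```python
import random

def flip(p):
    # one biased flip: 'H' with probability p, 'T' otherwise
    return 'H' if random.random() < p else 'T'

def unbiased_flip(p):
    # von Neumann's trick: flip twice, keep the first flip iff the two differ
    rounds = 0
    while True:
        rounds += 1
        x, y = flip(p), flip(p)
        if x != y:
            return x, rounds

random.seed(1)
p = 0.8
results = [unbiased_flip(p) for _ in range(100000)]
heads = sum(1 for r, _ in results if r == 'H') / len(results)
mean_rounds = sum(r for _, r in results) / len(results)
print(round(heads, 2))        # close to 0.5, despite the bias
print(round(mean_rounds, 2))  # close to 1/(2 * 0.8 * 0.2) = 3.125
```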

Let us now turn our focus to the case of known $p$. Wondering why I am discussing this after the seemingly harder case? The reason(s) will be apparent shortly. Note that when we know $p$, we can take advantage of this knowledge to produce an unbiased bit in a smaller expected number of steps than the above algorithm, which also works perfectly fine here. A little math before we proceed with this thought: let us look at the variance of the number of recursive calls of the above function. Writing $q = 2p(1-p)$ for the success probability of one round, this is easily calculated to be $\frac{1-q}{q^2}$. Hence, by using Chebyshev bounds from [AS72], the probability that the number of recursive calls differs from the mean by more than $t$ is at most $\frac{1-q}{q^2 t^2}$, which decreases quadratically in $t$. Hence, it is unlikely that this number deviates too much from the mean, implying that the solution is efficient. But is it the most efficient we can get? The answer is negative. We can do *much* better than this. Let’s see how.

Recall the concepts of *entropy* and *information* from any of your undergraduate probability theory classes. The entropy of a biased coin flip is given by $H(p) = -p \log_2 p - (1-p) \log_2 (1-p)$, which denotes the average information gained on one flip of this biased coin. You can now see where I am going with this. [MU08] shows that the most information we can get out of a biased coin flip is $H(p)$, which directly implies that about $\frac{1}{H(p)}$ flips of the biased coin are needed, on average, to produce one unbiased flip. (Note that the author of [MU08] was not the first to prove this result. However, I find the discussion in this report very approachable.)

For $p = \frac{1}{2}$, $H(p)$ attains its maximum value of $1$, which gives $\frac{1}{H(p)} = 1$. Hence, one coin flip suffices, which squares well with intuition. As $p$ deviates in either direction, $H(p)$ decreases and hence the number of biased coin flips required increases. Once we have these flip outcomes, a decision is made to return $H$ or $T$ using the Advanced Multilevel algorithm as in [MU08]. Comparing this with the $\frac{1}{2p(1-p)}$ expected cost of the earlier approach, here is the plot I obtained.
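
For concreteness, the two expected costs being compared in the plot can be computed as follows (a sketch of the comparison only, not the plotting code):

```python
from math import log2

def entropy(p):
    # binary entropy H(p): the expected information per biased flip
    return -p * log2(p) - (1 - p) * log2(1 - p)

# expected biased flips per unbiased bit: entropy bound vs. von Neumann's trick
for p in (0.5, 0.6, 0.7, 0.8, 0.9):
    entropy_bound = 1 / entropy(p)
    von_neumann = 1 / (2 * p * (1 - p))
    print(f"p={p}: entropy bound {entropy_bound:.2f}, von Neumann {von_neumann:.2f}")
```

Already at $p = \frac{1}{2}$ the gap is a factor of two: the entropy bound says one flip suffices, while the discard-on-equal trick expects two.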

As you can see, the entropy-based solution always requires fewer biased coin flips. Here, I have only varied $p$ in coarse increments, but the graph remains similar for more refined values as well. However, the entropy-based algorithm is quite complicated to understand (at least for me). If you have any interest in coding theory or information theory, help me understand it!

**Biased Flips from an unbiased coin**

Let us now turn to the case of simulating a biased coin flip from an unbiased one. On the face of it, this problem seems to be more of a mathematical puzzle than one having any perceivable real-world application, but that’s not even remotely true. A ton of applications require us to make decisions which are not equally favorable to all possible outcomes. A very simple example is the following: suppose we want to simulate the outcome of the sum of a pair of six-sided dice. Clearly, all outcomes here are not equally likely. It becomes increasingly impractical to throw two dice every time we want a sample. If only we had a way around!

Unbiased coin flips to the rescue! It turns out that *any* biased coin flip can be simulated by an appropriate number of unbiased coin flips. The only caveat here is that the unbiased coin flips must be perfectly random, or else we land in the territory of another very interesting question: *Given a biased coin with $\Pr[H] = p$, what is the minimum number of coin flips required to generate a biased flip with $\Pr[H] = q$ for some $q \neq p$?* Although I will not spend much time on this question in this post and will assume that perfectly unbiased coin flips are available to me, a brief answer to this question is to consider the maximum amount of information that is obtained about the coin to be simulated from a single flip of the coin used in the simulation. Then, the problem becomes very similar to the technique discussed in [MU08], as described above.

So, how should we pray to the perfection of our unbiased flip in order to be bestowed with a biased gold coin? Let us discuss an *easy-to-understand* algorithm to begin with. Recall that if we were instead given a uniform random number generator (RNG), say `rand()`, which produces a real number in the range $[0, 1]$ uniformly at random, we could easily simulate a biased flip with the following algorithm.

```python
def biasedFlipFromRNG(p):
    u = rand()  # uniform real in [0, 1]
    if u <= p:
        return 1
    else:
        return 0
```

This works because the probability that a uniformly generated random number is no more than $p$ is exactly $p$. However, the world is not so generous as to bestow us with such a godly RNG. Lose no hope! There is an easy fix.

We can simulate the algorithm above *exactly* by treating our sequence of $1$’s and $0$’s, as generated by repeatedly flipping our unbiased coin, as the binary representation of a special number, which I call $\tilde{u}$. I use the tilde sign to partially indicate that it is just an estimate of the actual $u$ in the algorithm above. With this approach, all we have to do is flip the unbiased coin, say $i$ times (starting with $i = 1$, of course), record the outcomes in an ordered tuple $(b_1, \dots, b_i)$, and then obtain our estimate at the end of iteration $i$ as $\tilde{u}_i = \sum_{k=1}^{i} b_k 2^{-k}$. Now comes the trick! Let $\tilde{p}_i$ be the number whose binary expansion matches that of $p$ up to $i$ bits and is all zeros afterwards. For example, if $p = 0.2$ and $i = 2$, then $\tilde{p}_2 = 0$, since the first two bits in the binary expansion of $0.2$ are zero. Compare $\tilde{u}_i$ with $\tilde{p}_i$. If it so happens that $\tilde{u}_i = \tilde{p}_i$, then repeat this process with $i + 1$. Otherwise, return $1$ if $\tilde{u}_i < \tilde{p}_i$ and $0$ if not. The rationale behind this approach is that once $\tilde{u}_i$ becomes less than $\tilde{p}_i$, no further addition of bits will make $\tilde{u}$ larger than $p$. Similarly for the case when $\tilde{p}_i$ is smaller. Only when the two are equal can we make no decision. The algorithm to do the above is summarized in the code below.

```python
def getBinDigit(i, p, q):
    # i-th bit of the binary expansion of p, given that q holds
    # the value of the first i-1 bits
    if q + 2**(-i) <= p:
        return 1
    else:
        return 0

def biasedCoinFlip(p):
    flip = 1
    value = 0
    while True:
        unbiasedFlip = unbiasedCoinFlip()           # fair bit: 0 or 1
        decimalDigit = getBinDigit(flip, p, value)  # next bit of p
        if decimalDigit:
            value = value + 2**(-flip)
        if unbiasedFlip < decimalDigit:
            return 1
        if unbiasedFlip > decimalDigit:
            return 0
        flip = flip + 1
```

Here, the function `unbiasedCoinFlip()` simulates the unbiased coin for us. Note that in the code above, I do not produce the entire tuple of bits for every $i$. Instead, I generate bits only when I require them. A natural question that arises here is the expected number of iterations of the loop before a bit is returned by the function. But first, let us convince ourselves that the probability that this function returns $1$ is indeed $p$.

To prove this, let us assume that the binary expansion of $p$ has bits $p_1, p_2, \dots$. Note that we are *only* considering the bits after the *binary* point, since $p < 1$. Now, the question becomes the following: given some $i$ and a sequence of unbiased bits $b_1, \dots, b_{i-1}$, what is the probability of the event $b_1 = p_1, \dots, b_{i-1} = p_{i-1}$? Clearly, this is equal to $2^{-(i-1)}$, since $p$ is fixed. Hence, with probability $2^{-(i-1)}$, we enter the $i$-th iteration of the loop. Once there, we return $1$ if $b_i < p_i$, which is the same as $b_i = 0$ and $p_i = 1$. This happens with probability $\frac{1}{2}$ if $p_i = 1$ and with probability $0$ otherwise. Thus, the probability that we return $1$ at the $i$-th iteration of the loop can be written as $2^{-(i-1)} \cdot \frac{p_i}{2} = p_i 2^{-i}$.

We are almost there! All that remains is to sum the above expression over $i = 1, 2, \dots$, which gives $\sum_{i \geq 1} p_i 2^{-i} = p$, by definition. Hence, the algorithm is correct. Phew! One last thing before I wrap up. *How efficient is this algorithm?* The expected number of loop iterations before returning is $\sum_{i \geq 1} i \left( p_i 2^{-i} + (1 - p_i) 2^{-i} \right)$, since at iteration $i$ the loop exits returning either $1$ or $0$. Each summand collapses to $i \, 2^{-i}$, and $\sum_{i \geq 1} i \, 2^{-i} = 2$, so in about two iterations on average the algorithm returns a bit, which is pretty fast if you ask me! Analyzing the two return cases separately seems non-trivial, since I am not aware of any closed-form expression for $\sum_{i \geq 1} i \, p_i 2^{-i}$ in terms of $p$. If you know of one, please let me know!
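
The correctness claim is also easy to check empirically. Below is a self-contained rendering of the routine above, with Python's `random` module standing in for the perfect unbiased coin (the stub name is my own):

```python
import random

def unbiasedCoinFlip():
    # stand-in for a perfect fair coin
    return random.randint(0, 1)

def biasedCoinFlip(p):
    # compare unbiased bits against the binary expansion of p, bit by bit
    flip, value = 1, 0.0
    while True:
        b = unbiasedCoinFlip()
        digit = 1 if value + 2 ** (-flip) <= p else 0  # flip-th bit of p
        if digit:
            value += 2 ** (-flip)
        if b < digit:
            return 1
        if b > digit:
            return 0
        flip += 1

random.seed(7)
p = 0.3
trials = 100000
freq = sum(biasedCoinFlip(p) for _ in range(trials)) / trials
print(round(freq, 2))  # close to 0.3
```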

So people! I hope you enjoyed reading this post as much as I enjoyed writing it. Leave your comments below and I will try my best to address them. I have of course left out a lot of details and other cool algorithms to tackle the problems above, but I confess that what I presented here is what I understand best. I would love to hear about more techniques. Do stay tuned for my next post, in which I will talk about one of my other favorite topics: *dynamic programming*. Until then, ciao!

**References :**

[MU08] Mitzenmacher, Michael. “Tossing a biased coin.” (2008).

[AS72] Abramowitz, M. and Stegun, I. A. (Eds.). *Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing.* New York: Dover, p. 11, 1972.

]]>

I will be attending the Mid South Theory Day 2016 at Louisiana State University on 9th December, 2016 to give a talk on our latest result in the field of secure multiparty interactive computation. Stay tuned for an awesome algorithm that compiles a noise-free (asynchronous) protocol into a robust protocol that can tolerate a bit-flipping oblivious adversary with an unbounded budget, with high probability, while incurring an overhead that is within log factors of the conjectured optimal. I will upload the slides here shortly.

]]>

A simple heuristic that strikes now is to order the pages that contain the given input query on the basis of their in-degrees. This way, we are giving more weight to pages that are referenced by a large number of pages in the WWW, indicating a sense of *importance*, or in this case, possible *authority* over the other pages. However, an easy hack around this situation is for the owner of a page to create a bunch of dummy pages whose sole purpose is to have links pointing to the page of interest. This increase in in-degree will enhance the apparent authority of the page to an arbitrary extent and make it appear as a search result for whatever query this owner wants.

The solution to the problem is to try creating a focussed subgraph of the WWW upon which the search will be performed. This graph should be small (to not overload the servers) and must contain many of the strong authorities (pages that are most relevant to the query). Kleinberg makes an attempt at this construction through the use of the principal eigenvectors of $A^T A$ and $A A^T$, where $A$ is the adjacency matrix of a reduced subgraph that is constructed as follows. A primary assumption made here is that these matrices have a non-zero spectral gap, which implies that the graph is expanding [KOT12].

Assume that the WWW is represented as a directed graph $G = (V, E)$, where the pages form the vertices and there is an edge $(p, q)$ if page $p$ has a link to page $q$. Then, given a broad-topic query $\sigma$, determine a subgraph $G_\sigma$ on which the search query will be efficient to run. To obtain $G_\sigma$:

- Select a parameter $t$, say 200, and obtain the $t$ highest-ranked pages for $\sigma$ using a text-based search engine (e.g. Alta Vista at the time). Call this set $R_\sigma$ (augment this with the links between pages in $R_\sigma$ to obtain a graph). Note that $R_\sigma$ is small (by keeping $t$ small) and potentially contains many strong authorities (relying on our faith in the search engine used). However, $R_\sigma$ fails to be rich in relevant pages. This is because of the problems in text-based searching, where queries like “What is a good search engine?” will probably never return any of the existing search engine websites in the output, because most of these websites do not contain the words “search engine” in them.
- Expand $R_\sigma$ by adding pages (and links) that enter and leave $R_\sigma$, and call this new graph $S_\sigma$. We have now added potentially relevant pages to our graph, under the assumption that these “neighbor” pages contain information that is crucial to the query.

Once this graph has been obtained, some more tweaking needs to be done to avoid returning pages that contain many navigational links. For example, a page that contains many links to various parts of itself (frequently used in pages containing long articles with sections) will be assumed to be of high authority, because we are biasing our pages based on their in-degree. Hence, obtain the final graph $G_\sigma$ by removing from $S_\sigma$ all those edges which connect pages in the same domain. Finally, order the pages in $G_\sigma$ in decreasing order of in-degree as an estimate of their authority.

The final trick employed by Kleinberg to weight the authority of a page relative to its neighbors in $G_\sigma$ is to mark as *strong authorities* those pages which have a high in-degree in $G_\sigma$ and a significant overlap in the sets of pages linking to them. The idea is based on the existence of *hubs*, which are major sources of links to these high-in-degree pages. Essentially, this ensures that a page is authoritative if it is pointed to by a strong hub, and a hub is strong if it points to strong authorities. To work around this circularity, Kleinberg runs an iterative algorithm to update the *degree of authority* and *degree of hub-ness*, through parameters called the *authority weight* and *hub weight*, respectively:

- To each page $p$ in $G_\sigma$, assign an authority weight $x_p$ and a hub weight $y_p$.
- Normalize these weights to maintain the invariant $\sum_p x_p^2 = \sum_p y_p^2 = 1$.
- ($\mathcal{I}$-operation) Update the authority weights as $x_p \leftarrow \sum_{(q, p) \in E_\sigma} y_q$. Here, $E_\sigma$ is the set of edges in $G_\sigma$. Essentially, this says that the authority weight of a page is updated as the sum of the hub weights of all the pages pointing to it.
- ($\mathcal{O}$-operation) Similarly, update the hub weights as $y_p \leftarrow \sum_{(p, q) \in E_\sigma} x_q$.
- Normalize both $x_p$ and $y_p$ for each page $p$.
- Repeat steps 3-5 until convergence.
- Return the top $c$ pages (for the required $c$) that have the highest authority weights.
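
The iteration above can be sketched in a few lines of Python (my own toy rendering on a graph given as edge pairs; this is not Kleinberg's code, and the page names are hypothetical):

```python
def hits(edges, iterations=30):
    """Iterate the I- and O-operations on a directed graph given as a list of
    (source, target) pairs; returns (authority, hub) weight dicts."""
    nodes = {u for e in edges for u in e}
    auth = {v: 1.0 for v in nodes}
    hub = {v: 1.0 for v in nodes}
    for _ in range(iterations):
        # I-operation: authority weight = sum of hub weights of in-neighbors
        auth = {v: sum(hub[u] for u, w in edges if w == v) for v in nodes}
        # O-operation: hub weight = sum of authority weights of out-neighbors
        hub = {v: sum(auth[w] for u, w in edges if u == v) for v in nodes}
        # normalize so that the squared weights sum to 1
        for d in (auth, hub):
            norm = sum(x * x for x in d.values()) ** 0.5
            for v in d:
                d[v] /= norm
    return auth, hub

# toy graph: h1 and h2 are hubs pointing at a1 and a2; a1 also gets one stray link
edges = [("h1", "a1"), ("h1", "a2"), ("h2", "a1"), ("h2", "a2"), ("x", "a1")]
auth, hub = hits(edges)
print(max(auth, key=auth.get))  # a1 ends up with the highest authority weight
```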

The paper presents a proof that the weight vectors will converge (assuming a non-zero spectral gap) and provides experimental evidence showing that this typically happens within 20-30 iterations in practice. Clearly, this clever reduction of the search space for the query to a graph that is very likely to contain the strongest authorities and many relevant pages significantly reduces the search time as well as improves the quality of the search results (as is evidenced by the results in the paper). This *cleverness* is hence, undoubtedly, a starting point for the giants of search engines today.

**References :**

[KLEIN99] Kleinberg, Jon M. “Authoritative sources in a hyperlinked environment.” *Journal of the ACM (JACM)* 46.5 (1999): 604-632.

[KOT12] Kotowski, Marcin, and Michał Kotowski. “Lectures on expanders.” (2012).

]]>