Helping ANTS find food faster using the Golden Ratio!

Hi all,

After a (very) long time, I am back with another interesting post on a puzzle that I recently came across during a research project. It is a cool number-theory puzzle with a very interesting application to efficient foraging, i.e. in designing efficient algorithms to locate resources in an unknown area. I must confess that the credit for this formulation of the problem and most of the solution goes to my advisor Jared (and to Melanie and Matthew Fricke for introducing us to this problem). I only plan to give an overview of the main idea here. Our full paper is under submission and a draft will be available on my webpage soon. I hope you enjoy reading this post as much as I enjoyed writing it!

Caution: As with all my posts where I discuss puzzles, I assume some math background and knowledge of standard computer-science terminology.

Imagine that an army of N ants has been secretly transported to an unknown location (all at the same place) where they have no idea about their surroundings. To make matters worse, these ants are heavily mutilated, in the sense that their capability to smell food from a large distance or communicate with each other, through signals or otherwise, has been completely lost; and they are very hungry (perhaps from the long trip) and need to find something to eat fast. The ants have no idea how many other ants have been subjected to the same destiny (i.e. they do not know N), nor do they know anything about the distribution of food around them. The only capabilities these ants have are to (1) navigate in any direction they want and return to their last location, and (2) be awesome at math! Of course, these ants can detect food when they step on it and detect collisions (without exchanging any information), but they cannot leave any pheromone trails (or similar) to communicate any information to the other ants. Each ant is on its own! — Aah, imagine the misery.

Can we help these poor ants find some food to eat before they all starve?  

Imagine there is a big heap of sugar particles somewhere around where these ants are located (let’s call this location their nest). The distance of this heap and its size are all unknown to the ants. For simplicity, assume that the heap is circular and has diameter D and that the distance of its center to the nest is L. We will also assume that D = O(L). So the task here is for the ants to find the heap, without knowing L or D in advance. Is there a fast way to do so?

Let us start simple. Assume for now that L is known to the ants. How fast do you think we can find the food in this case? — Well, the simplest way is to pick an arbitrary direction, go out to distance L, and then walk along the circle of radius L centered at the nest. Certainly, at some point, the food will be located and the ants can satisfy their hunger. The maximum distance covered by any ant in this case is at most L + 2\pi L = O(L). Not bad! Can we do better? Well, the minimum distance any ant has to cover is at least L - (D/2) = \Omega(L) (recall that D = O(L)) before it can even hope to find food. Thus, asymptotically, this algorithm is tight, and hence there is no scope for improvement.
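As a quick sanity check, here is a tiny sketch of the bound for this go-out-and-circle strategy (the function name `known_L_distance` is mine, purely for illustration):

```python
import math

def known_L_distance(L, D):
    """Worst-case distance for the known-L strategy: walk straight out to
    distance L, then follow the circle of radius L around the nest until
    the heap is hit. In the worst case the ant traverses almost the whole
    circle before entering the heap (the heap saves it roughly D/2)."""
    return L + 2 * math.pi * L - D / 2

# The bound is O(L): the ratio to L stays below the constant 1 + 2*pi.
for L in [10, 100, 1000]:
    print(known_L_distance(L, D=1) / L)
```

The ratio printed never exceeds 1 + 2\pi, matching the O(L) claim.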

Let us look at one more simple problem. Suppose L is unknown but D is known. What can we do now? — Well, intuitively, a larger D means a larger heap of food, which should make it easier to find. However, not knowing L seems to be a difficult roadblock to overcome: no matter how large the diameter D is, since L (and hence L - (D/2)) is unknown, we do not know the minimum distance one must travel out from the nest before food can even be encountered.

Let us think of it this way. The angle that the heap of food subtends at the nest is proportional to D/L. So, if we just move in a random direction (to, say, a very large distance), we locate the food with probability proportional to D/L.

Hmm! This suggests the following algorithm: Start by making S spokes of length 1, at random angles. Here, a spoke means going out from the nest in a straight line and then coming back along that same line. We will determine S in a short while. Once these spokes are made, double the spoke length and repeat. Keep doing this until the food is found.

How long does this process take?

Let us call the stretch of time during which the spoke lengths remain the same an iteration. Thus, in iteration 1, the spoke lengths are 1; in iteration 2, the spoke lengths are 2; in iteration 3, the spoke lengths are 4; and so on. In short, the spoke lengths in iteration i are 2^{i-1}. If it takes a total of M iterations to find the food, then the maximum distance covered is S + 2S + 4S + \dots + 2^{M-1}S = (2^M - 1)S (I am ignoring the factor of 2 needed to account for the return trip along each spoke). Now, we just need to find M and set S such that this distance is as low as we can make it.
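To make the doubling-spoke scheme concrete, here is a small simulation sketch. All names are hypothetical, and the hit test uses simplified geometry: a spoke counts as hitting the heap if it is long enough and its angle falls within the heap's angular window as seen from the nest.

```python
import math
import random

def spoke_search(L, D, S, seed=0, max_iters=60):
    """Simulate the doubling-spoke search: in iteration i, make S spokes
    of length 2**(i - 1) at random angles, stopping when a spoke hits the
    heap (a disk of diameter D whose centre is at distance L from the nest).
    Returns (iterations used, total one-way distance walked), or
    (None, distance) if the food is never found within max_iters."""
    rng = random.Random(seed)
    half_angle = math.asin((D / 2) / L)   # angular half-width of the heap
    target = rng.uniform(0, 2 * math.pi)  # direction of the heap's centre
    total = 0.0
    for i in range(1, max_iters + 1):
        length = 2 ** (i - 1)             # spoke length doubles each iteration
        for _ in range(S):
            theta = rng.uniform(0, 2 * math.pi)
            total += length
            # Simplified hit test: spoke reaches the heap and points at it.
            diff = abs((theta - target + math.pi) % (2 * math.pi) - math.pi)
            if length >= L - D / 2 and diff <= half_angle:
                return i, total
    return None, total
```

With a heap as large as the nest distance (say L = 10, D = 10), each sufficiently long spoke hits with probability 1/6, so the search ends within a handful of iterations; shrinking D makes the angular window, and hence p, proportionally smaller.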

  1. Clearly, the spoke lengths must be at least L – (D/2), or else the ants don’t even reach the food. Thus, M must be \Omega(\log L).
  2. For all iterations before the spokes are long enough to reach the food, the likelihood of finding food along any spoke (or on the way back) is zero.
  3. Once the spokes are long enough, each spoke finds the food with some fixed probability p (assume a single ant, N = 1, here). Of course, p is a function of D/L and hence is unknown, since L is unknown.

Since we are making S spokes in each iteration, the probability that the food is not found in a given iteration (once the spokes are long enough) is (1-p)^S. The number of extra iterations needed is thus geometrically distributed, so the average number of iterations it takes to find the food is at least \log L + \frac{1}{1-(1-p)^S} \approx \log L + \frac{1}{pS} (the approximation holds when pS is small). Since the spoke lengths double each iteration, the distance covered in these many iterations is at least 2^{\log L + \frac{1}{pS}} = L \cdot 2^{\frac{1}{pS}} (ignoring the constants). Now, for this distance to be comparable to what we had when L was known (which, recall, was O(L)), we need the term 2^{\frac{1}{pS}} to be O(1), i.e., we need S = \Omega(1/p). But p is a function of D/L, so we cannot set S this way, since we do not know L. So, say we set S = 1 (cannot get lower than that!). Then, the distance we cover is at least L \cdot 2^{\Omega(1/p)} = \Omega(L + 2^{L/D}), which is clearly much larger than the O(L) we achieved when L was known. Thus, we see that handling unknown D can be expensive if we want to keep the total time low.

Can we do better than \Omega(L + 2^{L/D}) when both L and D are unknown? 

The answer is yes! We can actually go as low as O \left( L + \frac{L^2}{D} \log \frac{L^2}{D} \right), but no lower than that, asymptotically (reporting the result for N = 1 only). Thus, when the diameter is very large, say D = 0.99L, we can find the food in time O(L + L\log L), which is faster than performing a spiral around the nest. In case D is small, say some constant, then the spiral is actually better for N = 1, since O(L + L^2 \log L) is larger than the O(L^2) time taken by the spiral. The problem is that we do not know how D compares to L in advance, and hence we err on the side of a slightly more expensive algorithm, since it optimizes for the case when D is large.
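To see the crossover between the two strategies, here is a quick numeric comparison of the bounds (constants are ignored, so this is only an asymptotic sanity check, not a precise runtime):

```python
import math

def spoke_bound(L, D):
    """The upper bound O(L + (L^2/D) * log(L^2/D)) for the unknown-L,
    unknown-D spoke algorithm, with all constants taken to be 1."""
    t = L * L / D
    return L + t * math.log(t)

def spiral_bound(L):
    """An Archimedean spiral with constant gap covers radius L in O(L^2)."""
    return L * L

L = 10**6
print(spoke_bound(L, 0.99 * L) / spiral_bound(L))  # large heap: spokes win
print(spoke_bound(L, 1.0) / spiral_bound(L))       # tiny heap: spiral wins
```

For D = 0.99L the ratio is far below 1, and for constant D it is above 1, matching the discussion above.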

Let’s get back to our original problem now. I will only discuss spoke-like algorithms in this post — for more general information, stay tuned for the full paper.

Think of our problem as the ant deciding on a sequence of spoke lengths that will find the food, no matter what L and D are, in a small amount of time. Thus, for any given L and D, when the ant searches along spokes specified by this sequence, it must eventually find the food. How do we construct such a sequence?

Let us make some seemingly-easy but important observations.

  1. The sequence must have all positive entries — of course, since spoke lengths have to be positive. The ants cannot travel along a spoke of length -2 🙂 
  2. For every integer m, the sequence must have at least one element of size at least m — if not, then the sequence will never be able to locate the food if L \geq m.
  3. Not only that, but for every integer m, the sequence must have sufficiently many elements of size at least m — or else, it is possible that the food is still not located. The critical question is: how many are sufficiently many? We will return to this in a while.

Thus, basically, the sequence of spokes that the ant makes should contain infinitely many distinct elements, and sufficiently many copies of each element, to be able to handle any L or D. Remember, the challenge is still (1) defining sufficiently many, and (2) doing so without assuming anything about L or D. We can formulate this problem as follows:

Number-stream problem: Construct a sequence of positive integers in such a way that, for any x and y, the total sum of the elements produced before at least y elements of size at least x are seen is small. For example, suppose x = 1 and y = 2. Then, the algorithm must keep producing elements until at least 2 elements of size at least 1 are seen, and we want to minimize the total sum of the elements produced until this condition is met. — Why can't we just produce 1, 1? — Because we do not know x and y in advance 🙂
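In code, the quantity being minimized looks like this (a small hypothetical checker, not our actual construction):

```python
def cost_until(seq, x, y):
    """Cost of a number stream for a given (x, y): the sum of all elements
    emitted up to and including the point where at least y elements of
    size at least x have appeared. `seq` is any iterable of positive ints."""
    total, seen = 0, 0
    for v in seq:
        total += v
        if v >= x:
            seen += 1
            if seen >= y:
                return total
    raise ValueError("stream exhausted before the (x, y) condition was met")

# The fixed stream 1, 1 has cost 2 for (x=1, y=2), but it fails for any
# larger x: a good stream must keep growing to handle every (x, y).
print(cost_until([1, 1], 1, 2))              # 2
print(cost_until([1, 2, 1, 4, 1, 8], 2, 2))  # 1 + 2 + 1 + 4 = 8
```

A good construction keeps this cost within O(xy \log(xy)) for all x and y simultaneously, which is exactly the bound our paper shows to be optimal.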

We call this the number-stream problem. In our paper, we show that the cost of this sequence must be \Omega(xy \log (xy)) for any x and y, and also give an algorithm that matches this lower bound. I will not go into the details of how we achieve this in this post (see the figure on top of this post for a hint), but I hope the connection of this problem to the problem of foraging is clear. I will, however, touch upon what sufficiently many means above, since that is an interesting contribution of our result.

As we saw in the example with known D but unknown L, even if we search along many spokes of sufficient length, that does not guarantee that the food is found, unless we can ensure that these spokes are in some way getting closer to the food over time. In other words, we would like to ensure that the spokes are spread apart from each other so that we search in many different directions. The problem is that if we keep the number of spokes of a fixed length constant, then as the spoke lengths increase, these spokes end up farther away from each other, and hence more and more area is left unexplored at larger distances. Thus, as the spoke lengths increase, we must also increase the number of spokes.

However, we must be very careful here since if we increase the number of spokes by too much, then the ant might have to travel a large distance before the food is found. We need to carefully balance the number of spokes vs. the length of these spokes. This is where the cool trick comes.

Our idea is to ensure that the likelihood of locating the food increases with time, even if all the spokes are kept the same length (which is sufficiently large). To do this, all one needs to do is to make sure that the spokes get closer to each other with time. A naive way to do this is to remember the largest angular gap between any two neighboring spokes taken so far and take the next spoke right in the middle of this gap. This ensures that we always decrease the maximum separation between the spokes.
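A sketch of this naive bisection rule, with angles measured in turns (fractions of a full circle) and a made-up function name; note that it must store every direction made so far:

```python
def next_direction(angles):
    """Naive gap-bisection: given the directions (in turns) of the spokes
    made so far, return the direction in the middle of the largest angular
    gap. Remembering every previous direction is the memory cost at issue."""
    pts = sorted(angles)
    # Gaps between consecutive directions, including the wrap-around gap.
    best_gap, best_start = -1.0, 0.0
    for a, b in zip(pts, pts[1:] + [pts[0] + 1.0]):
        if b - a > best_gap:
            best_gap, best_start = b - a, a
    return (best_start + best_gap / 2) % 1.0

dirs = [0.0]
for _ in range(3):
    dirs.append(next_direction(dirs))
print(dirs)  # [0.0, 0.5, 0.25, 0.75]
```

Each new spoke halves the current largest gap, but the list of stored directions grows without bound.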

But remembering these separations can be very memory-heavy for the poor little ant. If only there was a way to do this without remembering much!

Well, as with all cool results in mathematics, there is always an age-old theorem that comes alive to help. We invoke the famous Three-Gap Theorem (also known as the Steinhaus conjecture, although it is no longer a conjecture) with the Golden Ratio for this. Without going into the details, what this theorem does for us is the following: if the ant just remembers the direction of the last spoke it made, and for the next spoke just adds 2 \pi \phi to that direction (in radians, anti-clockwise), where \phi is the Golden Ratio, then the spokes automatically keep getting closer to each other with time. Not only that, the rate at which they get closer is optimal, as was shown by the ever-famous Knuth in his book on sorting and searching 🙂 There it is! We are done.
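The Three-Gap Theorem is easy to verify numerically. The sketch below (hypothetical helper names) generates directions by repeatedly adding 2\pi\phi and checks that the consecutive angular gaps take at most three distinct values, while the largest gap keeps shrinking:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the Golden Ratio

def golden_directions(n):
    """Directions of the first n spokes: each new spoke adds 2*pi*PHI
    (mod 2*pi) to the previous one. Only one angle needs remembering."""
    theta, out = 0.0, []
    for _ in range(n):
        out.append(theta)
        theta = (theta + 2 * math.pi * PHI) % (2 * math.pi)
    return out

def distinct_gaps(angles, tol=1e-9):
    """Distinct gaps between consecutive directions (wrap-around included),
    grouping values that agree up to floating-point tolerance."""
    pts = sorted(angles)
    gaps = [b - a for a, b in zip(pts, pts[1:] + [pts[0] + 2 * math.pi])]
    uniq = []
    for g in sorted(gaps):
        if not uniq or g - uniq[-1] > tol:
            uniq.append(g)
    return uniq

# For every n, the gaps take at most 3 distinct values (Three-Gap Theorem),
# and the largest gap shrinks as n grows.
for n in [5, 16, 100]:
    print(n, len(distinct_gaps(golden_directions(n))))
```

This is exactly the memory-light behavior the ant needs: one remembered angle, yet provably well-spread spokes.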

The ant now knows how to make the spokes: choose directions based on the Golden Ratio instead of choosing them randomly (except for the first spoke), and use the sequence produced by our algorithm for the spoke lengths, to find the food as fast as is possible with this approach 🙂 We actually conjecture in our paper that our algorithm is not only an optimal spoke algorithm, but also an optimal search algorithm for this problem — we are yet to prove this, though.

As an end note, I must mention that the capitalized ANTS in the title is not a mistake. The problem I discussed here was actually first formalized by Feinerman, Korman, and their colleagues, who termed it the Ants Nearby Treasure Search (ANTS) problem. What we do is optimize their result for large heap sizes and ensure tolerance against failures.

Anyway, I hope you enjoyed this post 🙂 If you did (or have anything you’d like to point out), please let me know in comments 🙂 Till then, tada!

