SuperCooperators. Roger Highfield
regarded as a classic in the field, and deservedly so.
But did Axelrod’s computer tournament have anything to say about the real world? Yes. A real-life example of such a contest was reported in 1987, when Manfred Milinski, now the director of the Max Planck Institute for Evolutionary Biology in Ploen, Germany, studied the behavior of stickleback fish. When a big predator such as a pike looms, one or more of a school of sticklebacks will approach to see how dangerous it is. This “predator inspection” is risky for these scouts, but the information can benefit them as well as the rest of the school—if the interloper is not a predator or if it has just fed and is not hungry, the smaller fish don’t need to move away. Assessing whether it is necessary to flee seems foolish but is important because in their natural habitat there are many pike and other fish swimming about, so moving away is not always a good strategy: one can jump out of the way of one snapping predator into the jaws of another.
Milinski found that stickleback fish rely on the Tit-for-Tat strategy during this risky maneuver. If a pike shows up in the neighborhood, two sticklebacks often swim together in short spurts toward the open mouth of the predator to size him up. Each spurt can be thought of as a single round of the Dilemma. Cooperating in this game of chicken is best for both fish, since it cuts the risk of being eaten. This is due to the “predator confusion” effect: pike can waste valuable time when they have to decide which of two or more prey to strike first, a real-life version of the paradox of Buridan’s ass, the hypothetical situation in which a donkey cannot choose between two stacks of hay and so dies of hunger. Yet each little fish has an understandable incentive to hang back a little and let the other stickleback soak up more of the risk.
To investigate what was going through their little fishy heads, Milinski made ingenious use of a mirror. When held in the right place, it could create the illusion that a single stickleback was accompanied by another scout. By tilting the looking glass, Milinski could make it seem to a stickleback scout that his mirror-image “companion” was either cooperating by swimming alongside or falling behind and defecting, like the officer leading the charge who slowly slips behind his troops and out of harm’s way. The lead scout would often react to the apparent defection of its mirror fish by slowing down or turning tail, without completing its scouting mission. If the mirror image kept pace with the scout, the latter usually approached the predator more closely than it would if swimming alone.
NOISE
So far, so satisfyingly straightforward. But there is a problem with Tit for Tat, one that is not immediately obvious when using computer programs that interact flawlessly. Humans and other animals make mistakes. Sometimes their wires get crossed. Sometimes the players become distracted. They suffer mood swings. Or they simply have a bad day. Nobody’s perfect, after all. One type of mistake is due to a “trembling hand”: I would like to cooperate but I slip up and fail to do so. Another is caused by a “fuzzy mind”: I am convinced that this person was mean to me and defected in the last round, when in fact he did not. Perhaps I was confusing him with someone else. Trembling hands and fuzzy minds lead to what I call “noisy” interactions.
The significant role of noise for the evolution of cooperation was first pointed out in a paper in the journal Nature by Robert May of Oxford University, a brilliant former physicist who would come to exert a profound influence on theoretical biology. Bob (being Australian, he prefers “Bob”) is best known for the great strides he made in putting ecology on a mathematical basis. In his short essay he argued that evolutionary biologists should study the influence of mistakes on the repeated Prisoner’s Dilemma. He realized that the conclusions from a game that is perfectly played, as was the case in Axelrod’s tournaments, are not necessarily robust or realistic.
This is an important point. Even infrequent mistakes can have devastating consequences. When pitched against another player adopting the same approach, the Tit-for-Tat strategy can trigger endless cycles of retaliation. Since all it knows how to do is strike back at defectors, one scrambled signal or slipup can send Tit for Tat spiraling ever downward into vendettas that overshadow those seen in Romeo and Juliet, between the Hatfields and McCoys, or anything witnessed in Corsica, for that matter. The obvious way to end this bloody spiral of retaliation is to let bygones be bygones: for example, only to demand revenge now and again, or to decide it by the throw of a die. Inspired by this important insight, I would extend Axelrod’s pioneering work and incorporate the effects of noise to make it more true to life.
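The downward spiral is easy to reproduce in a few lines. Here is a minimal sketch (my own illustration, not the tournament code): two Tit-for-Tat players cooperate flawlessly forever, but a single "trembling hand" slip locks them into alternating retaliation for the rest of the game.

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds, error_round=None):
    """Play two Tit-for-Tat players; optionally flip player A's move once."""
    a_hist, b_hist = [], []
    for r in range(rounds):
        a = tit_for_tat(b_hist)
        b = tit_for_tat(a_hist)
        if r == error_round:
            a = "D"  # the "trembling hand": A meant to cooperate but slipped
        a_hist.append(a)
        b_hist.append(b)
    return "".join(a_hist), "".join(b_hist)

print(play(8))                 # flawless play: mutual cooperation throughout
print(play(8, error_round=2))  # one slip: retaliation alternates ever after
```

Notice that neither player ever defects twice in a row; each is faithfully punishing the other's last punishment, which is exactly why the vendetta never ends on its own.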
TAKE ADVANTAGE OF MISTAKES
As I studied for my doctorate with Karl, we devised a way to take confusion, slips, and mistakes into account. In the jargon, instead of the conventional deterministic strategies we used probabilistic strategies, where the outcome of the game becomes fuzzier and more random. We decided to explore the evolution of cooperation when there is noise by holding a probabilistic tournament in a computer, building on Axelrod’s pioneering work. The idea was to use a spectrum of strategies, generated at random by mutation and evaluated by natural selection.
All of our strategies were influenced by chance. They would cooperate with a certain probability after the opponent had cooperated, and they would also cooperate with a certain probability after the opponent had defected. Think of it this way: we are able to put varying shades of “forgiveness” in the set of strategies that we explore. Some forgive one defection out of two. Others one out of five, and so on. And some strategies, of course, are unbending. These Old Testament-style strategies almost never forgive. As was the case with the Grim strategy, they refuse ever to cooperate again after an opponent has defected only once.
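These shades of forgiveness can be sketched as a pair of probabilities. The representation below is a toy of my own, not the actual tournament code: p is the chance of cooperating after the opponent cooperated, q the chance after a defection.

```python
import random

# Toy representation (an illustrative assumption): a strategy is a pair
# (p, q), where p = probability of cooperating after the opponent
# cooperated, and q = probability of cooperating after a defection.

def move(strategy, opponents_last_move, rng):
    p, q = strategy
    chance = p if opponents_last_move == "C" else q
    return "C" if rng.random() < chance else "D"

tit_for_tat   = (1.0, 0.0)  # reciprocates, never forgives
forgive_half  = (1.0, 0.5)  # forgives one defection out of two
forgive_fifth = (1.0, 0.2)  # forgives one defection out of five
# (True Grim needs memory of every past defection, which a reactive
# pair like this cannot express on its own.)

rng = random.Random(0)
print(move(forgive_half, "D", rng))  # "C" or "D", at the flip of a coin
```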
To study the evolution of cooperation, we seasoned the mix with the process of natural selection so that winning strategies multiplied while less successful rivals fell by the wayside and perished. The strategies that got the most points would be rewarded with offspring: more versions of themselves, all of which would take part in the next round. Equally, those that did badly were killed off. For extra realism, we arranged it so that reproduction was not perfect. Sometimes mutation could seed new strategies.
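One generation of that selection step might look like the following compressed sketch. The fitness-proportional sampling and the 1 percent mutation rate are my own illustrative assumptions, not parameters taken from the actual experiments.

```python
import random

MUTATION_RATE = 0.01  # assumed for illustration

def next_generation(population, scores, rng):
    """Strategies leave offspring in proportion to their tournament scores;
    reproduction occasionally mutates an offspring into a fresh random
    (p, q) strategy, seeding novelty into the population."""
    offspring = rng.choices(population, weights=scores, k=len(population))
    return [
        (rng.random(), rng.random()) if rng.random() < MUTATION_RATE else s
        for s in offspring
    ]

rng = random.Random(0)
population = [(rng.random(), rng.random()) for _ in range(10)]
scores = [1.0] * 10  # placeholder: in the real loop, scores come from games
population = next_generation(population, scores, rng)
print(len(population))  # population size stays constant across generations
```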
Now Karl and I could sit back and watch the strategies slug it out in our creation over thousands and thousands of generations. Our fervent hope was that one strategy would emerge victorious. Even though no evolutionary trajectory ever quite repeated itself, there were overall patterns and consistency in what we observed. The tournament always began with a state of “primordial chaos.” By this I mean that there were just random strategies. Out of this mess, one, Always Defect, would inevitably take an early lead: as in so many Hollywood movies, the baddies get off to a flying start.
For one hundred generations or so, the Always Defect strategy dominated our tournament. The plot of life seemed to have a depressing preface in which nature appeared cold-eyed and uncooperative. But there was one glimmer of hope. In the face of this unrelenting enemy, a beleaguered minority of Tit for Tat players clung on at the edge of extinction. Like any Hollywood hero, their time in the sun would eventually come: when the exploiters had no one left to exploit, and all the suckers had been wiped out, the game would suddenly reverse direction. Karl and I took great pleasure in watching the Always Defectors weaken and then die out, clearing a way for the triumphant rise of cooperation.
When thrown into a holdout of die-hard defectors, a solitary Tit for Tat will do less well than defecting rotters, because it has to learn the hard way, always losing the first round, before switching into retaliatory mode. But when playing other Tit for Tat–ers, it will do significantly better than Always Defect and other inveterate hard-liners. In a mixture of players who adopt Always Defect and Tit for Tat, even if the latter only makes up a small percentage of the population, the “nice” policy will start multiplying and quickly take over the game. Often the defectors do so poorly that they eventually die out, leaving behind a cooperative population consisting entirely of Tit for Tat.
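A back-of-the-envelope calculation shows why even a small Tit-for-Tat minority can multiply. I assume here the standard Prisoner's Dilemma payoffs (temptation 5, reward 3, punishment 1, sucker's payoff 0) and games of a fixed number of rounds; these numbers are illustrative, not taken from the tournaments above.

```python
def avg_payoff(x, m):
    """x: fraction of Tit for Tat in the population; m: rounds per game.
    Returns the average per-game payoff of (Tit for Tat, Always Defect)."""
    # TFT vs TFT: mutual cooperation, 3 per round.
    # TFT vs AllD: loses the first round (0), then mutual defection (1 each).
    tft = x * 3 * m + (1 - x) * (m - 1)
    # AllD vs TFT: exploits once (5), then mutual defection.
    # AllD vs AllD: mutual defection throughout.
    alld = x * (5 + (m - 1)) + (1 - x) * m
    return tft, alld

print(avg_payoff(0.10, 10))  # 10% Tit for Tat already outscores Always Defect
print(avg_payoff(0.02, 10))  # at 2%, the defectors still have the upper hand
```

With ten-round games, the crossover sits at a Tit-for-Tat share of just 1/17, a little under 6 percent, which is why a small "nice" cluster is enough to tip the whole population.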
But Karl and I were in for a surprise. In our computer tournaments, Tit for Tat–ers did not ultimately inherit the Earth. They eventually lost out to their nicer cousins, who exploited Tit for Tat’s fatal flaw of not being forgiving enough to stomach the occasional mishap. After a few generations, evolution will settle on yet another strategy, which we called Generous Tit