time was ripe to pin down the idea.

      When, for example, Karl and I made the simulation more realistic and allowed for mutations, or mistakes in an evolving population of players, then we saw cooperation and defection wax and wane over time, as those with a good reputation are actually undermined by indiscriminate altruists who help anyone, no matter how well or badly the latter have behaved in the past. Then, free riders—unconditional defectors—invade until discriminating cooperators cycle back in. Given my earlier work on the Prisoner’s Dilemma, I was not surprised by this. But anyone unfamiliar with the field would have found it striking how the degree of cooperation always rides a seesaw of cycles.
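
      For readers who want to tinker, here is a minimal agent-based sketch of this kind of simulation, written in Python. It is not the code Karl and I used: the three strategies (unconditional cooperators, unconditional defectors, and discriminators who consult a public image score), the parameter values, and the reproduction rule are all illustrative assumptions. It simply shows the ingredients (reputation, selection, and mutation) whose interplay can produce the waxing and waning described above.

import random

N = 100            # population size (illustrative)
ROUNDS = 300       # donation rounds per generation
GENERATIONS = 200
COST, BENEFIT = 1.0, 4.0
MUTATION = 0.01    # chance that a newborn adopts a random strategy

# Three strategies: unconditional cooperators, unconditional defectors,
# and discriminators who help only recipients with a non-negative image score.
STRATEGIES = ["ALLC", "ALLD", "DISC"]

def play_generation(strategies):
    """Total payoff per player after many random donor/recipient pairings."""
    payoff = [0.0] * N
    image = [0] * N        # public image score: giving raises it, refusing lowers it
    for _ in range(ROUNDS):
        donor, recipient = random.sample(range(N), 2)
        s = strategies[donor]
        gives = s == "ALLC" or (s == "DISC" and image[recipient] >= 0)
        if gives:
            payoff[donor] -= COST
            payoff[recipient] += BENEFIT
            image[donor] += 1
        else:
            image[donor] -= 1
    return payoff

def next_generation(strategies, payoff):
    """Offspring in proportion to payoff (shifted to be positive), plus mutation."""
    low = min(payoff)
    weights = [p - low + 1e-6 for p in payoff]
    children = random.choices(strategies, weights=weights, k=N)
    return [random.choice(STRATEGIES) if random.random() < MUTATION else child
            for child in children]

population = [random.choice(STRATEGIES) for _ in range(N)]
for g in range(GENERATIONS):
    payoff = play_generation(population)
    population = next_generation(population, payoff)
    if g % 20 == 0:
        print(g, {s: population.count(s) for s in STRATEGIES})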

      Importantly, we found that natural selection favored so-called discriminating strategies, which pay attention to the reputations of others. These strategies prefer to interact with people who have a good reputation. Thus natural selection (acting in the framework of indirect reciprocity) promotes social intelligence: observe others, learn about them, understand who did what to whom and why.

      Karl and I also made an intriguing discovery that underlined how, when people act on their convictions, it can come at a cost. Refusing help to a free rider or other defector lowers the score of discriminating players so that, even though they may have acted for good reason, they might come over as Bad Samaritans. A colleague at work lets you down badly and you snap at her. From the perspective of fellow workers in a calm open plan office, your angry outburst makes you appear out of control. Or you may decide not to help a tramp because he whispered an insult at you. To an onlooker on the other side of the road, however, it looks like you have turned your back on a poor and deserving vagrant. This also diminishes your likelihood of being helped in turn.

      The bottom line of our theory was that an act of altruism will only evolve when the shadow of the future—that is, the expectation of coming gains—exceeds the cost. This idea could in turn be summed up by a simple mathematical relationship, spelled out in symbols below: cooperation can evolve (emerge) when the probability of knowing someone’s reputation exceeds the cost-to-benefit ratio of the altruistic act. Karl and I submitted our work to the prestigious journal Nature. The paper was accepted without much bother and was published in 1998. In its wake came many more papers on the subject of indirect reciprocity, including experimental confirmation.
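
      In symbols: writing q for the probability of knowing someone’s reputation, c for the cost to the donor, and b for the benefit to the recipient, cooperation can emerge when q > c/b.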

      In this way, our walk on the Kahlenberg had turned out to be a eureka moment, one of the most romantic and best-known feelings described in accounts of research. The vanishingly unlikely part of a eureka story is not the pounding heartbeat that comes with a novel insight but the awareness that you really did have a Big Idea, one that had an impact. And there’s the rub. This tends to creep up rather slowly in science. Karl and I were lucky because sometimes the full significance of a eureka moment will only emerge much later. In fact it can often take years for an idea to become concrete. Sometimes longer than a lifetime. I was once moved by a biographer’s words about the Austrian composer Franz Schubert: how “a later world would give him his due, slowly though it came to him at first.”

      THE EVIDENCE

      There’s a telling joke among scientists that every new theory has to pass through three phases of “acceptance”: first, it is completely ignored; second, it is obviously wrong; and third, it is obviously right, but everyone knew that anyway. Karl and I were fortunate. We did not become the punch line of this old joke—at least not this time.

      A couple of years after our walk, we found ourselves writing a comment for the journal Science on a clever piece of experimental research that had provided backing for our Nature paper on indirect reciprocity. Working at the University of Bern in Switzerland, Claus Wedekind and Manfred Milinski had started out with seventy-nine first-year students, all blissfully unaware of concepts such as reciprocal altruism, and invited them to take part in a game in which they had the option to donate money to other individuals in the group.

      The game consisted of encounters between pairs of students who were connected by a computer network. One student was the “donor,” the other the “recipient.” If the donor paid one Swiss franc out of his account, the recipient would receive four. Thus the cost for the donor was 1 SFr and the benefit for the recipient was 4 SFr. As ever, for productive cooperation the benefit must exceed the cost. Alternatively the donor could decide to pay nothing and, of course, the recipient would not receive a bean. When making his decision about whether to give or to hang on to his money, the donor was informed about what the recipient donated in previous rounds. For example, a donor could learn that her recipient was stingy and never gave a thing, or was relatively generous and gave two out of three times. In order to exclude the effects of direct reciprocity, the experiment was arranged in such a way that the same two students did not meet again.
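
      To make the bookkeeping concrete, here is a small Python sketch of a single encounter. The decision rule is an assumption made purely for illustration (the donor gives if the recipient gave in at least half of her previous opportunities); in the real experiment the students made their own choices.

COST_SFR, BENEFIT_SFR = 1, 4   # one franc paid by the donor, four received by the recipient

def encounter(donor, recipient, accounts, history):
    """One round: the donor decides based on the recipient's past giving record."""
    past = history[recipient]
    # Illustrative rule: give if the recipient has been generous at least half the time.
    gives = not past or sum(past) >= len(past) / 2
    if gives:
        accounts[donor] -= COST_SFR
        accounts[recipient] += BENEFIT_SFR
    history[donor].append(gives)
    return gives

# Example: player 0 donates to player 1, who gave two out of three times before.
accounts = {0: 0, 1: 0}
history = {0: [], 1: [True, True, False]}
encounter(0, 1, accounts, history)
print(accounts)   # {0: -1, 1: 4}: the group is 3 SFr better off per act of giving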

      The outcome of the experiment was convincing. Wedekind and Milinski found that even when there is no chance of direct reciprocity players are generous to each other provided that they have an opportunity to keep tally of the actions of their fellow player. We cooperate more with those who have a good reputation. As a result, people who started off by being generous ended up with a high payoff. We like to give to those who have given to others. Give and you shall receive!

      THE MORAL SPECTRUM

      Let’s examine one subtlety of my computer simulation of indirect reciprocity. If you see a Bad Samaritan and refuse to help him, you yourself could end up looking like another Bad Samaritan who in turn would be rejected by others (even though you had a very good reason to be a Bad Samaritan). A smarter rule should distinguish between justified and unjustified defections and should therefore take into account the reputation of the receiver too: someone withholding help from a “bad” player should not damage his own reputation as a result.

      One way to extend the work Karl and I had done was to study the effects of these more sophisticated rules. To make the problem tractable, it helps to assume that there are only two kinds of reputation: good and bad. In this world of binary moral judgments there are four ways of assessing donors in terms of “first-order assessment”: always consider them as good, always consider them as bad, consider them as bad if they give and good otherwise, or consider them as good if they give and bad otherwise. Only the last option can lead to cooperation based on good reputation.

      Second-order assessment rules take into account the reputation of the receiver too, so we are now able to consider the wider circumstances; as mentioned already, it can be deemed good to refuse help to a bad person. There are sixteen of these second-order rules. There are also third-order rules, which depend additionally on the score of the donor (after all, a person with a poor reputation might try to “buy” a good one by being more generous to those with good reputations). And so on. In all, there are 256 third-order rules.

      Once we have assessed the players to the first, second, or whatever order we decide, we then have to work out what to do. Do we give help, or do we walk on by? This is decided by a so-called action rule. The action rule depends on the recipient’s score and on one’s own (there are four possible combinations of the two scores and thus a total of sixteen action rules). For example, you may decide to help if the recipient’s score is good or your own score is bad. You might reason that by doing this you might increase your own score and therefore increase the chance of receiving help in the future.

      A strategy is the combination of an action rule and an assessment rule. Given the above, we obtain 16 times 256, which is 4,096 strategies. That is a lot. Nonetheless, this universe of strategic possibilities has been explored in a remarkable study at Kyushu University in Fukuoka that was the basis of the doctoral thesis of a brilliant theoretician, Hisashi Ohtsuki, who will also feature in later chapters.
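
      The counting itself is easy to verify by brute force. The short Python sketch below enumerates every third-order assessment rule and every action rule as look-up tables over good and bad reputations; it only tallies the size of the strategy space and says nothing about which strategies prevail, which was the subject of Ohtsuki’s study.

from itertools import product

actions = ("give", "refuse")
reputations = ("good", "bad")

# A third-order observer sees the donor's action, the recipient's reputation,
# and the donor's own reputation: 2 x 2 x 2 = 8 distinct situations.
situations = list(product(actions, reputations, reputations))

# An assessment rule assigns "good" or "bad" to each situation: 2**8 of them.
assessment_rules = list(product(reputations, repeat=len(situations)))

# An action rule maps (own score, recipient's score) to give or refuse: 2**4 of them.
action_rules = list(product(actions, repeat=len(reputations) ** 2))

print(len(assessment_rules))                       # 256 third-order assessment rules
print(len(action_rules))                           # 16 action rules
print(len(assessment_rules) * len(action_rules))   # 4,096 strategies in all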

      Ohtsuki’s adviser was the formidable Japanese mathematical biologist Yoh Iwasa. During my first visit to Japan, almost every person I met introduced himself or herself as a student of Iwasa. I became curious about this beacon of inspiration. I wanted to meet the professor who was “the number one in Japan.” Yoh himself always jokes that, while most Japanese names mean “the great” or “the brilliant,” his actually means only “the mediocre.” But again he’s being modest. In fact, his name signifies “the golden mean,” the most desirable position of perfect balance.

