
2. Evolution in ecosystems

Evolution proceeds by the variation and selection of replicators. The most familiar example, of course, is biological evolution, in which genes are the replicators [7,8], mutation is the variation mechanism, and a variety of environmental pressures (predation, parasitism, competition for scarce resources) act as selection mechanisms. The following will explore a variety of other ecosystems, the nature of their replicators, and the nature of their processes of variation and selection.

2.1. Ecosystems

Ecosystems provide contexts for evolution in two distinct senses. First, the ecosystem's replicators evolve in response to unchanging aspects of their environment such as climate and physical principles. Second, the replicators interact (through predation, competition, cooperation, and so forth), with each replicator providing part of the context for others. When replicators respond to external selective pressure, but not to each other, the result is an evolutionary system, but not an ecosystem. In such a non-ecological system, the subtlety and sophistication of the selective pressures are fixed.

The environmental richness and complexity generated when evolving entities can interact arguably leads to increased richness and complexity of the resulting system: selective pressures are then themselves the results of evolution. Analyzing such a system may be difficult, however. With many complex, multi-way feedback loops possible, can one be confident in the stability or direction of the overall system? This depends on the nature of the forces and feedback.

The analysis of ecosystems frequently involves non-intuitive secondary and tertiary effects. The Axelrod tournament provides a simple example of these effects.

2.2. The Axelrod tournament

Robert Axelrod developed an ecosystem in which the entities interact in rounds of iterated prisoner's dilemma games [5]. To understand it, one must first understand the dilemma itself. Instead of the traditional scenario of prisoners under interrogation, Hofstadter's illustration, based on the following scenario, is more intuitive.

Two dealers arrange to trade items. Each dealer agrees to place his item in a bag and leave it at a designated place to be retrieved by the other dealer. The dilemma presents itself when each dealer considers how best to interact with the other, given that they are strangers, will not meet face to face for this exchange, and will never deal with each other in the future. As Hofstadter describes, each dealer reasons as follows:

"`If the [other] dealer brings a full bag, I'll be better off having left an empty bag, because I'll have gotten all that I wanted and given away nothing. If the dealer brings an empty bag, I'll be better off having left an empty bag, because I'll not have been cheated. I'll have gained nothing but lost nothing either. Thus it seems that no matter what the dealer chooses to do, I'm better off leaving an empty bag. So I'll leave an empty bag.' . . .And so both of you, with your impeccable (or impeccable-seeming) logic, leave empty bags, and go away empty handed. How sad, for if you both had just cooperated, you could have each gained something you wanted to have. Does logic prevent cooperation? This is the issue of the Prisoner's Dilemma." [emphasis in the original] [9]

The underlying strategic situation can be made precise in the following fashion: In a single prisoner's dilemma interaction, two players each choose between moves (termed cooperate and defect) in ignorance of the other's choice. If both cooperate, both are rewarded (in Axelrod's case, with a payoff of 3 points). If one cooperates and the other defects, the defector receives an even greater reward (5 points), while the cooperator receives nothing (0 points). If both defect, both are punished (by receiving only 1 point).

In a single move, each player has an incentive to defect regardless of the other player's move, but double-cooperation is better than double-defection. Overall, pairs of players that cooperate earn higher scores than those that do not.
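These incentives can be checked mechanically. The following is a minimal sketch using Axelrod's point values; the table and function names are ours, not Axelrod's:

```python
# Payoff table for one prisoner's dilemma move: (my points, other's
# points), indexed by (my move, other's move). 'C' = cooperate,
# 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),   # reward for mutual cooperation
    ('C', 'D'): (0, 5),   # cooperator is exploited by the defector
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),   # punishment for mutual defection
}

def my_payoff(my_move, other_move):
    """Points I earn on a single move."""
    return PAYOFFS[(my_move, other_move)][0]

# Defection dominates on a single move: whatever the other player
# does, defecting pays more than cooperating...
for other in ('C', 'D'):
    assert my_payoff('D', other) > my_payoff('C', other)

# ...yet mutual cooperation beats mutual defection for both players.
assert my_payoff('C', 'C') > my_payoff('D', 'D')
```

The two assertions together are the dilemma: individually rational moves lead to a jointly inferior outcome.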

In an iterated prisoner's dilemma game, two players go through a long series of moves, and can base their actions on the history of play. When one expects (and hopes for) further transactions with the other party, simple defection no longer seems as attractive. Indeed, by running a computer tournament, Axelrod showed that the logic of an iterated prisoner's dilemma actually fosters cooperation.
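The iterated game itself is easy to sketch. In this illustration (the structure and names are ours), a strategy is a function from the two move histories to a move, and two strategies play a 200-move series:

```python
# Single-move payoffs, using Axelrod's point values.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_series(strategy_a, strategy_b, rounds=200):
    """Play an iterated prisoner's dilemma. Each strategy maps
    (own history, opponent's history) to 'C' or 'D'.
    Returns the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

always_cooperate = lambda own, opp: 'C'
always_defect = lambda own, opp: 'D'

# Over 200 moves, mutual cooperation (600, 600) far outscores mutual
# defection (200, 200), even though a lone defector can still exploit
# an unconditional cooperator (1000, 0).
```

History-sensitive strategies slot into the same interface, since each function sees the full record of past moves.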

Robert Axelrod ran a Computer Prisoner's Dilemma Tournament based on the above rules. A diverse group of game theorists was invited to submit programs to play against each other in a round-robin of games, each averaging 200 single moves. After the first tournament, Axelrod circulated the results, including the nature of the winning program (judged by cumulative points), and solicited entries for a second tournament.

Axelrod's pair of tournaments may be described as a simple evolutionary ecosystem. The replicators were the programs themselves (or the strategies those programs embody), the variation mechanism was human ingenuity (since programs were modified between tournaments), and the selection criterion during a tournament was simply the number of points earned. Programs interacted with each other in an environment imposing certain rules, and their success depended on each other's behavior. Further, Axelrod went on to simulate the population dynamics of a set of programs, given the assumption that points earned determined the "population density" of that program in the next time period.

In both tournaments a very simple program won. That program was TIT FOR TAT, submitted by psychologist Anatol Rapoport. In the population dynamics simulation, the success of TIT FOR TAT was even more pronounced. Analyzing TIT FOR TAT's success can suggest how to analyze other ecosystems.

2.2.1. The triumph of TIT FOR TAT

All sorts of strategies were submitted, including many which used complex reasoning based on past interactions, and one which responded randomly. The success of a strategy depended on whether it was:

  • Nice: never defected first,
  • Retaliatory: responded to defection with defection (thereby punishing defectors), and
  • Forgiving: eventually stopped defecting in response to cooperation.

TIT FOR TAT is the simplest example of a nice, retaliatory, and forgiving strategy. It cooperates on the first move and then does whatever the opposing player did on the previous move.
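That rule translates almost directly into code. In this sketch (our naming), a history is a list of 'C'/'D' moves, oldest first:

```python
def tit_for_tat(own_history, opponent_history):
    """Cooperate on the first move; thereafter echo the opponent's
    previous move."""
    if not opponent_history:
        return 'C'                   # nice: never defects first
    return opponent_history[-1]

# Retaliatory: a defection is answered with a defection.
assert tit_for_tat(['C'], ['D']) == 'D'
# Forgiving: renewed cooperation is answered with cooperation.
assert tit_for_tat(['C', 'D'], ['D', 'C']) == 'C'
```

All three traits fall out of a two-line rule, which is part of what made its victory over far more elaborate programs so striking.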

Other strategies can be classified according to which of the above traits they lack. A strategy which initiates a defection (and thereby is not nice) may be termed a con man, since it is trying to gain at the expense of (and with the foolish cooperation of) its opponent; the simplest con man always defects. A nice strategy which does not defect in response to a defection (and thereby is not retaliatory) may be termed a sucker, since it can be taken advantage of by con men.

Con men have an advantage over TIT FOR TAT in that they can fully exploit suckers, while TIT FOR TAT only cooperates with them. Given the mix of strategies submitted to Axelrod's tournaments, TIT FOR TAT won both. A con man strategy could have won, however, had the initial population included enough suckers. Since con men could have won in this case, how can one claim that TIT FOR TAT is fundamentally more viable than con men? Axelrod's population dynamics simulation helps answer this question.
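A sketch of such a simulation makes the point concrete. The three-strategy mix below and its starting proportions are our illustration, not Axelrod's actual entrant pool; following his assumption, each strategy's share in the next period is proportional to the points it earns against the current mix:

```python
# Single-move payoffs, using Axelrod's point values.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_series(strat_a, strat_b, rounds=200):
    """Iterated game; each strategy maps (own history, opponent's
    history) to 'C' or 'D'. Returns the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(own, opp): return 'C' if not opp else opp[-1]
def con_man(own, opp): return 'D'    # never nice: always defects
def sucker(own, opp): return 'C'     # never retaliates

strategies = [tit_for_tat, con_man, sucker]

def next_generation(shares):
    """One population-dynamics step: weight each strategy's
    round-robin score by its opponents' shares, multiply by its own
    share, and renormalize."""
    earned = []
    for i, si in enumerate(strategies):
        per_capita = sum(shares[j] * play_series(si, sj)[0]
                         for j, sj in enumerate(strategies))
        earned.append(shares[i] * per_capita)
    total = sum(earned)
    return [e / total for e in earned]

# Start con men and suckers far ahead of TIT FOR TAT.
shares = [0.1, 0.3, 0.6]
for _ in range(50):
    shares = next_generation(shares)
# Con men first boom by exploiting suckers, but as the suckers die
# off the con men lose their prey, and TIT FOR TAT takes over.
```

The dynamics here mirror Axelrod's finding: a con man's advantage is parasitic on a supply of suckers, and selection removes that supply.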


