Evolving Game AI – Part 3 (Algorithm Implementation and Issues)

posted on: Sunday, February 17, 2013

This is part three of a three-part series. For the first installment, see Part One (Overview). For the second, see Part Two (Design and Genome).

Genetic algorithm implementation details

Our genetic algorithm had an incredibly large search space to explore; given this, it was imperative that multiple instances of the algorithm be run in parallel. Several different populations were maintained across multiple computers, with the standard population consisting of 150 AIs. The most successful members of different populations were occasionally merged manually into a single population, so as to allow for cross-population competition. Particularly successful members were also manually archived for later inclusion in a sample pack of pre-evolved AIs (available now on StratLoc's homepage). Overall, while many hundreds of thousands of games of StratLoc were played as part of the genetic algorithm, it was possible to go through only 40 or 50 generations of evolution per population.
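As a rough sketch of the merging step, something like the following Python could collect each population's top performers into a single pool. The call signature, `fitness` function, and `n_best` cutoff here are illustrative assumptions, not StratLoc's actual code:

```python
def merge_best(populations, fitness, n_best=10):
    """Collect the top n_best AIs from each independently evolved
    population into a single pool for cross-population competition.
    (The names and cutoff here are hypothetical.)"""
    merged = []
    for population in populations:
        merged.extend(sorted(population, key=fitness, reverse=True)[:n_best])
    return merged
```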

In contrast to many applications of genetic algorithms, one of the easiest parts of evolving AIs for StratLoc was the creation of the fitness function. Given that our goal was to produce AIs capable of playing the game competently, it made sense to rank an AI's success by the percentage of its played games it won when pitted against other (randomly selected) AIs from the same gene pool.
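Concretely, the rating amounted to a simple win-rate calculation. A minimal sketch, assuming `wins` and `games_played` bookkeeping fields on each AI (names mine, not StratLoc's):

```python
def fitness(ai):
    # An AI's fitness is the fraction of its played games it won.
    return ai.wins / ai.games_played if ai.games_played else 0.0
```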

After each AI had participated in at least 5 games, all AIs with a below-average fitness rating were discarded from the population. The empty slots were refilled with children of various successful AIs. Three different methods were used to mate AIs, as sketched below. The first took the first half of one parent's genome and combined it with the second half of the other parent's genome. The second mating method was similar, but instead of splitting the genomes evenly, it took a random number of bytes from one parent's genome and filled in the rest from the other parent. The third and final mating method was a simple "zipper" operation whereby the bytes of the child genome alternated in their parental source. Mutation was performed after mating on 5% of all children. If selected for mutation, each byte of an AI's genome had a 6.67% chance of being replaced by a new, random byte.
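A sketch of the selection, mating, and mutation steps might look like the following, assuming genomes are lists of byte values. The helper names are mine, not StratLoc's:

```python
import random

MUTATION_RATE = 0.05           # 5% of children are selected for mutation
BYTE_MUTATION_CHANCE = 0.0667  # each byte of a mutated child: 6.67%

def cull(population, fitness):
    """Discard every AI whose fitness is below the population average."""
    average = sum(fitness(ai) for ai in population) / len(population)
    return [ai for ai in population if fitness(ai) >= average]

def mate_halves(a, b):
    """Method 1: first half of one parent's genome, second half of the other's."""
    mid = len(a) // 2
    return a[:mid] + b[mid:]

def mate_random_split(a, b):
    """Method 2: like mate_halves, but the crossover point is random."""
    point = random.randrange(len(a) + 1)
    return a[:point] + b[point:]

def mate_zipper(a, b):
    """Method 3: child bytes alternate between the two parents."""
    return [a[i] if i % 2 == 0 else b[i] for i in range(len(a))]

def maybe_mutate(child):
    """Mutate 5% of children; each byte of a mutated genome has a
    6.67% chance of being replaced by a new random byte."""
    if random.random() < MUTATION_RATE:
        return [random.randrange(256)
                if random.random() < BYTE_MUTATION_CHANCE else byte
                for byte in child]
    return child

def breed(a, b):
    """Produce a child via a randomly chosen mating method, then mutate."""
    method = random.choice([mate_halves, mate_random_split, mate_zipper])
    return maybe_mutate(method(a, b))
```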

Issues during evolution process

As the AIs were being evolved, members of the development team would occasionally take several members of a population and begin a game using them. Several days into AI generation, we observed that, although the AIs were successfully winning games of StratLoc against each other, their play was severely uninteresting. The AIs never seemed to build any cities or even construct mines. Instead, the metagame of the populations seemed to favor the evolution of zerg-rush-like strategies. In the case of StratLoc, this meant spending the initial resources given to a player on a military unit instead of a worker (with which to cultivate an economy). The AIs would then be unable to afford anything else, and were stuck with a single anti-air unit. However, the fast production of said unit would more often than not allow them to defeat other AIs who were "properly" playing the game. When several AIs deployed the same rush strategy, the game essentially became a matter of which AI could get its unit to its opponents' undefended cities first.

That this was allowed to happen was the result of several failures on my part. The first was a misunderstanding of what we were really attempting to evolve: while the fitness function used by the genetic algorithm was successfully producing AIs that could win the game, what we wanted were AIs that were interesting and challenging for humans to play against. That is to say, while rushing may technically have been the best strategy for StratLoc, it certainly wasn't one that made for enjoyable and engrossing gameplay. Solving this involved awarding AIs not just a single point for winning a game, but also additional partial points for each city owned at the end of the game. This meant that even AIs that didn't outright win games could still be rewarded for having built (or at least captured) a large number of cities; it also meant that, overall, games in which more cities were produced awarded higher fitness ratings to the participating AIs.
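In terms of the fitness sketch above, the change amounted to folding city ownership into the score. The `CITY_POINTS` weight below is a hypothetical value; the exact partial credit used isn't specified here:

```python
CITY_POINTS = 0.25  # hypothetical weight per city owned at game's end

def fitness(ai):
    # One full point per win, plus partial credit for each city owned
    # at the end of each game, averaged over all games played.
    score = ai.wins + CITY_POINTS * ai.cities_owned_at_game_end
    return score / ai.games_played if ai.games_played else 0.0
```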

The second issue highlighted by the evolution and dominance of the rush strategy was one of game balance, for which I was also responsible. To counteract the success of the rush, I adjusted the costs of units and players' initial starting resources such that it was impossible to build a military unit on the first turn. In this way, the only possible path to building military units was to first build a worker with which to develop an economy, then go from there. While this simple change would eventually have resulted in AIs that were more economically minded during the early game, the prevalence of the rush strategy amongst the AIs meant that, for many generations, they would have continued to try (unsuccessfully) to build military units at the game's start. It was therefore necessary (for the sake of time, at least) to hard-code the AIs' actions for the first few turns of the game, leading them to build workers and create mines. While this may in some sense have ruined the "purity" of the genetic algorithm, it ultimately made the game more interesting and playable for humans and spared the AIs from having to re-evolve more sensible strategies for the beginning of the game.
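The hard-coded opening might be wrapped around the evolved decision logic roughly as follows. The turn numbers, action names, and interface are all hypothetical stand-ins, not StratLoc's actual code:

```python
def choose_action(ai, game_state, turn):
    # Hard-coded opening: build a worker, then mines, before the
    # evolved genome takes over. All names here are illustrative.
    if turn == 1:
        return "build_worker"             # economy before military
    if turn <= 3:
        return "build_mine"               # establish resource income
    return ai.evolved_action(game_state)  # defer to the evolved AI
```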

This concludes my series on the evolution of StratLoc's AIs. The full source code of StratLoc, as well as sample AIs (and the game itself), can all be found here. If you find yourself with any questions, or would like to talk about your own experiences evolving game AIs, please feel free to contact me.