*This is part of a series entitled Genetics In Action.*

In the last post, we looked at solving a basic genetic algorithms problem: finding a global maximum for a function consisting of a single real-valued variable. Today, we are going to look at a more complex interaction and solve one of life’s more interesting challenges: the ~~knapsack~~ Holyfield problem.

### The Holyfield Problem

The Holyfield Problem stems from the classic Sega Genesis video game, Evander Holyfield’s Real Deal Boxing.

The purpose of this game was as complicated as it was challenging: to make the best boxer there ever was and defeat your father figure in the ring. There’s a Freudian element in there I’m not particularly comfortable with, but I don’t get to choose the plots of video games.

Anyhow, as mentioned, you want to be the best there is, and the only way to be the best is to train the best. That’s why the game offers you a number of training opportunities, each of which builds up at least one of the four core skills necessary to punch large men repeatedly in a ring.

Above, we see eight of the options available, and of the three I’ve selected (punching bag, jump rope, and loose weights), what they would do for my namesake. All in all there are 19 training options available, and figuring out the best combination sounds like a job for a genetic algorithm!

### Defining The Problem

Let’s take a moment and build out a proper definition of the problem. As mentioned, we have 19 potential training options available to us. Each training option has some positive effect on at least one of four stats: power, stamina, speed, and defense. Each training option also comes at a cost, and we have a fixed budget. Our goal is to maximize a fighter fitness function (which I’ll describe below) contingent upon remaining at or below the total allowable cost.

Note that the search space here is the product set of alleles. We have 19 genes, each of which has 2 alleles, so it’s 2^19. That means that there are 524,288 potential training regimens. Many of these will be over cost, but that still leaves a large number to sift through.

### Building The Chromosome

Each organism in this example will have a single chromosome with 19 genes, representing each of the 19 training options. Because we select training options without replacement, the best representation for each gene is a binary set of alleles: either we have selected the training option or we have not selected it; we cannot select it twice.
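As a quick sketch (the variable name here is mine, not from the game), a chromosome is nothing more than a length-19 binary vector, which base R can generate at random:

```r
# A chromosome is a length-19 binary vector: one gene per training option,
# with allele 1 (option selected) or 0 (option not selected).
set.seed(42)  # for reproducibility
random_chromosome <- sample(c(0, 1), size = 19, replace = TRUE)
random_chromosome
```

Most randomly generated regimens like this one will blow past the training budget, which is exactly why we will need a fitness function to separate the contenders from the pretenders.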

Let’s start writing some code:

```r
if(!require(GA)) {
  install.packages("GA", repos = "http://cran.us.r-project.org")
  library(GA)
}
if(!require(ggplot2)) {
  install.packages("ggplot2", repos = "http://cran.us.r-project.org")
  library(ggplot2)
}

training_options <- data.frame(
  item = c("Exercycle", "Head Guard", "Health Club", "Iron Gum Shield",
           "Jog Machine", "Jump Rope", "Karate", "Loose Weights",
           "Multi Gym", "Power Gloves", "Protein Diet", "Punch Bag",
           "Running Shoes", "Sparring", "Speed Bag", "Speed Boots",
           "Step-O-Matic", "Track Work", "Vitamins"),
  stamina = c(2, 0, 1, 0, 2, 1, 0, 0, 1, 0, 1, 2, 2, 0, 1, 0, 2, 1, 1),
  speed   = c(1, 0, 1, 0, 1, 1, 2, 0, 1, 0, 1, 0, 1, 0, 1, 4, 0, 2, 1),
  power   = c(0, 0, 1, 0, 0, 0, 0, 2, 1, 3, 1, 2, 0, 0, 0, 0, 2, 0, 1),
  defense = c(0, 2, 1, 3, 0, 0, 3, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0),
  cost    = c(2, 2, 3, 1, 3, 1, 2, 1, 3, 3, 2, 2, 1, 3, 2, 3, 3, 2, 2))

training_balance <- 8
```

We have a total training budget of 8 and can see the training options available. Let’s view this in a table format, as that’s easier to read:

item | stamina | speed | power | defense | cost |
---|---|---|---|---|---|
Exercycle | 2 | 1 | 0 | 0 | 2 |
Head Guard | 0 | 0 | 0 | 2 | 2 |
Health Club | 1 | 1 | 1 | 1 | 3 |
Iron Gum Shield | 0 | 0 | 0 | 3 | 1 |
Jog Machine | 2 | 1 | 0 | 0 | 3 |
Jump Rope | 1 | 1 | 0 | 0 | 1 |
Karate | 0 | 2 | 0 | 3 | 2 |
Loose Weights | 0 | 0 | 2 | 0 | 1 |
Multi Gym | 1 | 1 | 1 | 0 | 3 |
Power Gloves | 0 | 0 | 3 | 0 | 3 |
Protein Diet | 1 | 1 | 1 | 0 | 2 |
Punch Bag | 2 | 0 | 2 | 0 | 2 |
Running Shoes | 2 | 1 | 0 | 0 | 1 |
Sparring | 0 | 0 | 0 | 4 | 3 |
Speed Bag | 1 | 1 | 0 | 0 | 2 |
Speed Boots | 0 | 4 | 0 | 0 | 3 |
Step-O-Matic | 2 | 0 | 2 | 0 | 3 |
Track Work | 1 | 2 | 0 | 0 | 2 |
Vitamins | 1 | 1 | 1 | 0 | 2 |

Each option has a cost between 1 and 3 points and generates at least 1 point of skill. Here’s a sample chromosome, where I selected a few options:

```r
sample_chromosome <- c(1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0)
training_options[sample_chromosome == 1, ]
```

The sample_chromosome is a vector of 0/1 selections, and we use it to index the training_options data frame, extracting only the options I selected:

 | item | stamina | speed | power | defense | cost |
---|---|---|---|---|---|---|
1 | Exercycle | 2 | 1 | 0 | 0 | 2 |
4 | Iron Gum Shield | 0 | 0 | 0 | 3 | 1 |
5 | Jog Machine | 2 | 1 | 0 | 0 | 3 |
11 | Protein Diet | 1 | 1 | 1 | 0 | 2 |
This is a valid set of options, in that its cost is within our training budget. How good is it, though? Well, in order to figure that out, we need to define a fitness function.

### We Pump And We Pump

Our fitness function will explain to us just how good each candidate solution is.

```r
EvaluateBoxerTraining <- function(chromosome) {
  # Each of the training stats follows a log form, so a balanced boxer is better.
  # Then, we'll emphasize speed over power over defense over stamina because reasons.
  eval_speed   <- 2.2 * log2(chromosome %*% training_options$speed)
  eval_power   <- 1.9 * log2(chromosome %*% training_options$power)
  eval_defense <- 1.6 * log2(chromosome %*% training_options$defense)
  eval_stamina <- 1.3 * log2(chromosome %*% training_options$stamina)

  eval_score <- eval_speed + eval_power + eval_defense + eval_stamina
  eval_cost <- chromosome %*% training_options$cost

  if (eval_cost > training_balance)
    return(0)
  else
    return(eval_score)
}
```

Let’s walk through this for a moment. Starting with speed, we can see that there’s a function to determine the speed score. That function takes the base-2 log of `chromosome %*% training_options$speed`. This is matrix multiplication of the chromosome vector times the speed vector in the training_options data frame. Going back to linear algebra, multiplying a 1×N matrix by an N×1 matrix gives us a 1×1 result. Because our chromosome is made up of 0s and 1s, what we end up doing is adding up all of the speed points for the options we’ve selected, so that’d be 3 in our sample chromosome above (1×1 + 1×0 + 1×1 + 1×1). As another example, for defense, the calculation is the same: (1×0 + 1×3 + 1×0 + 1×0 = 3). This is a nice way of eliminating an unnecessary for loop in R, and when you’re doing these calculations thousands of times, it can shave off enough time to make this worth knowing.

So we perform matrix multiplication to get the sum of our speed bonuses, which is 3. Then we take the base-2 log of 3 (which is about 1.585) and multiply it by 2.2, giving us a speed score of about 3.487.
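To make that concrete, here is a minimal sketch (vectors copied from the tables above) showing that `%*%` applied to a 0/1 chromosome is just a masked sum:

```r
# The chromosome masks the speed vector; %*% sums the selected entries.
chromosome <- c(1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0)
speed      <- c(1, 0, 1, 0, 1, 1, 2, 0, 1, 0, 1, 0, 1, 0, 1, 4, 0, 2, 1)

as.numeric(chromosome %*% speed)  # 3
sum(chromosome * speed)           # same result, computed element-wise
```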

Our fitness function is the summation of these four scores: the scores for speed, power, defense, and stamina. If we go over the training balance, we return 0 for the fitness score, as it’s an invalid combination; otherwise, we return the fitness score we calculated.

As a quick aside, the reason I take the base-2 log of each of these stat totals is that it gives us a nice property in R: if the speed total is 0, then `log2(0)` is -Inf, which we can guarantee will not be the best answer. This requires our boxers to have a minimum value in all four attributes.
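A minimal sketch of that property: R happily evaluates the log of zero and propagates the result through arithmetic, so a boxer missing any one stat bottoms out the whole score.

```r
log2(0)              # -Inf
2.2 * log2(0)        # still -Inf
2.2 * log2(0) + 100  # -Inf swamps any bonus from the other stats
```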

Looking at this fitness function, it seems that we want *ceteris paribus* to emphasize speed and power over defense and stamina. Speed and power give us better bonuses, so picking options like speed boots and power gloves would seem to be winners.

Going back to our sample, if I evaluate the sample’s fitness using `EvaluateBoxerTraining(sample_chromosome)`, I get back a score of 9.041364. Is that a good score or a bad score? Who knows? We’ll have to test it against other scores to figure out if we’ve accidentally-not-accidentally picked the best set of training options.
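If you want to verify that 9.041364 by hand, the score decomposes into the four weighted base-2 log terms; the stat totals (speed 3, power 1, defense 3, stamina 5) come from summing the selected rows in the table above:

```r
# Weighted log2 scores for the sample regimen's stat totals.
score <- 2.2 * log2(3) +  # speed
         1.9 * log2(1) +  # power (log2(1) = 0)
         1.6 * log2(3) +  # defense
         1.3 * log2(5)    # stamina
round(score, 6)  # 9.041364
```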

### Starting A Boxing School

Now we want to build up our genetic algorithm and put tens of thousands of boxers to the test. Specifically, I am going to look at a population size of 160 candidate solutions over 100 generations, for a total of up to 16,000 searches—so if we looked at a random search mechanism with the same number of steps, we’d hit just over 3% of the total search space.
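The 3% figure falls straight out of the arithmetic. Here is a quick sanity check, assuming every evaluation hits a distinct chromosome (in practice some will repeat, so the true coverage is even lower):

```r
search_space <- 2^19          # 524,288 possible training regimens
evaluations  <- 160 * 100     # population size times max generations
evaluations / search_space    # ~0.0305, or just over 3%
```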

Our model is simple, though there are a couple of differences from the first genetic algorithm we saw.

```r
model <- ga(
  type = "binary",
  fitness = EvaluateBoxerTraining,
  popSize = 160,
  run = 50,
  nBits = 19,
  pcrossover = 0.8,
  pmutation = 0.2)
summary(model)
```

We are using binary genes, so the type is set to binary rather than real-valued. We have defined our fitness function and then set a few parameters. Our population size is 160 boxers, and each boxer has a chromosome with 19 bits. Our crossover probability is 80% and our mutation rate is 20%. Finally, the run parameter stops the simulation early if the most-fit boxer has not improved in 50 consecutive generations. This gives us a short-circuit opportunity in case we find the best answer early on.

Viewing the summary, we end up with a result something like this:

```
+-----------------------------------+
|         Genetic Algorithm         |
+-----------------------------------+

GA settings:
Type                  = binary
Population size       = 160
Number of generations = 100
Elitism               = 8
Crossover probability = 0.8
Mutation probability  = 0.2

GA results:
Iterations             = 69
Fitness function value = 15.35445
Solution =
     x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 x16 x17 x18 x19
[1,]  0  0  0  1  0  1  1  1  0   0   0   1   1   0   0   0   0   0   0
```

We went a total of 69 generations and picked a training set which gives us a fitness score of 15.35445, way better than the 9.041364 we had above. We can look at the solution to see exactly which options the algorithm chose:

```r
solution <- model@solution
solution_options <- training_options[solution == 1, ]
solution_options
```

 | item | stamina | speed | power | defense | cost |
---|---|---|---|---|---|---|
4 | Iron Gum Shield | 0 | 0 | 0 | 3 | 1 |
6 | Jump Rope | 1 | 1 | 0 | 0 | 1 |
7 | Karate | 0 | 2 | 0 | 3 | 2 |
8 | Loose Weights | 0 | 0 | 2 | 0 | 1 |
12 | Punch Bag | 2 | 0 | 2 | 0 | 2 |
13 | Running Shoes | 2 | 1 | 0 | 0 | 1 |

Neither of our big speed/power boosts shows up. Furthermore, using `colSums(solution_options[2:6])` to sum up each stat column, the results are surprising:

stamina | speed | power | defense | cost |
---|---|---|---|---|
5 | 4 | 4 | 6 | 8 |

Instead of skewing toward speed and power, the best solution actually skews toward defense and stamina!

Now that we’ve found the best solution, let’s use `plot(model)` to look at the evolution of our solution space over time:

Notice that there are several stair-step increases in the genetic algorithm results. In the first few generations, the best score was a 0. Eventually, we jump up to 10, and then a few jaunts up to the 13-14 range before settling on our final score of 15.35445. It took more than 35 generations for the median to reach our peak fitness, and notice the erratic behavior of the mean: that’s because we have cases with -Inf results dragging down the mean. But just like before, we’re not concerned with the specific values of anything except the best-fit result, so that’s fine.

### Conclusion

This was a more complicated genetic algorithm, but it shows a classic example of how to solve a problem. We need to be able to define the problem in terms of genes, alleles, and a fitness function. If we can do that, then we will be able to feed those into the genetic algorithm. Many problems are not as easy to fit into a genetic algorithm as our problem was, but if you’re able to define the problem in these terms, you can solve a complicated optimization problem while looking at a tiny fraction of the entire search space.

One nice property that I did not emphasize in the above example is that genetic algorithms don’t work on heuristics like we do. In my explanation, I gave what I thought would be the best solution heuristic: look for high-speed and high-power training options and focus on them. It turns out that this was not in fact a winning strategy. We don’t really have a way of telling the genetic algorithm to look in a certain part of the search space (not directly, at least), and that’s a good thing. Although you might think that narrowing the search space would help the algorithm, what it really does is make it more likely that we find a local maximum instead of the global maximum.

Next up on our agenda is genetic programming.