This is part of a series entitled Genetics In Action.
I’m in the process of creating a new talk, entitled Genetics In Action: Evolutionary Algorithms. I have no idea where, outside of work, I’m going to give this talk; still, it’s a topic I love and maybe I’ll sneak it in at some conference somewhere.
Here’s my abstract:
Evolutionary algorithms help us solve optimization problems where we know what the answer is, but don’t know how to get that answer. In this talk, we will look at how different evolutionary algorithms apply the concepts of genetics to discover formulas and patterns. Specifically, we will look at genetic algorithms and genetic programming, digging into how they work and solving a number of problems with each. We will also include a crash course on basic genetics, just in case high school biology isn’t fresh in your mind.
With that in mind, the next several posts will relate to evolutionary algorithms, how they work, and a few of the examples I’m going to use in my demos.
What Makes For A Good Solution?
Revamping a blog post from a decade ago: good solutions to problems tend to share seven positive qualities. They are:
- Correctness – Abstracting away from imprecisions in measurement, the goal is to find the a priori correct solution.
- Consistency – Do the same problem, get the same answer.
- Justifiability – Cause implies effect; results come from premises through the application of logical rules.
- Certainty – Chance should ideally not be involved in the solution.
- Orderliness – The process is a logical, step-by-step process in which each step leads to the next.
- Parsimony – Given several possible solutions which have the same explanatory power, the simplest is usually the best.
- Decisiveness – Once a solution is found, the problem is solved. In particular, there is a well-defined solution concept.
These are principles that John Holland talks about in his groundbreaking work, Adaptation in Natural and Artificial Systems. They are seven properties that I think we can all agree are good things to have, and we’d expect them to be important in coming up with solutions to problems.
Evolutionary algorithms fail every one of these criteria. Despite that, they are very useful tools for solving problems.
What Evolutionary Algorithms Are Out There?
There are four major classes of evolutionary algorithms. They are:
- Genetic algorithms
- Genetic programming
- Evolutionary programming
- Gene expression programming
For this series, I’m going to focus on the two that I’m most familiar with: genetic algorithms and genetic programming.
What Is An Evolutionary Algorithm?
Evolutionary algorithms are programming techniques which take advantage of biological metaphors to solve a particular class of problem. At a high level, the algorithmic mechanisms aren’t that difficult to understand, although in practice, you can go significantly deeper than what I’m going to cover in this series.
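To make that high-level mechanism concrete, nearly every evolutionary algorithm follows the same loop: build a random population, score each individual, let the fittest reproduce (with recombination and mutation), and repeat. Here's a minimal sketch in Python; the function names, parameters, and truncation-selection scheme are my own choices for illustration, not any standard library's API:

```python
import random

def evolve(random_individual, fitness, crossover, mutate,
           pop_size=50, generations=100):
    """Generic evolutionary loop: evaluate, select, recombine, mutate."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Score and rank the population; the top half survives (truncation selection).
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]
        # Fill the rest of the next generation with mutated children of two parents.
        children = []
        while len(children) < pop_size - len(parents):
            mom, dad = random.sample(parents, 2)
            children.append(mutate(crossover(mom, dad)))
        population = parents + children  # keeping parents gives us elitism
    return max(population, key=fitness)
```

The biological metaphor is all there in miniature: selection pressure (sorting by fitness), sexual reproduction (crossover), and random variation (mutation). Real implementations differ mainly in how they encode individuals and how they select parents.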
What Kinds Of Problems Does An Evolutionary Algorithm Solve?
There is a particular class of problems that evolutionary algorithms are great at solving. These are problems with a number of characteristics:
- They include very large search spaces. Let’s suppose you can set the values of variables x1…x20, and each variable can be an integer between 0 and 9. One way of finding the best solution would be to loop through each potential solution, starting from (0, 0, 0, …, 0, 0, 0) up through (9, 9, 9, …, 9, 9, 9). There’s just one problem: doing this would take 10^20 trials. 10^20 is 100,000,000,000,000,000,000. That’s a pretty big number.
- There is a known way to score answers. Evolutionary algorithms are supervised learning algorithms; in order to evolve a solution, we need an end goal.
- The solution is too complex to do by hand. If you can write it out yourself (or calculate it easily), you don’t need a complex algorithm!
- We expect that we have the tools necessary to come up with a solution. As we’ll see, evolutionary algorithms only work if we provide the right building blocks.
- The environment changes regularly, meaning the appropriate solution changes over time. This is not a requirement, but in this scenario, evolutionary algorithms perform much better than many of their static counterparts.
- The problem is a “hill-climbing” problem, where fitness is approximately continuous. Evolutionary algorithms can handle “wavy” fitness functions, so multi-peaked fitness landscapes are fine. But if the fitness function is mostly made up of discontinuous jumps, the likelihood of success goes down.
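To put some flesh on the search-space point: here's a hedged sketch (the digit-sum fitness function and all parameter choices are mine, picked for illustration) that evolves a solution to the x1…x20 problem above. We score a candidate by summing its digits, so we know the best possible answer is all 9s with a score of 180, and the algorithm finds a near-optimal answer in a few thousand evaluations rather than 10^20:

```python
import random

SEARCH_SPACE = 10 ** 20   # brute force would try every combination of 20 digits
fitness = sum             # toy scoring function: best possible answer is all 9s (180)

def evolve_digits(pop_size=60, generations=150, mutation_rate=0.05):
    """Evolve a list of 20 digits (0-9) toward the maximum digit sum."""
    population = [[random.randint(0, 9) for _ in range(20)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]   # truncation selection with elitism
        children = []
        while len(children) < pop_size // 2:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 20)        # one-point crossover
            child = a[:cut] + b[cut:]
            # Each gene has a small chance of being replaced with a random digit.
            child = [random.randint(0, 9) if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```

With a population of 60 over 150 generations, we perform 9,000 evaluations — a vanishingly small fraction of the 10^20 candidates a brute-force loop would have to examine.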
Okay, So What Kinds Of Problems Does An Evolutionary Algorithm Solve?
Hey, wait, I already answered that! Oh, fine, you want a list of specific examples… Very well. The short answer is that evolutionary algorithms are most useful for solving NP-hard problems, where we don’t have a one-size-fits-all solution.
- Optimization Problems – Finding the minimum or maximum value of a mathematical function, circuit layout, antenna design, the traveling salesman problem, the knapsack problem.
- Machine Learning Problems – Finding optimal weights for neural nets, generating rules for classification engines, training robotic sensors.
- Economic Models – Portfolio bidding, game theory.
- Ecological Models – Host-parasite co-evolution, symbiosis, resource flow.
These are particular examples of cases in which we see the characteristics I described above.
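As one concrete mapping from the list above: the knapsack problem can be encoded for a genetic algorithm as a bitstring (one bit per item) plus a fitness function that penalizes overweight solutions. A hedged sketch — the items, capacity, and hard-penalty scheme below are my own inventions for illustration:

```python
# Hypothetical knapsack instance: (value, weight) pairs and an arbitrary capacity.
ITEMS = [(10, 5), (40, 4), (30, 6), (50, 3), (35, 7), (25, 2)]
CAPACITY = 15

def knapsack_fitness(bits):
    """Score a candidate: total value of selected items, or zero if overweight."""
    value = sum(v for b, (v, w) in zip(bits, ITEMS) if b)
    weight = sum(w for b, (v, w) in zip(bits, ITEMS) if b)
    return value if weight <= CAPACITY else 0   # hard penalty for infeasibility
```

Once the problem is encoded this way, any generic evolutionary loop can optimize over 6-bit candidates; the fitness function is the only problem-specific piece, which is exactly why these algorithms apply to such a broad list of domains.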
Why Not Use Evolutionary Algorithms For Everything?
Evolutionary algorithms aren’t suitable for all types of problems. If you don’t have a way of programmatically judging correctness (e.g., if you’re performing an unsupervised learning task such as clustering), then evolutionary algorithms simply won’t work. Also, evolutionary algorithms share a set of negative tendencies:
- There is no guarantee that an evolutionary algorithm will find a solution, even if there really is one.
- There is no guarantee of good performance or that the algorithm will finish in a reasonable time.
- The answers may differ each time you run the algorithm.
- It is often hard to tell when the algorithm should stop: I’ve seen cases where there’s stasis for 40-50 generations and then a sudden breakthrough.
- You can get stuck in local minima/maxima with evolutionary algorithms (though EAs are usually better at breaking free of these than simple hill-climbing techniques).
- There is no guarantee that the solution your algorithm provides will be in the easiest-to-understand form. There may be ways to simplify the evolved formula or decision tree further, as evolutionary algorithms do not prevent things like contradictory branches or vestigial code. The “best” solution is simply the one which comes closest to our desired solution, and the size of the evolved result is usually irrelevant in that regard.
Despite all of these issues, evolutionary algorithms are very powerful techniques. Over the course of the next few blog posts, I’m going to go into more detail. The next post will provide a crash course on genetics. After that, we’ll focus on genetic algorithms and then genetic programming.