Principles Of Evolution
Survival Of The Fittest
A central concept in nature is that a species will survive if it is fit for its environment. An important distinction is that being fit for an environment does not necessarily mean being stronger, faster, or stealthier. For example, rodents are often slower and weaker than their predators. Part of the reason they have survived is their small size, which allows them to hide from predators. Rodents that hide better typically survive longer, enabling them to reproduce more often. Over generations, rodents that are bad at hiding are killed off while rodents that are good at hiding survive, until the species as a whole becomes better at hiding. This process of favoring beneficial traits over time is called natural selection: individuals with higher fitness are more likely to survive and reproduce.
When a member of a species reproduces, it passes down traits to its children. This is why, if both of your parents need glasses, you are likely to also need glasses within your life. Each person's genes are some combination of their parents' genes. Sometimes when genes are passed down, they experience something called mutation, where parts of the genetic code are changed slightly, resulting in a slight change to the individual. Usually mutations are not helpful (for example, genetic disease), but sometimes they can improve the fitness of an individual. In combination with natural selection, mutation and genetic inheritance can lead to major improvements in a species over long periods of time.
A genetic algorithm is a method of evolving a population of candidate solutions toward high fitness - that is, toward doing some task well. A genetic algorithm utilizes both main principles of evolution. The details vary from task to task, but the algorithm typically employs a generation - a set of members designed to do a task - each of which attempts the task at hand and is given a fitness score. The members of the generation with the highest fitness are given a higher probability of reproducing. A new generation is produced from the previous one by combining some members and adding in some small mutations. Eventually, the population evolves to maximize the fitness function. This is loosely analogous to gradient descent, which also reaches a good solution through many small steps - though a genetic algorithm's steps are random and filtered by selection, rather than computed from a gradient.
Here is an example of a genetic algorithm that attempts to match a target word using random mutation and a simple fitness function.
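A minimal sketch of such an algorithm is below. The target word, population size, mutation rate, and parent-selection cutoff are illustrative choices, not prescribed values; fitness is simply the number of characters that match the target.

```python
import random
import string

TARGET = "evolution"      # the word the population tries to match (illustrative)
POP_SIZE = 100            # members per generation
MUTATION_RATE = 0.05      # chance each character mutates during reproduction
CHARS = string.ascii_lowercase

def fitness(candidate):
    # Count positions where the candidate matches the target word.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Each character has a small chance of being replaced at random.
    return "".join(
        random.choice(CHARS) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

def evolve(pop_size=POP_SIZE, max_generations=1000):
    # Start with a generation of completely random strings.
    population = [
        "".join(random.choice(CHARS) for _ in range(len(TARGET)))
        for _ in range(pop_size)
    ]
    for generation in range(max_generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if best == TARGET:
            return best, generation
        # The fittest fifth of the generation reproduces (with mutation);
        # the single best member is carried over unchanged.
        parents = population[: pop_size // 5]
        population = [best] + [
            mutate(random.choice(parents)) for _ in range(pop_size - 1)
        ]
    return population[0], max_generations

best, gens = evolve()
print(best, gens)
```

Keeping the single best member unchanged each generation (elitism) ensures the best fitness never regresses, so the population steadily climbs toward the target.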
Genetic algorithms can be used as an alternative to backpropagation in the optimization of neural networks. A generation of networks is created with random weights and biases. Each network attempts to perform its intended task and is given a fitness score based on how well it did. The best networks reproduce into the next generation and are mutated slightly, adjusting the weights and biases up or down slightly.
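This loop can be sketched for a tiny network. The 2-2-1 architecture, the XOR task, and the mutation scale below are all illustrative assumptions used to keep the example self-contained; each network is stored as a flat list of 9 weights and biases.

```python
import math
import random

# XOR training data: fitness rewards networks that reproduce these outputs.
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def random_network():
    # A 2-2-1 tanh network stored as a flat list of 9 weights and biases.
    return [random.uniform(-1, 1) for _ in range(9)]

def forward(w, x):
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Higher is better: negative total squared error over the XOR cases.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR_DATA)

def mutate(w, rate=0.5, scale=0.3):
    # Nudge some weights up or down slightly with Gaussian noise.
    return [wi + random.gauss(0, scale) if random.random() < rate else wi
            for wi in w]

def neuroevolve(pop_size=100, generations=300):
    population = [random_network() for _ in range(pop_size)]
    history = []  # best fitness recorded each generation
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        history.append(fitness(population[0]))
        survivors = population[: pop_size // 5]
        # Keep the best network unchanged and fill the rest with mutants.
        population = survivors[:1] + [
            mutate(random.choice(survivors)) for _ in range(pop_size - 1)
        ]
    return max(population, key=fitness), history

best, history = neuroevolve()
print("fitness:", fitness(best))
```

No gradients are computed anywhere: the weights improve purely through selection of the fittest mutants, which is the defining difference from backpropagation.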
In neuro-evolution there are many methods of mutation that can be applied to a network. The simplest and most common is to add or subtract a small number from the mutated weights. This method works well, but it is certainly not the only one, and other methods of mutation may prove beneficial depending on the intended task of the neural networks. Other common methods of mutating neural networks include:
Multiply some weights or biases by a random value close to 1. (e.g. 0.8 - 1.25)
Replace some weights or biases with a completely new value.
Negate some weights or biases.
Swap some weights with other weights.
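The methods above can each be sketched as a small function operating on a flat list of weights. The mutation rates and value ranges below are illustrative defaults, not recommended settings.

```python
import random

def perturb(weights, rate=0.1, scale=0.1):
    # Add or subtract a small number from some weights (the most common method).
    return [w + random.uniform(-scale, scale) if random.random() < rate else w
            for w in weights]

def rescale(weights, rate=0.1):
    # Multiply some weights by a random value close to 1 (e.g. 0.8 - 1.25).
    return [w * random.uniform(0.8, 1.25) if random.random() < rate else w
            for w in weights]

def replace(weights, rate=0.1):
    # Replace some weights with a completely new value.
    return [random.uniform(-1, 1) if random.random() < rate else w
            for w in weights]

def negate(weights, rate=0.1):
    # Flip the sign of some weights.
    return [-w if random.random() < rate else w for w in weights]

def swap(weights, rate=0.1):
    # Swap some weights with other randomly chosen weights.
    weights = list(weights)
    for i in range(len(weights)):
        if random.random() < rate:
            j = random.randrange(len(weights))
            weights[i], weights[j] = weights[j], weights[i]
    return weights
```

Note that each method preserves something different: `swap` keeps the exact set of values, `negate` keeps their magnitudes, and `rescale` keeps their signs, which affects how drastically a single mutation can change a network's behavior.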
There is no definitive way of deciding on a method of mutation for a neural network; such decisions come down to intuition and experimentation. Employing multiple methods of mutation together is common.