SparkCognition joined fellow researchers this year at the 2018 Genetic and Evolutionary Computation Conference (GECCO) in Kyoto, Japan. In recent years, evolutionary computation has emerged as a powerful tool for solving complex tasks, such as supervised learning and reinforcement learning problems.
At SparkCognition, we are dedicated to harnessing and advancing the latest machine learning techniques with an eye for real-world applications. For us, one of these tools is evolutionary computation. We’re excited to share what we learned at the conference, as well as some of our recent research!
What is evolutionary computation?
Evolutionary computation is a family of optimization algorithms that incorporate principles from biological evolution. An evolutionary algorithm typically maintains a population of candidate solutions to a given problem. This population is refined over successive generations: candidate solutions are varied and evaluated, and those that perform poorly are removed, much like natural selection.
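The loop described above can be sketched in a few lines of Python. The code below is a minimal, illustrative evolutionary algorithm (not any particular published method); the function names, hyperparameters, and toy fitness function are all our own choices for the example. Each generation, the top half of the population survives and each survivor produces one mutated offspring.

```python
import random

def evolve(fitness, pop_size=50, genome_len=10, generations=100,
           mutation_scale=0.1, seed=0):
    """Minimal evolutionary loop: evaluate, select the fittest, mutate."""
    rng = random.Random(seed)
    # Initial population: random real-valued genomes.
    population = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate and rank candidates by fitness (higher is better).
        population.sort(key=fitness, reverse=True)
        # Selection: keep the top half, discard the rest.
        survivors = population[:pop_size // 2]
        # Mutation: each survivor produces one perturbed offspring.
        offspring = [[g + rng.gauss(0, mutation_scale) for g in parent]
                     for parent in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

# Example: maximize -sum(x^2), whose optimum is the all-zeros genome.
best = evolve(lambda genome: -sum(x * x for x in genome))
```

Truncation selection with mutation-only reproduction is just one of many possible schemes; real evolutionary algorithms vary widely in how they select parents and produce offspring.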
Some evolutionary algorithms also incorporate other concepts from biological processes. For example, NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm that simultaneously optimizes the structure and parameters of a neural network for a given problem. Genetic algorithms such as NEAT are a subclass of evolutionary algorithms that encode each solution as a set of properties, or genotype, which can be mutated and recombined over time. NEAT leverages this genome representation to combine traits of high-performing candidate solutions (crossover) and to segment its population into species to preserve diversity and protect innovation (speciation).
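As a toy illustration of these genotype-level operators (using a fixed-length, real-valued genome for simplicity, rather than NEAT's variable network topology), mutation and crossover might look like this:

```python
import random

rng = random.Random(42)

def mutate(genome, rate=0.2, scale=0.5):
    """Perturb each gene independently with probability `rate`."""
    return [g + rng.gauss(0, scale) if rng.random() < rate else g
            for g in genome]

def crossover(parent_a, parent_b):
    """Uniform crossover: each gene is inherited from a random parent."""
    return [a if rng.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

parent_a = [1.0, 1.0, 1.0, 1.0]
parent_b = [0.0, 0.0, 0.0, 0.0]
child = mutate(crossover(parent_a, parent_b))
```

NEAT's actual operators are more involved: its genomes also gain new nodes and connections over time, and historical markers are used to align genes during crossover.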
The Power and Limitations of Evolutionary Computation
Evolutionary algorithms have been applied to a wide range of tasks in machine learning, producing many successes and even state-of-the-art performance.
For example, Salimans et al. recently showed that evolution strategies can produce policies that perform competitively with state-of-the-art policy gradient methods that rely on backpropagation on popular reinforcement learning problems such as Atari game playing1. Additionally, they showed that evolution strategies were able to achieve these results much more quickly than alternative methods. This speedup was possible because popular reinforcement learning algorithms are difficult to parallelize (structure so that pieces of the work can run on different processors at the same time for a faster result), whereas evolution strategies allow computation to be easily spread across many CPU cores.
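The core update behind this style of evolution strategy can be sketched as follows. This is a simplified, single-process sketch under our own naming and hyperparameter choices; the paper adds refinements such as rank-based fitness shaping, mirrored sampling, and a distributed implementation that shares random seeds between workers.

```python
import random

def evolution_strategy(fitness, dim=5, pop_size=100, sigma=0.1,
                       alpha=0.05, iters=200, seed=0):
    """Perturb the parameters with Gaussian noise, then move along the
    fitness-weighted average of the perturbations (a search-gradient
    estimate of the expected reward)."""
    rng = random.Random(seed)
    theta = [rng.uniform(-1, 1) for _ in range(dim)]
    for _ in range(iters):
        # Sample perturbations and evaluate each perturbed candidate.
        # Each evaluation is independent, which is what makes the method
        # easy to spread across many CPU cores.
        noise = [[rng.gauss(0, 1) for _ in range(dim)]
                 for _ in range(pop_size)]
        rewards = [fitness([t + sigma * e for t, e in zip(theta, eps)])
                   for eps in noise]
        baseline = sum(rewards) / pop_size
        # Fitness-weighted sum of perturbations estimates the gradient.
        for j in range(dim):
            grad_j = sum((r - baseline) * eps[j]
                         for r, eps in zip(rewards, noise)) / (pop_size * sigma)
            theta[j] += alpha * grad_j
    return theta

# Toy problem: maximize -||x||^2, optimum at the origin.
best = evolution_strategy(lambda x: -sum(v * v for v in x))
```

Note that no backpropagation is required: only scalar rewards flow back into the update, which is why the method parallelizes so cheaply.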
Evolutionary algorithms have also yielded success in supervised learning tasks. For instance, NEAT has been leveraged as both a feature selection mechanism, responsible for removing irrelevant or redundant features in input data, and as a direct model building tool for evolving neural networks to solve binary classification tasks such as breast cancer diagnosis2,3. Variants of NEAT, such as HyperNEAT, have also been used to evolve neural networks for larger scale image classification problems4,5. HyperNEAT uses a compositional pattern-producing network (CPPN) to generate the weights for a large number of connections in a neural network, producing connectivity patterns with symmetries and repeating motifs.
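The indirect-encoding idea can be illustrated with a toy sketch. Here a small, hand-written pattern-producing function stands in for the CPPN (in real HyperNEAT, the CPPN's own structure and weights are evolved by NEAT rather than written by hand); it is queried once per connection, at the coordinates of the two neurons it joins, to fill an entire weight matrix.

```python
import math

def cppn(x1, y1, x2, y2):
    """Toy stand-in for a CPPN: maps the coordinates of two neurons to
    a connection weight. Depending only on distance makes the resulting
    connectivity pattern symmetric, and the sine term makes it repeat."""
    d = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return math.sin(3.0 * d) * math.exp(-d * d)

def substrate_weights(n):
    """Query the pattern function once per connection between an input
    layer (y = 0) and an output layer (y = 1) of n neurons each."""
    coords = [i / (n - 1) for i in range(n)]
    return [[cppn(coords[i], 0.0, coords[j], 1.0) for j in range(n)]
            for i in range(n)]

weights = substrate_weights(8)
```

Because the weights are generated rather than stored gene-by-gene, a compact genome can describe networks with far more connections than the genome has parameters.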
Despite their many attractive qualities and successes, evolutionary methods do have limitations. Specifically, though they tend to be easily parallelizable, they are also computationally expensive. Evolutionary methods must maintain and evaluate large numbers of candidate solutions, and for many problems this evaluation takes a significant amount of time and compute power.
Additionally, evolutionary algorithms may struggle to efficiently discover solutions when posed with very large or complex search spaces. In optimization, a search space is defined as the set of all possible solutions, good or bad, to a problem. In such large spaces, evolutionary algorithms often become trapped in local optima, meaning that they get stuck at a suboptimal solution. This is an unfortunate limitation, because complex search spaces are quite common in practical applications. The ability to traverse such wide search spaces efficiently with evolutionary algorithms would be a significant improvement over the current state of the art.
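The local-optimum failure mode is easy to reproduce with the simplest possible evolutionary algorithm, a (1+1) strategy (a population of one that keeps a mutated offspring only if it improves). On a toy multimodal landscape of our own construction, small mutations cannot carry the search across the valleys between peaks:

```python
import math
import random

def one_plus_one(fitness, start, step=0.05, iters=500, seed=0):
    """(1+1) strategy: accept a random perturbation only if it improves."""
    rng = random.Random(seed)
    x, best = start, fitness(start)
    for _ in range(iters):
        cand = x + rng.gauss(0, step)
        f = fitness(cand)
        if f > best:
            x, best = cand, f
    return x

# Multimodal landscape: global optimum at x = 0, but the cosine term
# carves many local peaks into the quadratic bowl.
fitness = lambda x: -x * x + 0.5 * math.cos(8 * x)

# Started far from the optimum with a small mutation size, the search
# stalls on one of the outer local peaks instead of reaching x = 0.
trapped = one_plus_one(fitness, start=3.0)
```

Escaping would require a single lucky mutation many standard deviations wide; larger populations, bigger mutation scales, and diversity-preserving mechanisms like NEAT's speciation all exist partly to mitigate exactly this behavior.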
In the next post, we will look at some of the specific advances made towards this goal that were discussed at GECCO, including surrogate models and search space visualization methods. We will also introduce some recent research from SparkCognition, also presented at GECCO, which brings evolutionary algorithms to the realm of real-world, supervised classification problems.
Read The State of Evolutionary Computation, Part Two here.
1 Salimans, Tim, et al. “Evolution strategies as a scalable alternative to reinforcement learning.” arXiv preprint arXiv:1703.03864 (2017). https://arxiv.org/pdf/1703.03864.pdf
2 Sohangir, Soroosh, Shahram Rahimi, and Bidyut Gupta. “Neuroevolutionary feature selection using NEAT.” Journal of Software Engineering and Applications 7.07 (2014): 562. https://file.scirp.org/pdf/JSEA_2014060514363160.pdf
3 Verzola, Matteo. Neuroevolution of augmenting topologies applied to the open racing car simulator. Diss. University of Illinois at Chicago, 2008. http://www.bcc.ufrpe.br/sites/www.bcc.ufrpe.br/files/Luiz%20França.pdf
4 Stanley, Kenneth O., David B. D’Ambrosio, and Jason Gauci. “A hypercube-based encoding for evolving large-scale neural networks.” Artificial life 15.2 (2009): 185-212. http://axon.cs.byu.edu/Dan/778/papers/NeuroEvolution/stanley3.pdf
5 Verbancsics, Phillip, and Josh Harguess. “Generative neuroevolution for deep learning.” arXiv preprint arXiv:1312.5355 (2013). https://arxiv.org/pdf/1312.5355.pdf