
NeuroEvolution of Augmenting Topologies



NeuroEvolution of Augmenting Topologies (NEAT) is a neuroevolution technique (a genetic algorithm for evolving artificial neural networks) developed by Kenneth O. Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin. It notably evolves both network weights and network structure, seeking a balance between the fitness of evolved solutions and their diversity. It is based on three key ideas: tracking genes with historical markings so that crossover can be performed between different topologies, protecting innovation through speciation, and building topology incrementally from an initial, minimal structure ("complexifying").
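The first idea, historical markings, can be sketched compactly. The following illustrative Python follows the crossover rule described by Stanley and Miikkulainen (2002), but the ConnectionGene record and all names are this sketch's own simplifications (matching genes inherited at random, disjoint and excess genes taken from the fitter parent, re-enabling of disabled genes omitted), not code from any particular NEAT package.

    import random
    from dataclasses import dataclass

    @dataclass
    class ConnectionGene:
        in_node: int       # source neuron id
        out_node: int      # target neuron id
        weight: float
        enabled: bool
        innovation: int    # historical marking assigned when this gene first arose

    def crossover(fitter, weaker):
        """Align two genomes by innovation number: matching genes are
        inherited at random; disjoint and excess genes are taken from
        the fitter parent."""
        weaker_by_innov = {g.innovation: g for g in weaker}
        child = []
        for gene in fitter:
            match = weaker_by_innov.get(gene.innovation)
            child.append(random.choice([gene, match]) if match is not None else gene)
        return child

Because innovation numbers record when each gene first appeared, two genomes with very different topologies can still be aligned gene by gene, which is what makes this crossover well defined.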

Performance

On simple control tasks, NEAT often converges on effective networks more quickly than a variety of other contemporary neuroevolution techniques and reinforcement learning methods.[1][2]

Complexification

Conventionally, neural network topology is chosen by a human experimenter, and a genetic algorithm is used to select effective connection weights. The topology of such a network stays constant throughout this weight selection process.

The NEAT approach begins with a perceptron-like feed-forward network consisting only of input and output neurons. As evolution progresses, the topology may be augmented either by adding a new neuron along an existing connection or by adding a new connection between previously unconnected neurons.
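Both structural mutations can be stated briefly. The sketch below (illustrative Python, reusing the ConnectionGene record from the earlier sketch) follows the standard NEAT rules: splitting a connection disables the old gene, gives the incoming replacement a weight of 1.0, and gives the outgoing replacement the old weight, so behavior is initially preserved. The bookkeeping that assigns the same innovation number to identical mutations within a generation, and the cycle check needed for strictly feed-forward networks, are omitted; next_innovation stands in for a global innovation counter and is an assumption of this sketch.

    def mutate_add_node(genome, new_node_id, next_innovation):
        """Split an enabled connection by routing it through a new neuron."""
        old = random.choice([g for g in genome if g.enabled])
        old.enabled = False
        # Into the new neuron: weight 1.0 keeps the signal unchanged.
        genome.append(ConnectionGene(old.in_node, new_node_id, 1.0, True,
                                     next_innovation()))
        # Out of the new neuron: inherits the old connection's weight.
        genome.append(ConnectionGene(new_node_id, old.out_node, old.weight, True,
                                     next_innovation()))

    def mutate_add_connection(genome, node_ids, next_innovation):
        """Connect two previously unconnected neurons with a random weight."""
        existing = {(g.in_node, g.out_node) for g in genome}
        candidates = [(a, b) for a in node_ids for b in node_ids
                      if a != b and (a, b) not in existing]
        if candidates:
            a, b = random.choice(candidates)
            genome.append(ConnectionGene(a, b, random.uniform(-1.0, 1.0), True,
                                         next_innovation()))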

Implementation

The original implementation by Ken Stanley is published under the GPL. It integrates with Guile, a GNU Scheme interpreter, and is generally regarded as the reference implementation of the NEAT algorithm.

Extensions to NEAT

rtNEAT

In 2003 Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the discrete generations used by most genetic algorithms. The basic idea is to keep the population under constant evaluation, with a "lifetime" timer on each individual. When a network's timer expires, its current fitness is examined; if it falls near the bottom of the population, the network is discarded and replaced by a new one bred from two high-fitness parents. A timer is set for the new network, which is then placed in the population to participate in the ongoing evaluations.
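The replacement cycle described above can be sketched as follows (illustrative Python; the individual's attributes, the breed helper, and the choice of "near the bottom" as the lowest tenth of the population are assumptions of this sketch, not details fixed by rtNEAT itself).

    import time

    def rtneat_step(population, lifetime_seconds, breed):
        """One pass of the rtNEAT replacement cycle: individuals whose
        lifetime has expired are either replaced (if weak) or given a
        fresh timer (if strong)."""
        now = time.time()
        # Fitness at roughly the 10th percentile: an assumed reading of
        # "near the bottom of the population".
        cutoff = sorted(p.fitness for p in population)[len(population) // 10]
        for ind in [p for p in population if now - p.birth_time > lifetime_seconds]:
            if ind.fitness <= cutoff:
                # Discard and replace with offspring of two high-fitness parents.
                parents = sorted(population, key=lambda p: p.fitness)[-2:]
                child = breed(parents[0], parents[1])
                child.birth_time = now   # start the new network's timer
                population.remove(ind)
                population.append(child)
            else:
                ind.birth_time = now     # survived this lifetime; reset its timer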

The first application of rtNEAT is a video game called Neuro-Evolving Robotic Operatives (NERO). In the game's first phase, individual players deploy robots in a 'sandbox' and train them toward a desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in battle against robots trained by another player, to see how well their training regimens prepared their robots for combat.

Phased Pruning

An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addressed the concern that unbounded, automated structural growth would accumulate unnecessary structure.
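Green's report describes alternating between complexifying and simplifying (pruning) phases. The sketch below illustrates that idea only, under assumptions made here: complexity is measured as the mean number of genes per genome, and the switching rules (prune once complexity exceeds a floating ceiling, resume complexifying once pruning stops reducing complexity) are a paraphrase, with all concrete numbers being arbitrary.

    def choose_phase(mean_complexity, state):
        """Decide whether the population should be complexifying (adding
        structure) or simplifying (pruning structure) this generation."""
        if state["phase"] == "complexify":
            if mean_complexity > state["ceiling"]:
                state["phase"] = "simplify"      # start pruning
        else:
            # Pruning has stopped paying off: complexify again and float
            # the ceiling up relative to the new, lower complexity.
            if mean_complexity >= state["last_complexity"]:
                state["phase"] = "complexify"
                state["ceiling"] = mean_complexity + state["offset"]
        state["last_complexity"] = mean_complexity
        return state["phase"]

    # Example initial state; the numbers are arbitrary for illustration.
    state = {"phase": "complexify", "ceiling": 30.0, "offset": 10.0,
             "last_complexity": 0.0}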

References

  1. ^ Kenneth O. Stanley and Risto Miikkulainen (2002). "Evolving Neural Networks Through Augmenting Topologies". Evolutionary Computation 10 (2): 99–127.
  2. ^ Matthew E. Taylor, Shimon Whiteson, and Peter Stone (2006). "Comparing Evolutionary and Temporal Difference Methods in a Reinforcement Learning Domain". GECCO 2006: Proceedings of the Genetic and Evolutionary Computation Conference.

Bibliography

  • Kenneth O. Stanley and Risto Miikkulainen (2002). "Evolving Neural Networks Through Augmenting Topologies". Evolutionary Computation 10 (2): 99-127.
  • Kenneth O. Stanley and Risto Miikkulainen (2002). "Efficient Reinforcement Learning Through Evolving Neural Network Topologies". Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2002).
  • Kenneth O. Stanley, Bobby D. Bryant, and Risto Miikkulainen (2003). "Evolving Adaptive Neural Networks with and without Adaptive Synapses". Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC-2003).
  • Colin Green (2004). "Phased Searching with NEAT: Alternating Between Complexification And Simplification".
  • Kenneth O. Stanley, Ryan Cornelius, Risto Miikkulainen, Thomas D’Silva, and Aliza Gold (2005). "Real-Time Learning in the NERO Video Game". Proceedings of the Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2005) Demo Papers.
  • Matthew E. Taylor, Shimon Whiteson, and Peter Stone (2006). "Comparing Evolutionary and Temporal Difference Methods in a Reinforcement Learning Domain". GECCO 2006: Proceedings of the Genetic and Evolutionary Computation Conference.
  • Shimon Whiteson and Daniel Whiteson (2007). "Stochastic Optimization for Collision Selection in High Energy Physics". IAAI 2007: Proceedings of the Nineteenth Annual Innovative Applications of Artificial Intelligence Conference.
 