The game of Go has a high branching factor that defeats the tree-search approach used in computer chess, and long-range spatiotemporal interactions that make position evaluation extremely difficult. Development of conventional Go programs is hampered by their knowledge-intensive nature. We demonstrate a viable alternative by training neural networks to evaluate Go positions via temporal difference (TD) learning. Our approach is based on neural network architectures that reflect the spatial organization of both input and reinforcement signals on the Go board, and on training protocols that provide exposure to competent (though unlabelled) play. These techniques yield far better performance than undifferentiated networks trained by self-play alone. Within 3000 games of 9×9 Go, a network with fewer than 500 weights learned a position evaluation function superior to that of a commercial Go program.
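The core of the TD approach described above is to nudge the network's evaluation of each position toward its evaluation of the successor position, with the game's outcome supplying the only external reward. As a minimal sketch (not the paper's actual architecture or feature encoding), the update for a linear evaluator over a hypothetical one-feature-per-intersection board representation looks like this:

```python
import random

BOARD_CELLS = 9 * 9  # 9x9 board; one binary feature per intersection (illustrative encoding)

def evaluate(weights, features):
    """Linear position evaluation: dot product of weights and board features."""
    return sum(w * f for w, f in zip(weights, features))

def td0_update(weights, features_t, features_t1, reward,
               alpha=0.01, gamma=1.0, terminal=False):
    """One TD(0) step: move V(s_t) toward reward + gamma * V(s_{t+1})."""
    v_t = evaluate(weights, features_t)
    v_t1 = 0.0 if terminal else evaluate(weights, features_t1)
    delta = reward + gamma * v_t1 - v_t  # TD error
    # For a linear V, the gradient w.r.t. each weight is just the feature value.
    for i, f in enumerate(features_t):
        weights[i] += alpha * delta * f
    return delta

random.seed(0)
weights = [0.0] * BOARD_CELLS
# Toy "game": random feature vectors standing in for successive board positions,
# with reward +1 delivered only at the terminal transition (a win).
episode = [[random.choice([0.0, 1.0]) for _ in range(BOARD_CELLS)]
           for _ in range(5)]
for t in range(len(episode) - 1):
    terminal = (t == len(episode) - 2)
    r = 1.0 if terminal else 0.0
    td0_update(weights, episode[t], episode[t + 1], r, terminal=terminal)
```

In the paper's setting the linear evaluator is replaced by a network whose weight sharing reflects the board's spatial symmetries, and the reinforcement signal is delivered locally per point of territory rather than as a single scalar, but the temporal-difference bootstrap itself is the same.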