Keynote speech I: Co-evolutionary learning in game-playing

Co-evolution has been widely used for the automatic learning of game-playing strategies, e.g., for the iterated prisoner's dilemma, backgammon, and chess. It is a particularly interesting form of learning because it learns through interactions alone, without any explicit target output information: the correct choices or moves are never provided as teacher signals during learning. Yet co-evolutionary learning is still able to learn high-performing game-playing strategies, at least in comparison with average human performance. Interestingly, research on co-evolutionary learning has not focused on its generalisation ability, in sharp contrast to machine learning in general, where generalisation is at the heart of learning in any form. This talk presents one of the few generic frameworks available for measuring the generalisation of co-evolutionary learning. It enables us to discuss and study the generalisation of different co-evolutionary algorithms more objectively and quantitatively and, as a result, to draw more appropriate conclusions about the ability of learned game-playing strategies to deal with entirely new and unseen environments, including opponents. The iterated prisoner's dilemma game will be used as an example in this talk to illustrate our theoretical framework and the performance improvements that can be gained by following this more principled approach to co-evolutionary learning.
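To make the setting concrete, the ideas above can be sketched as a toy co-evolutionary loop for the iterated prisoner's dilemma. This is an illustrative sketch only, not the framework from the talk: the choice of memory-one strategies, the standard 3/0/5/1 payoff matrix, the truncation selection scheme, and all function names are assumptions made for the example. Fitness comes solely from interactions within the population (no teacher signal), and generalisation is estimated as the average payoff against randomly sampled, previously unseen opponents.

```python
import random

# Standard IPD payoff matrix: (my move, opponent move) -> my payoff.
# Moves are encoded as 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def random_strategy():
    # Memory-one strategy: first move, plus a response to each of the
    # four (my last move, opponent's last move) combinations.
    return [random.randint(0, 1) for _ in range(5)]

def move(strat, my_last, opp_last):
    if my_last is None:          # first round: no history yet
        return strat[0]
    return strat[1 + 2 * my_last + opp_last]

def play(a, b, rounds=50):
    # Play an iterated game and return both players' total payoffs.
    ma = mb = None
    score_a = score_b = 0
    for _ in range(rounds):
        na, nb = move(a, ma, mb), move(b, mb, ma)
        score_a += PAYOFF[(na, nb)]
        score_b += PAYOFF[(nb, na)]
        ma, mb = na, nb
    return score_a, score_b

def mutate(strat, rate=0.2):
    # Flip each entry of the strategy table with probability `rate`.
    return [1 - g if random.random() < rate else g for g in strat]

def coevolve(pop_size=10, generations=50):
    # Co-evolutionary learning: fitness is defined only by interactions
    # within the current population -- no external target output.
    pop = [random_strategy() for _ in range(pop_size)]
    elite = pop[: pop_size // 2]
    for _ in range(generations):
        fitness = [0.0] * pop_size
        for i in range(pop_size):
            for j in range(pop_size):
                if i != j:
                    sa, _ = play(pop[i], pop[j])
                    fitness[i] += sa
        ranked = sorted(range(pop_size), key=lambda i: fitness[i], reverse=True)
        elite = [pop[i] for i in ranked[: pop_size // 2]]
        pop = elite + [mutate(s) for s in elite]
    return elite[0]  # best strategy of the final generation

def generalisation(strat, n_test=200, rounds=50):
    # Estimate generalisation as the mean payoff against randomly
    # sampled opponents never seen during co-evolution.
    total = 0
    for _ in range(n_test):
        s, _ = play(strat, random_strategy(), rounds=rounds)
        total += s
    return total / n_test
```

Evaluating `generalisation(coevolve())` against a large random opponent sample gives a quantitative, strategy-independent score, in the spirit of the framework described in the talk; the real framework is considerably more general than this one-population, one-payoff-matrix sketch.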