IEEE CIG 2005 Keynote Talks
Four keynote talks have been arranged:
Professor Jordan B.
Dynamical and Evolutionary Machine Organization (DEMO) Lab
Computer Science Department
For the past decade my students and I have worked on coevolutionary learning,
both in theory and in applications such as learning game strategies in
Tic-Tac-Toe and Backgammon, solving problems like sorting networks and
cellular automata rules, and designing robot bodies and brains.
Coevolution tries to formalize a computational "arms race" which would lead to
the emergence of sophisticated design WITHOUT an intelligent designer, or the
designer's fingerprints left in the choice of data representations and fitness
function.
Coevolution often takes the shape of a game tournament where the players who do
well replicate (with mutation) faster than the losers. The fitness function,
rather than being absolute, is thus relative to the current population. We have
had successes, but we find that the competitive dynamics often lead to
winner-take-all equilibria, boom-and-bust cycles of memory loss, and mediocre
stable states in which an oligarchy arises that survives by excluding
innovation rather than embracing it. Many researchers have proposed
algorithmic methods for overcoming these limitations, involving diversity
maintenance, memory for elite players, and so forth, but something is wrong
if we still lack a convincing mathematical or computational demonstration
that competition without central government can lead to sustained innovation.
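As a toy illustration of relative fitness and replication with mutation (a minimal sketch, not any of the systems mentioned in the talk), a coevolutionary tournament loop might look like this, with each "player" reduced to a single number and winning a bout meaning having the larger value:

```python
import random

def coevolve(pop_size=20, generations=30, seed=0):
    """Toy coevolutionary tournament: fitness is the number of bouts won
    against the *current* population (relative, not absolute), and the
    top half replicates with Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # a player is one number
    best_history = []
    for _ in range(generations):
        # Relative fitness: wins in a round-robin against everyone else.
        fitness = [sum(a > b for b in pop) for a in pop]
        ranked = sorted(zip(fitness, pop), reverse=True)
        winners = [p for _, p in ranked[: pop_size // 2]]
        # Winners survive and also replicate with mutation; losers die.
        pop = winners + [p + rng.gauss(0, 0.1) for p in winners]
        best_history.append(max(pop))
    return pop, best_history
```

Because the current best always survives selection, the best value never decreases; yet nothing in such a loop by itself guarantees sustained innovation once the population converges, which is exactly the kind of pathology the talk describes.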
Is there a missing principle, a different mechanism design in which
self-interested players can optimize their own utility, yet together the
population keeps improving at the game? If so, and if we discover this in the
realm of computational games, would it transfer to human social organization?
Creating Intelligent Agents through Neuroevolution
The University of Texas at Austin
The main difficulty in creating artificial agents is that intelligent behavior
is hard to describe. Rules and automata can be used to specify only the most
basic behaviors, and feedback for learning is sparse and nonspecific.
Intelligent behavior will therefore need to be discovered through interaction
with the environment, often through coevolution with other agents.
Neuroevolution, i.e. constructing neural network agents through evolutionary
methods, has recently shown much promise in such learning tasks. Based on sparse
feedback, complex behaviors can be discovered for single agents and for teams of
agents, even in real time. In this talk I will review the recent advances in
neuroevolution methods and their applications to various game domains such as
Othello, Go, robotic soccer, car racing, and video games.
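As a hedged sketch of the core idea (evolving network weights from a coarse error signal; this is not any specific method from the talk, and real systems such as NEAT also evolve the topology and use populations), a (1+1) evolution of a tiny fixed-topology network on XOR might look like this:

```python
import math
import random

def forward(w, x):
    """2-2-1 tanh network; w is a flat list of 9 weights and biases."""
    h = [math.tanh(w[2 * i] * x[0] + w[2 * i + 1] * x[1] + w[4 + i])
         for i in range(2)]
    return math.tanh(w[6] * h[0] + w[7] * h[1] + w[8])

def neuroevolve(steps=2000, sigma=0.3, seed=0):
    """(1+1) evolution: mutate all weights, keep the child whenever it
    is no worse on the task (XOR with targets +/-0.9)."""
    cases = [((0, 0), -0.9), ((0, 1), 0.9), ((1, 0), 0.9), ((1, 1), -0.9)]
    rng = random.Random(seed)
    error = lambda w: sum((forward(w, x) - t) ** 2 for x, t in cases)
    best = [rng.gauss(0, 1) for _ in range(9)]
    best_err, history = error(best), []
    for _ in range(steps):
        child = [wi + rng.gauss(0, sigma) for wi in best]
        child_err = error(child)
        if child_err <= best_err:  # accept equal: neutral drift
            best, best_err = child, child_err
        history.append(best_err)
    return best, history
```

Because children are only accepted when no worse than the parent, the recorded error is monotonically non-increasing; the population-based, topology-evolving methods surveyed in the talk replace this single-parent loop with far richer search.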
Challenges in Computer Go
Department of Computer Science
University of Alberta
Computer Go has been described as the "final frontier" of
research in classical board games. The game is difficult for computers
since no satisfactory evaluation function has been found yet. Go shares
this aspect with many real-life decision-making problems, and is therefore an
ideal testbed for studying such problems. This talk discusses the
challenges of Computer Go on three levels: 1. incremental work that can be done
to improve current Go programs, 2. strategies for the next decade, and 3.
long-term perspectives.
Opponent Modelling and Commercial Games
Professor Jaap van den Herik
Department of Computer Science
To play a game well a player needs to understand the game. To defeat an
opponent, it may be sufficient to understand the opponent's weak spots and to
be able to exploit them. In human practice, both elements (knowing the game
and knowing the opponent) play an important role. This article focuses on
opponent modelling independent of any game. So, the domain of interest is a
collection of two-person games, multi-person games, and commercial games. The
emphasis is on types and roles of opponent models, such as speculation,
tutoring, training, and mimicking characters. Various implementations are
given. Suggestions for learning the opponent models are described, and their
realization is illustrated by opponent models in game-tree search. We then
transfer these techniques to commercial games. Here it is crucial for a
successful opponent model that the changes of the opponent's reactions over
time are adequately dealt with.
This is done by dynamic scripting, an unsupervised online learning technique
for games. Our conclusions are (1) that opponent modelling has a wealth of
techniques that are waiting for implementation in actual commercial games,
but (2) that the games publishers are reluctant to incorporate these
techniques since they have no definitive opinion on the success of a program
that outclasses human beings in strength and creativity, and (3) that game AI
has an entertainment factor that is too multifaceted to grasp in reasonable
time.
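The dynamic scripting idea mentioned above can be sketched in a simplified form: each opponent type keeps a weighted rulebase, a script is assembled by weight-proportional selection of distinct rules, and after each encounter the weights of the rules used are rewarded or penalised. (This sketch omits details of the published technique, such as redistributing weight so the total stays constant; the rule names are illustrative only.)

```python
import random

def select_script(rulebase, weights, size, rng):
    """Assemble a script of `size` distinct rules, drawing each rule
    with probability proportional to its current weight."""
    pool = list(range(len(rulebase)))
    script = []
    for _ in range(size):
        total = sum(weights[i] for i in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i in pool:
            acc += weights[i]
            if acc >= r:
                script.append(i)
                pool.remove(i)
                break
    return script

def update_weights(weights, script, won, reward=3.0, penalty=1.0,
                   w_min=0.5, w_max=20.0):
    """Reward the rules of a winning script, penalise a losing one.
    Clipping keeps every rule selectable, so the model can keep
    adapting as the opponent's reactions change over time."""
    delta = reward if won else -penalty
    for i in script:
        weights[i] = min(w_max, max(w_min, weights[i] + delta))
    return weights
```

Because losing rules are only down-weighted, never removed, the script generator keeps exploring, which is what lets the opponent model track a human player whose tactics drift during play.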
(proceedings paper co-authored with H.H.L.M. Donkers, P.H.M.