IEEE CIG 2005 Tutorials

The tutorials are scheduled for the afternoon of Sunday, April 3. All tutorials are included in the conference registration fee.



Particle Swarm Optimisation for Learning Game Strategies

Andries P. Engelbrecht
University of Pretoria

This tutorial presents a coevolutionary particle swarm optimization (PSO) approach to evolving game strategies from zero knowledge. The approach borrows from the coevolutionary method that Chellapilla and Fogel used to train checkers agents: PSO trains a swarm of neural networks, in a coevolutionary manner, to approximate the evaluation function applied to the leaf nodes of a game tree. We will show how the approach can be used to train game agents for checkers, Bao, a probabilistic version of 3D tic-tac-toe, and the iterated prisoner's dilemma. We will also discuss different performance measures for evaluating the trained agents against random-playing agents.
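
For readers unfamiliar with the method, the sketch below illustrates the canonical PSO update at the heart of the approach. It is a minimal illustration written for this page, not the presenter's code: the function name, parameter names, and constants are all assumptions. Each particle is taken to be a flattened neural-network weight vector whose personal and global bests would, in the coevolutionary setting, be decided by games played against other members of the swarm.

    import numpy as np

    # Minimal, hypothetical PSO update: one synchronous step over the swarm.
    # positions  -- (n_particles, n_weights) current NN weight vectors
    # velocities -- same shape, current velocities
    # pbest      -- each particle's best position found so far
    # gbest      -- best position found by any particle (broadcasts)
    def pso_step(positions, velocities, pbest, gbest,
                 w=0.72, c1=1.49, c2=1.49):
        r1 = np.random.rand(*positions.shape)  # per-dimension random factors
        r2 = np.random.rand(*positions.shape)
        velocities = (w * velocities
                      + c1 * r1 * (pbest - positions)   # pull toward own best
                      + c2 * r2 * (gbest - positions))  # pull toward swarm best
        return positions + velocities, velocities

The inertia and acceleration constants shown are widely used defaults from the PSO literature; the tutorial's own settings may differ.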


Coevolving game strategies: How to win and how to lose

Evan J. Hughes
Cranfield University

This tutorial describes the application of coevolution to two-player games in order to discover effective value functions. It covers the requirements for coevolving value functions and typical coevolutionary algorithm structures, with an emphasis on implementation issues. The main case study examines the effects of different value-function structures for the game of checkers, with examples of structures that have been used for other games. The tutorial will first examine value-function structures that look very promising initially but prove extremely difficult to evolve (how to lose), and will then focus on structures that work, explaining why they work (how to evolve to win).
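
As a toy illustration of the kind of loop the tutorial is concerned with (our sketch, not tutorial material), the code below shows the defining property of coevolution: fitness is relative, computed only from games against other members of the population. Here play_game and mutate are caller-supplied placeholders; play_game(a, b) is assumed to return +1, 0, or -1 from a's perspective, and mutate(p) to return a perturbed copy of a candidate value function.

    import random

    # Hypothetical coevolutionary loop for evolving value functions.
    def coevolve(population, play_game, mutate,
                 generations=100, games_per_eval=5):
        for _ in range(generations):
            # Score each candidate only against randomly drawn peers:
            # there is no external yardstick, so progress is measured
            # against a moving target.
            scores = [sum(play_game(p, opp)
                          for opp in random.sample(population, games_per_eval))
                      for p in population]
            ranked = [p for _, p in sorted(zip(scores, population),
                                           key=lambda t: t[0], reverse=True)]
            survivors = ranked[:len(ranked) // 2]
            # Refill the population with mutated copies of the survivors.
            population = survivors + [mutate(p) for p in survivors]
        return population

Because fitness is relative, a value-function structure can appear to improve against its peers while remaining weak in absolute terms, which is part of what makes the "how to lose" cases instructive.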



Temporal Difference Learning for Game Strategy Acquisition

Thomas P. Runarsson
University of Iceland

This tutorial covers the main aspects of applying temporal difference learning (TDL) to acquire value functions for two-player strategy games. While the most famous application of TDL is TD-Gammon, TDL can also learn value functions efficiently for many other games. We will study how TDL can be used with both neural-network and table-based value functions, using tic-tac-toe, Othello, and Go as case studies. C code for TD Othello will be supplied as a complete example of how to apply TDL in practice.
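
As a rough sketch of the core update (our illustration, not the C code the tutorial will supply), the tabular TD(0) rule below nudges each visited state's value toward its successor's value, and the last state's value toward the actual game outcome. It assumes an undiscounted game with a single reward at the end; all names and the learning rate are hypothetical.

    # Hypothetical tabular TD(0) update from one self-play game.
    # values     -- dict: hashable state -> estimated value
    # trajectory -- states visited during one game, in order
    # outcome    -- final reward from the learner's side (+1 win, 0 draw, -1 loss)
    def td0_update(values, trajectory, outcome, alpha=0.1):
        for i, state in enumerate(trajectory):
            if i + 1 < len(trajectory):
                target = values.get(trajectory[i + 1], 0.0)  # bootstrap on successor
            else:
                target = outcome  # last state: learn from the real result
            v = values.get(state, 0.0)
            values[state] = v + alpha * (target - v)
        return values

A neural-network version, in the spirit of TD-Gammon, replaces the table lookup with a forward pass and the in-place assignment with a gradient step on the same TD error.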