Formal Theories for Reasoning Agents
The term agent usually describes computer systems or programs that are capable of independent action and rational behaviour in an open and often unpredictable environment. Agents therefore tend to be very complex systems, and explaining and predicting their behaviour is not an easy task. We consequently tend to ascribe to them mental attitudes, such as knowledge, beliefs, desires and obligations, that are usually attributed to humans.
It is assumed that the agent has a "mind" and has goals or desires which are based upon its view of the world and the information it possesses, and that it will perform actions that lead it to the achievement of its goals (the principle of rationality). Using mental notions such as knowledge and beliefs (the intentional stance), we can try to formulate theories that describe what an agent is, what the desired properties of an agent are, and how these properties and the agent's reasoning can be represented in a precise formal language. However, formalizing axiomatic theories of agents is a non-trivial task.
This project is concerned with formalizing axiomatic theories of reasoning agents that can be described as having an informational and a motivational part. The informational part of an agent consists of its knowledge and beliefs, and the motivational part of its desires and intentions. The agents also have self-referential capabilities. Introducing self-reference into axiomatic theories, however, can easily lead to paradoxes and inconsistencies, and this is one issue that requires attention. A number of further issues arise when considering the dynamics of, and interrelationships between, the four notions of knowledge, belief, intention and desire. By adopting different relations between them, different types of agents can be described, which may turn out to be suitable for different kinds of applications.
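To make the idea of interrelationships between the four notions concrete, the following is a minimal illustrative sketch of the kind of interaction axioms such a theory might include, written with modal operators K (knowledge), B (belief), D (desire) and I (intention). These particular axioms are common choices in the agent-logic literature and are given here only as examples, not as the axioms actually adopted by this project:

\begin{align*}
K\varphi &\rightarrow \varphi        && \text{(T: knowledge is veridical)}\\
K\varphi &\rightarrow B\varphi       && \text{(knowledge implies belief)}\\
B\varphi &\rightarrow \neg B\neg\varphi && \text{(D: beliefs are mutually consistent)}\\
I\varphi &\rightarrow D\varphi       && \text{(intentions are backed by desires)}
\end{align*}

Varying which of these axioms are kept or dropped (for example, allowing intentions that are not desired, or beliefs that contradict knowledge) is one way of characterizing the different agent types mentioned above.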
Tuesday, 26 October 2004