The Consciousness of CRONOS
Within the project, we are approaching the problem of machine consciousness by two distinct routes; both use Cronos and Simnos, but in very different ways. David Gamez has focused on the issue of deciding whether a machine might be conscious. His analysis has led him to develop some new technical methods for the assessment of machine consciousness, and he is currently implementing SpikeStream, a large spiking neural network simulation for which Simnos will provide the development environment, and Cronos will serve as the final testbed. See below for a description of David's approach. Owen Holland has explored a rather different strategy based on the fact that an intelligent agent needs to contain an internal model of itself capable of interacting with an internal model of the world. In Owen's work, which has some similarities with Thomas Metzinger's philosophical views, Simnos will provide the basis for Cronos's internal model, and if the theory is correct, it is Simnos that will become conscious rather than Cronos. See here for a more detailed description of Owen's approach.
Once CRONOS, SIMNOS and SpikeStream have been completed, the final stage of our project will be to examine the system for signs of consciousness and describe its phenomenology. Over the last two years we have been developing an approach that attempts to answer both the a priori question about whether robots are capable of conscious states, and the empirical questions about their degree of consciousness at any point in time and the contents of this consciousness. Our approach to answering these questions will now be covered in a little more detail.
One of the most common questions that people ask about a purportedly conscious artificial system is: "It is behaving in an apparently purposive and even conscious manner, but is it really conscious?" The discussion then shifts to an examination of the type of architecture that is used to create the system's behaviour. If the behaviour is produced by the population of China communicating with radios and satellites or through some arbitrary manipulation of symbols, then people are likely to say that the system is not conscious and has just a surface appearance of consciousness. On the other hand, if the system is using biological neurons to produce its behaviour, then people are much more likely to attribute real consciousness to it.
It might be thought that we will be able to attribute consciousness in a more systematic manner once we have worked out what it is about our biology that makes us conscious - our proteins, neurons, functions or representations, for example. Once we have done this, we can look inside the robot to see whether these consciousness-producing properties are present. At the very least we might hope to find systematic correlates of consciousness in the brain that reliably indicate phenomenal states. Unfortunately, as Moor and Prinz point out, there are some potentially irresolvable difficulties with empirically separating out the correlates of consciousness, which are likely to prevent us from making any progress in this direction; we may therefore never be able to say with certainty what needs to be inside a machine to make it conscious.
To address these difficulties, we have developed an ordinal probability scale that enables us to rank different machine architectures according to the likelihood that they are phenomenally conscious. This systematisation of our intuitions enables us to compare the possibility of consciousness in CRONOS with the likelihood of consciousness in other non-human systems. More information about this scale can be found in a recent paper. The ordinal probability scale can be viewed at www.syntheticphenomenology.net.
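As a purely illustrative sketch (the actual scale and its categories are defined in the paper cited above), an ordinal scale of this kind can be thought of as a ranking that supports comparisons between architectures, but not arithmetic on the "distances" between ranks. The architecture names and ordering below are hypothetical examples, not the project's published scale:

```python
# Hypothetical sketch of an ordinal probability scale for machine
# consciousness. The categories and their ordering are illustrative
# placeholders, not the scale actually published by the project.
RANKS = {
    "arbitrary symbol manipulation": 1,   # ranked least likely to be conscious
    "simulated spiking network": 2,
    "biological neurons": 3,              # ranked most likely to be conscious
}

def more_likely_conscious(arch_a: str, arch_b: str) -> bool:
    """True if arch_a ranks above arch_b on the ordinal scale."""
    return RANKS[arch_a] > RANKS[arch_b]

# Ordinal comparison: we can say one rank is higher than another,
# but not "how much" higher.
print(more_likely_conscious("biological neurons", "arbitrary symbol manipulation"))
```

The point of the ordinal structure is that only the ordering of the ranks is meaningful; the numeric labels carry no quantitative interpretation.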
Once the a priori question about consciousness in CRONOS has been answered, there remains the empirical question of how much consciousness the robot is experiencing on a moment-to-moment basis, and what the contents of this consciousness are. Our solution is to use three XML representations to describe the system's structure, its response to stimuli from the environment, and its mental content at any point in time. These make it possible to analyse the system for phenomenal states using theories of consciousness, such as Aleksander's axioms, Metzinger's constraints, Damasio's core consciousness and Tononi's phi. This avoids many of the problems with describing non-conceptual states and it enables systematic comparisons to be made between the mental content of CRONOS and that of other robots and humans. More information about the XML approach to synthetic phenomenology can be found here.
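To give a flavour of what one of these representations might look like, the fragment below builds a toy XML description of a system's structure. The element and attribute names are invented for illustration; they are not the project's actual schemas:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: a minimal XML description of part of a
# system's structure. All tag and attribute names here are hypothetical,
# not the schemas used in the CRONOS project.
system = ET.Element("system", name="CRONOS")
layer = ET.SubElement(system, "layer", id="visual")
ET.SubElement(layer, "neuron", id="n1", type="spiking")
ET.SubElement(layer, "neuron", id="n2", type="spiking")
ET.SubElement(system, "connection", source="n1", target="n2", weight="0.8")

xml_text = ET.tostring(system, encoding="unicode")
print(xml_text)
```

Serialising structure, stimuli and mental content into a common machine-readable format like this is what allows one system's states to be compared systematically with another's.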
The website is maintained by Richard Newcombe.