Human Centred Robotics (HCR) concerns the development of various kinds of embedded systems and intelligent robots that will be used in environments shared with humans. These systems and robots are mobile, autonomous, interactive and intelligent, and will serve as useful assistants and companions for people of different ages, in different situations, activities and environments, in order to improve quality of life.
The Essex HCR research group currently consists of academic staff, research staff, PhD research students, and short-term visiting scholars. The group aims to:
· Promote the integration of research, design and strategy to deliver cutting-edge science and technology that is stimulating and challenging.
· Focus on the key challenges of future intelligent systems and robots, namely safe operation and flexible human-robot interaction.
· Create a promising research environment, with a clear vision, talented researchers and fascinating projects, so that we remain creative, well motivated and imaginative.
The HCR Group believes that Human Centred Robotics offers a proving ground where the most advanced ideas and designs in intelligent systems, autonomous robots and human-robot interfaces can be tested and put into operation. In other words, leading technology will emerge here and later transfer to many other application areas of intelligent systems and robots, such as entertainment, healthcare, sport, rescue and service.
Entertainment Robotics - Robotic fish powered by Gumstix PC and PIC
In nature, fish have astonishing swimming abilities, refined over thousands of years of evolution. It is well known that the tuna swims with high speed and high efficiency, the pike accelerates in a flash, and the eel can swim skilfully into a narrow hole. Such astonishing swimming ability inspires us to improve the performance of man-made aquatic robotic systems, namely robotic fish. Instead of the conventional rotary propeller used in ships or underwater vehicles, the undulatory movement of the body provides the main propulsion of a robotic fish. Observation of real fish shows that this kind of propulsion is quieter, more efficient and more manoeuvrable than propeller-based propulsion. The aim of our project is to design and build autonomous robotic fish that are able to react to their environment and navigate towards a charging station. In other words, they should exhibit fish-like swimming behaviour and autonomous navigation, with a cartoon-like appearance that does not exist in the real world.
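The undulatory propulsion described above is often approximated as a travelling body wave whose amplitude, wavelength and frequency set the swimming gait. The sketch below is a minimal, hypothetical illustration of that idea (it is not the controller used in the Essex robotic fish): the body is discretised into a few actuated joints and each joint angle is sampled from a sinusoidal wave travelling from head to tail.

```python
import math

def body_wave_angles(num_joints, t, amplitude=0.5, wave_number=2 * math.pi,
                     frequency=1.0, body_length=1.0):
    """Joint angles (radians) sampled from a travelling body wave
    theta(x, t) = A * sin(k*x - 2*pi*f*t), a simplified undulation model.
    All parameter values here are illustrative, not measured from a real fish."""
    angles = []
    for i in range(num_joints):
        # position of joint i along the body, from just behind the head to the tail
        x = (i + 1) * body_length / num_joints
        angles.append(amplitude * math.sin(wave_number * x - 2 * math.pi * frequency * t))
    return angles
```

Increasing the frequency or amplitude raises thrust, while the wave number controls how many wavelengths fit along the body, trading speed against manoeuvrability.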
· This project is funded by the London Aquarium Limited, £150,000
· The new carp robotic fish at Essex funded by the London Aquarium Ltd., £43,000
Here is a video showing the Robotic Fish Project at Essex.
The main aim of this joint project is to explore and develop the advanced technology needed for a high-performance, low-cost RoboChair which enables the elderly and disabled to gain the mobility necessary to live independently and improve their quality of life in society. The RoboChair should have a user-friendly man-machine interface and the ability to avoid collisions and plan paths. It will be equipped with a new vision system and a wireless communication system so that a carer or relative can monitor and tele-operate it when necessary.
The project is focused on two levels of complexity: one is an intelligent control system to achieve good control stability, fast image processing capability and autonomous navigation. The other is an interactive user interface for voice control, emotion and gesture detection, as well as a 3G mobile phone for carers or relatives to monitor and communicate remotely. The project is jointly funded by the Royal Society and the Chinese Academy of Sciences.
This project, "Intelligent RoboChair: Improve Quality of Life for the Elderly and Disabled", is jointly funded by the Royal Society (£17,910) and the Chinese Academy of Sciences (£33,330), 1/5/2004 - 30/4/2007.
Here is a video showing head-gesture-based control of an intelligent wheelchair at Essex.
Brain-actuated control is a joint EPSRC project carried out at the University of Essex and the University of Oxford. This project aims to develop a novel adaptive and asynchronous brain-computer interface (BCI) system for brain-actuated control of intelligent systems and robots. Recent advances in science and technology have shed light on the possibility of fusing the human brain with intelligent machines to carry out challenging tasks that state-of-the-art autonomous machines cannot undertake. BCI is one of the key technologies to make this possible. A BCI system detects and analyses brain waves, e.g., electroencephalography (EEG) signals, in order to understand a user's mental states, and then translates those mental states into commands for communicating with and controlling computers, robots, and other systems.
Based on our previous research in BCI and related areas, we believe that it is now very timely to develop adaptive and asynchronous BCI systems that not only have the advantages of using asynchronous protocols, such as high information transfer rate and natural operation mode, but also benefit from adaptive learning so as to improve the system's accuracy and robustness. Apart from adaptive learning, in order to achieve high accuracy and robustness, this proposed programme will investigate novel effective indicators for onset detection and optimal timing schemes for asynchronous mental state classification, discover or invent new feature spaces on which it would be easier to classify EEG patterns, and develop new methods for increasing the number of control commands mapped from a limited number of mental states. The methods developed will be assessed through extensive experimentation with real-time brain-actuated control of an intelligent wheelchair and other devices.
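To make the classification step concrete, a common baseline (not the project's actual adaptive pipeline, which is far more sophisticated) extracts band-power features from an EEG channel and thresholds them, exploiting the fact that motor intent suppresses mu-band (8-12 Hz) power. The band limits and threshold below are illustrative assumptions.

```python
import numpy as np

def bandpower(eeg, fs, band):
    """Average power of a single EEG channel within a frequency band, via FFT."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def classify_mental_state(eeg, fs, threshold):
    """Toy synchronous classifier: strong mu-band power suggests rest,
    suppressed mu-band power suggests motor intent. An asynchronous BCI,
    as proposed in this project, would additionally detect command onsets."""
    return "rest" if bandpower(eeg, fs, (8.0, 12.0)) > threshold else "intent"
```

A real system would replace the single threshold with an adaptively retrained classifier over many channels, which is precisely where the project's adaptive-learning contribution lies.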
This project is funded by EPSRC, EP/D030552/1, £261,939, 03/01/2006 – 02/01/2009.
Here is a video showing brain-actuated control of a mobile robot.
In this project, our research focuses on the interaction between a service robot and the people within a public environment, using speech conversation as the primary communication tool. Due to the nature of human vocal communication, it is also necessary to implement a “body language” reading system – the ability of the service robot to tell the emotion of a person speaking, and to recognise any hand gestures that may occur during contact. The vocal communication method requires various technologies, such as Speech Recognition, Natural Language Processing, Conversational Algorithms and Speech Synthesis. As the robot may work in a noisy environment, a set of sound filtering algorithms needs to be developed, along with a pattern recognition system for identifying phonemes in speech. A non-precise pattern recognition system, such as a neural network, can be employed within the system to produce a speaker-independent recognition system.
Studies into sentence comprehension will allow a Natural Language Processing system to be created, allowing the system to identify topics of conversation, to better allow the Conversational Algorithms to create replies in line with the speaker’s questions, conversation and commands. Speech Synthesis can then be used to generate a vocal reply from the internally stored variables in memory, finalising the vocal communication engine. Emotion recognition can be achieved using vision-based techniques such as edge detection, statistical analysis and template matching, with a pre-stored database of rules. Once the various techniques have been executed, the database can be queried to identify the emotion, given the conditions of the facial image. Hand gestures can also be identified using the same techniques, as well as motion analysis. As the system may be used in a public area where communication between different cultures and accents is commonplace, the system must be able to adapt and learn from previous experiences. Allowing the system to learn phonemes spoken by people from various cultural backgrounds will help increase the accuracy of the speech recognition system.
· This project is funded by EPSRC CASE and the London Aquarium Limited, £48,000.
Here is a video showing the Atlas guide robot in action at London County Hall.
Being able to understand the environment (usually time-varying and unknown a priori) is an essential prerequisite for intelligent/autonomous systems such as intelligent mobile robots. Environmental information can be acquired through various sensors, but the raw information from sensors is often noisy, imprecise, incomplete, and even superficial. Obtaining an accurate internal representation of the environment from raw sensor data, that is, a digital map with accurate positions, headings and identities of the objects in the environment, is critical but very difficult in the development of robotic systems. The major challenge comes from the uncertainty of the environment and the insufficiency of sensors.
There are basically two categories of techniques for handling uncertainties: adaptive and robust. Adaptive techniques exploit a posteriori uncertainty information that is "learnt" on-line, whilst robust techniques take advantage of a priori knowledge about the environment and sensors. We are mainly interested in model-based approaches. We are investigating techniques for automatic error detection and error-driven model adaptation or parameter adjustment. We are also developing multisensor data fusion methods and multiple-model approaches, including complex task decomposition, individual model design, and intelligent model switching or fusion.
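As a minimal sketch of the data-fusion idea (one simple building block, not the project's full method): when several independent sensors measure the same quantity with Gaussian noise, weighting each reading by the inverse of its variance yields the minimum-variance combined estimate, and the fused variance is always smaller than any single sensor's.

```python
def fuse(measurements):
    """Fuse independent sensor readings given as (value, variance) pairs by
    inverse-variance weighting: the minimum-variance linear combination
    for independent Gaussian noise."""
    inv_vars = [1.0 / var for _, var in measurements]
    total = sum(inv_vars)
    value = sum(v / var for v, var in measurements) / total
    return value, 1.0 / total  # fused estimate and its (reduced) variance
```

For example, two equally noisy readings of 1.0 and 3.0 fuse to 2.0 with half the variance of either sensor alone.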
This project was partly funded by the Research Promotion Fund of the University of Essex: "Adaptive and robust methods for handling uncertainties in multi-sensor data", £5,780, 2001-2002
This project implements Calvin, a software package initially designed by Valiant Technology to allow secondary school children to program a limited range of ATMEL processors. Calvin enables school students to tackle general control problems, with an emphasis on robotics, particularly mobile robots. The project also involves the development of up to four simple project boards that will permit students to incorporate ATMEL processors into class projects. Some of these may link to Valiant's Tronix products, while others will be standalone designs.
CALVIN: An Interactive Programming Language for Schools, funded by Valiant Technology Limited, £10,000
This is a joint project with partners from the University of Bath, University of Essex, Sheffield Hallam University, University of Ulster and the Stroke Organisation, funded by the EPSRC EQUAL (Extending Quality Life) initiative.
This research will examine the appropriateness and effectiveness of technology to support hospital- or home-based rehabilitation programmes for older people who have sustained a stroke, with the aim of recovering and improving mobility. The system will employ monitoring systems that will provide both therapeutic instruction and support information. This combination of technology and knowledge management will support specific rehabilitation interventions and measure the effectiveness of the resulting actions undertaken by the participant. Information regarding progress will be fed back to the person in an appropriate format (audio/visual) and, where appropriate, to their carers or health-care professionals. The SMART monitoring technology we propose to develop will complement "hands-on" therapy from a trained therapist, with expert support provided through video and audio feedback.
The SMART monitoring technology for home-based rehabilitation, funded by EPSRC GR/S29089/01, £690,674, in collaboration with the University of Bath, Sheffield Hallam University and the University of Ulster at Belfast.
Recent research has shown that brain waves contain useful information about a person's intentions. After a training process, distinctive patterns associated with specific intentions can be detected from brain waves and used to generate commands to control computers and robots. One of the interesting applications of this idea is prosthesis control for the disabled. We are developing methods for brain wave pattern detection and analysis, investigating the maximum number of distinctive patterns available from a person's brain waves. We are also building testbeds and systems for general BCI research. This project is partly supported by a research grant from The Royal Society, and by the Research Promotion Fund and Research Development Fund at the University of Essex.
· Developing keyboard and mouse alternatives for hands-free computing, funded by The Royal Society, £3,500, 2004-2005.
· Building up brain computer interface research at Essex, funded by both RPF and RDF, University of Essex, £6000, 2003-2004.
The long-term goal of this project is to carry out fundamental research on multi-agent/multi-robot cooperation and learning towards real-world applications such as tele-training, tele-manufacturing, tele-repairing, remote surveillance, fire fighting, and distributed service robots for office, hospital and elderly care. Cooperative Internet robots will be a useful test-bed for us to do this. Also this unique equipment can be shared with other researchers and Internet users who are willing to work in the same area.
This research investigates reinforcement learning in multi-robot systems under the framework of stochastic games. It spans a variety of areas, including multi-robot systems, reinforcement learning, behaviour-based robotics, fuzzy logic classifiers, and game theory. The research develops Q-learning algorithms for coordination games, with a better understanding of their theoretical and practical aspects. It will provide new insights into the fundamentals of multi-robot learning: the efficacy of co-learning and the emergence of cooperative behaviours. The outcomes on handling large learning spaces and uncertainties are also expected to extend to learning in general agent-based systems, and the research will lay the foundation for further study in this area and for more complex applications.
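The core difficulty in such coordination games can be shown in a deliberately tiny sketch (an illustrative toy, not the project's algorithms): two independent Q-learners are rewarded only when they pick the same action, so each robot's payoff depends on what the other is simultaneously learning.

```python
import random

def q_learning_coordination(episodes=5000, alpha=0.1, epsilon=0.2, seed=0):
    """Two independent Q-learners in a stateless 2x2 coordination game:
    both receive reward 1 if they choose the same action, 0 otherwise.
    Being stateless, the update needs no discounted next-state term."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[robot][action]
    for _ in range(episodes):
        acts = []
        for r in range(2):
            if rng.random() < epsilon:
                acts.append(rng.randrange(2))                    # explore
            else:
                acts.append(0 if q[r][0] >= q[r][1] else 1)      # exploit
        reward = 1.0 if acts[0] == acts[1] else 0.0
        for r in range(2):
            q[r][acts[r]] += alpha * (reward - q[r][acts[r]])    # Q-update
    return q
```

Even in this toy, each learner faces a non-stationary environment because its partner's policy changes over time, which is exactly why scaling reinforcement learning to multi-robot systems requires the game-theoretic treatment the project pursues.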
This project is funded by EPSRC, GR/S45508/01, £91,545 "Scaling Reinforcement Learning to Multi-Robot Systems" (2003-2005).
A crucial capability for an autonomous mobile robot is the estimation of its position in the real world. It is well known that a mobile robot cannot rely solely on dead-reckoning to determine its position, because dead-reckoning errors are cumulative. Using external sensors to observe useful features of the environment therefore becomes necessary. There are two kinds of environment features that can be used in localisation: artificial beacons placed at known positions in a structured environment, and natural beacons abstracted from the real world. Artificial beacons simplify the localisation process, since the robot's motion is measured relative to pre-placed beacons. Natural beacons enable mobile robots to operate in an unstructured environment, but at a high cost in interpreting sensory data to obtain useful features.
This project investigates how to integrate multiple simple and inexpensive sensors in the localisation process so that a mobile robot can observe both kinds of features in an environment. An extended Kalman filter will be used to fuse the data from multiple sensors. As the mobile robot traverses its environment, it should be able to observe both artificial and natural beacons to continuously update its position and the environment map.
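The predict/update cycle underlying the extended Kalman filter can be sketched in one dimension (a scalar toy, not the project's full EKF, which handles nonlinear motion and observation models over pose and map states): prediction applies an odometry increment and grows the uncertainty, while a beacon observation shrinks it again.

```python
def kalman_predict(x, p, u, q):
    """Prediction step: apply odometry increment u to position estimate x.
    Dead-reckoning noise q accumulates in the variance p, which is exactly
    why beacon observations are needed to bound the error."""
    return x + u, p + q

def kalman_update(x, p, z, r):
    """Measurement update: fuse predicted position x (variance p) with a
    beacon-derived position measurement z (noise variance r)."""
    k = p / (p + r)           # Kalman gain: how much to trust the beacon
    x_new = x + k * (z - x)   # corrected estimate
    p_new = (1.0 - k) * p     # uncertainty shrinks after every observation
    return x_new, p_new
```

Alternating these two steps as the robot drives and sights beacons keeps the position variance bounded, whereas prediction alone would let it grow without limit.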
This project is funded by Robotic Sciences Ltd. (£8,000) and carried out in collaboration with NavTech Electronics Ltd.
The aim of this project is to design a multi-agent architecture for a team of autonomous vehicles or straddle carriers to autonomously load and unload containers in a dockyard. Apart from identifying optimal paths and plans based on a priori data, an important part of the system architecture will be the handling of dynamic scheduling. A decentralised approach will be adopted in the design, and compared with the centralised approach currently adopted in dockyards. Straddle carriers or autonomous vehicles will therefore need to communicate with each other to form cooperative plans, negotiate routes and schedule the use of the loading crane for excessively large or heavy cargo. There is also a clear need for the system architecture to include a low-level reactive control layer for real-time obstacle avoidance.
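One simple way to think about decentralised task allocation of this kind is an auction: each carrier bids its cost for a container and the lowest bidder wins. The sketch below is a hypothetical one-dimensional toy (positions as single numbers, greedy sequential auctions), not the project's architecture, but it shows how assignments can emerge from local bids rather than a central scheduler.

```python
def allocate_containers(carriers, containers):
    """Greedy auction-style allocation sketch. carriers: dict mapping a
    carrier name to its 1-D position; containers: list of 1-D positions.
    Each container goes to the carrier with the lowest bid (distance),
    and the winner's position moves to the container it collected."""
    assignment = {}
    pos = dict(carriers)
    for c in containers:
        winner = min(pos, key=lambda name: abs(pos[name] - c))  # lowest bid wins
        assignment.setdefault(winner, []).append(c)
        pos[winner] = c  # the winning carrier relocates to serve the container
    return assignment
```

A real dockyard system would replace distance with travel-time estimates, rebid when schedules change, and add the crane as a shared resource to negotiate over.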
This project "Agent-based vehicle scheduling for dockyard operation" is funded by EPSRC (£27,800) and GCS Ltd. (£13,200) on a CASE Award (01/10/2000 and 30/09/2003).
This project provides a review of current and future AI and robotic technology and how it might be effectively applied to aid in decommissioning nuclear plants that have been used to process or store nuclear materials. The issues addressed include:
· Types of functionality available in current robot and AI systems.
· Examples of existing systems used in the nuclear industry (e.g. SWARMI) and in related areas such as search and rescue, where robots access environments that are hazardous to humans.
· Potential areas of application of robotic and AI technology to specific phases in nuclear facility decommissioning, based on descriptions of the decommissioning process found in the literature and on the internet.
· Potential future directions in robotics and AI, and how the technology might develop and be applied in the future to nuclear facility decommissioning.
The project is named "Application of AI and Robotics in Hazardous Environments", and is funded by industry, £5,000.
This project lies in an interdisciplinary research area involving robotics, artificial intelligence, optimisation, sensors, and embedded computer systems. Funded by the Royal Society and University RPF awards, we are currently building a firm research platform on which future work on multi-agent systems can be carried out towards many real-world applications. In general, RoboCup (the Robot World Cup) is an international research initiative to foster Robotics and Artificial Intelligence using football as a common task. It shares many characteristics of real football games, which makes the competition very challenging. Its ultimate goal is that "by the mid-21st century, a team of fully autonomous humanoid football robots should win against the World Cup champion team." This is another landmark project aiming to achieve significant advances in science and technology, similar to other landmark projects such as the IBM Deep Blue project. Our Essex research team will work closely with other teams, step by step, towards the realisation of this dream. We welcome anyone who is interested in contributing to this challenging project, especially talented students who wish to pursue research degrees.
This project has been financially supported by Sony, Royal Society (£7,825, G503/21644), RoboCup Committee (£2,500), RPF (£9,850, DDP940), RPF (£9,800, DDPB40), etc.
Designing robot behaviours is one of the fundamental research areas in behaviour-based robotics. It involves the application of advanced control techniques to behaviour control and of computational intelligence techniques to behaviour learning. Advanced control techniques, such as model predictive control, adaptive control and Lyapunov stability control, are used for tracking and parking behaviours; control stability, control constraints and control performance are the main research objectives. Computational intelligence techniques, including fuzzy logic, neural networks, genetic algorithms and reinforcement learning, are used for handling uncertainty and for the automatic acquisition of behaviours. The research concentrates on reinforcement learning or genetic-algorithm learning of fuzzy logic controllers.
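A fuzzy logic controller of the kind whose parameters such learning would tune can be sketched very compactly. The rules, membership functions and output values below are illustrative assumptions, not the project's learned controller: a heading error is fuzzified with triangular membership functions, three rules fire to degrees between 0 and 1, and a weighted average defuzzifies them into a steering command.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(error):
    """Minimal fuzzy controller for a heading-tracking behaviour.
    Rules: error negative -> steer right (-1), error near zero -> go straight (0),
    error positive -> steer left (+1); defuzzified by weighted average."""
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    num = mu_neg * (-1.0) + mu_zero * 0.0 + mu_pos * 1.0
    den = mu_neg + mu_zero + mu_pos
    return num / den if den else 0.0
```

Reinforcement learning or a genetic algorithm would then adjust the membership-function breakpoints and rule outputs from delayed reward, rather than hand-tuning them as done here.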
This project, "Delayed Reward Learning of Fuzzy Logic Controllers for Robot Behaviours", is funded by RPF, £4,111 (2002-2003).
Most industrial robotic manipulators move by following fixed trajectories. However, in many applications the environments and tasks of robotic manipulators change dynamically, which brings about various challenges such as dynamic target positioning, trajectory planning and inverse kinematics control. Without fixed trajectories, the first problem in robotic manipulator control is where to move, and the second problem is how to move. Visual guidance is the most natural method for solving the first problem. However, 3D computer vision is very difficult, especially when there are constraints on camera positions. For the second problem, the main difficulty lies in inverse kinematics and dynamics control. We are investigating effective computer vision methods for target positioning, analytical and computational models for inverse kinematics and dynamics, and possible methods to solve the where-to-move and how-to-move problems together in an integrated manner.
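For the simplest non-trivial case, the inverse kinematics problem has a closed-form answer. The sketch below handles a hypothetical 2-link planar arm (real manipulators with more joints generally need numerical or redundancy-resolving methods, which is part of what this project studies): given a target point, the law of cosines yields the elbow angle, and the shoulder angle follows by geometry.

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a 2-link planar arm with link
    lengths l1, l2: return joint angles (t1, t2) in radians that place the
    end-effector at (x, y). Raises ValueError if the target is unreachable."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)
    if not elbow_up:
        t2 = -t2  # the mirrored elbow configuration also reaches the target
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2
```

The two elbow configurations illustrate why even simple inverse kinematics is not unique, and choosing between such solutions under dynamic constraints is one aspect of the "how to move" problem.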
1) Dr. Adrian Clarke, Reader
2) Prof. John Q. Gan
3) Dr. Dongbing Gu, Reader
4) Prof. Huosheng Hu, Team Leader
5) Prof. Simon Lucas
6) Dr. John Woods, Reader
1) Ian Dukes (October 2008 - present)
2) George Francis (August 2009 - present)
3) Theodoros Theodovidis (November 2009 - present)
4) John Oyekan (October 2008 – present)
5) Bowen Lv (September 2010 – present)
6) Hossein Fared Ghassem Nia (Sept. 2010 – June 2013)
1) Prof. Minrui FEI, School of Automation, Shanghai University, China (July - September 2005)
2) Prof. Kui YUAN, Institute of Automation, Chinese Academy of Sciences, Beijing, China (Sept. – Dec. 2003; Feb. 2005, 2007-2009)
3) Dr. De XU, Institute of Automation, Chinese Academy of Sciences, Beijing, China (May - October 2004)
4) Dr. Yi ZHANG, School of Automation, Chongqing University of Post & Telecommunications, Chongqing, China (June 2004 - May 2005)
5) Prof. Shumei ZHANG, Shijiazhong Computing College, China (November 2005 - November 2006)
6) Dr Yeffry HANDOKO, Universitas Komputer Indonesia (July – October 2008; September - December 2010)
7) Prof. Zhiwu HUANG, Central South University, China (April 2008-March 2009)
8) Prof. Qingjie ZHAO, Beijing Institute of Technology, China (September 2008 – August 2009)
9) Dr Ignacio González ALONSO, Universidad De Oviedo, Spain (May – July 2010)
10) Prof. Qing CHEN, Hunan University of Technology, China (September 2010-September 2011)
11) Dr Bing GUO, Chongqing University, China (November 2010 – November 2011)
PhD research students:
1) James Cannon (October 2009 – present) (EPSRC funded),
Research topic: EMG based control of robotic hands
2) Hossein Fared Ghassem Nia (October 2010 – present)
Research topic: Improving Machine Perception by Using Quantum Brain Theory in Machine Vision
3) John Oyekan (October 2008 – present) (EPSRC funded),
Research topic: Swarm Intelligence for Air Pollution Monitoring
4) Theodoros Theodovidis (October 2005 – present) (Industry funded)
Research topic: Learning algorithms for mobile security robots
5) Ericka Janet Rechy-Ramirez (October 2010 – present)
Research topic: Application of Evolutionary Algorithms to hands-free control of an Intelligent Wheelchair
6) Lai Wei (January 2007 – present) (Self-funded)
Research topic: Integration of EMG and facial features for hands-free control of Intelligent Wheelchair
1) Paul Cardy (October 2004 - December 2006)
2) Dr. Liam Cragg (January 2007 - 2008)
3) George Francis (August 2004 - September 2006)
4) Dr. Dragos Golobovic (October 2003 - September 2005)
5) Dr. Matthew Hunter (July 2003 - July 2005)
6) Rob Knight (October 2003 - Sept. 2005)
7) Dr. Mohammadreza A. Oskoei (Dec. 2008 – Sept. 2010)
8) Prashant Solanki (October 2005 - September 2006)
9) Dr. Huiyu Zhou (March 2004 - July 2006)
10) Dr. Erfu Yang (December 2003 - December 2005)
Former PhD research students:
Bellotto (PhD, October 2004 – September 2008)
Research topic: Visual navigation for museum guide robots
Julian Ryde (PhD, October 2003 -- August 2007) (EPSRC Studentship)
Research topic: Cooperative 3D Mapping & Localisation of Multiple Mobile Robots
LIU (PhD, April 2003 -- Nov. 2007) (Industry funded)
Research topic: Modelling & Online Optimisation of Robotic Fish Behaviours
JIA (October 2004 – July 2010) (ORS Award & University Studentship)
Research topic: Visual navigation and learning of Intelligent Wheel Chairs
Lazarus (October 2003 – September 2008)(Self-funded)
Research topic: GA-based coevolution of a multi-agent game
A. Oskoei (PhD, October 2005 – April 2008) (Self-funded)
Research topic: EMG-Based Control for Electric Powered Wheelchair
Renati Samperio (PhD, October 2003 – September 2008) (Mexican Government)
Research topic: Genetic Programming based evolution of multi-agent behaviours
Jiali SHEN (PhD, October 2003 – September 2007) (ORS award & University Studentship)
Research topic: Visual navigation algorithms in a museum environment
Yaqin TAO (PhD, October 2002 – September 2007) (ORS award &
Research topic: Visual tracking and prediction algorithms for home-based rehabilitation
Antonio Acosta CALDERON (PhD, October 2001 -- Sept. 2005) (Mexican Government
Research topic: Learning by imitation in a team of mobile robots
M CRAGG (PhD, October 2001 -- Sept. 2005) (EPSRC Studentship)
Research topic: A fault tolerant architecture for multiple tele-robots
Dragos GOLUBOVIC (PhD, October 2000 -- Sept. 2004) (University
Research topic: Evolving Walking Behaviours for Sony Legged Robots
HUNTER (PhD, October 1999 - June 2003) (EPSRC Studentship)
Research topic: Building position selection behaviours for simulated soccer agents
LI (PhD, November 1999 -- December 2003) (ORS award & University
Research topic: Visual tracking and prediction algorithms for quadruped walking robots
KOSTIADIS (PhD, October 1998 -- March 2002)(University funded)
Research topic: Learning to cooperate in multi-agent systems
Rosales (PhD, October 2000 -- Sept. 2004) (Mexican Government funded)
Research topic: Modelling and Control in Robotics
Wang (PhD, October 2002 -- 2005) (University funded)
Research topic: Multimedia Database and Multimedia Information Processing
Zhou (PhD, October 2002 -- 2005) (University funded)
Research topic: Complex Process/data Modelling and Interpretation
Jones (PhD, October 2003 -- Sept. 2007) (EPSRC CASE Studentship + Industry)
Research topic: A Fractional FPGA neural network based framework for a facial emotion recognition system
Former MPhil & MSc (by research) students:
Wo TSUI (MPhil, October 1999 -- January 2003) (Self-funded)
Research topic: A Multi-agent Framework for Cooperative Online Robots
DENG (MSc by research, October 1999 -- November 2001)(Self-funded)
Research topic: Embedded Web Servers for Agent-based Intelligent Buildings
Jinting GUO (MSc by research, October 2000 -- October
Research topic: Real-time visual feedback control of multiple football robots
Parkpoom LEKHAVAT (MSc by research, October 2000 -- April 2003)(Self-funded)
Research topic: A Fuzzy Control System for Small-size Football Robots
Lixiang YU (MSc by research, October 1999 -- November
Research topic: A tele-robot training system over the Internet
Quan ZHOU (MSc by research, October 1999 -- November 2001)(Self-funded)
Research topic: A semi-autonomous control system for Web-based robots
Last modified on 12/05/2010.
For further Information, please email: firstname.lastname@example.org