DEVELOPING ADAPTIVE CAPABILITIES FOR AUTONOMOUS AGENTS IN UNCERTAIN ENVIRONMENTS

Project Summary:

Future autonomous agents and robots will help humans achieve many tasks, ranging from repetitive, trivial ones to critical ones carried out in environments that are inaccessible or hostile to humans (e.g., disaster response, search and rescue, military engagements). Such robots are increasingly sophisticated and incorporate powerful autonomous capabilities. They are capable of operating individually or working alongside humans to achieve their goals [A glimpse at our robotic future: 235 start-ups reviewed (World Robotics Service Robots 2013, pp. 191-198)].

For this project, we want to develop novel reasoning algorithms that give a robot situation-awareness capabilities for decision making. To achieve this, a robot, or more generally an autonomous agent, must possess at least three inter-related intrinsic abilities: continuously perceiving and fusing uncertain, inconsistent, or erroneous information; recognising situation changes and revising its beliefs about the environment and about other agents; and dynamically re-planning in response to those new beliefs.
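
As a rough illustration of how these abilities fit together, the Python sketch below runs a minimal perceive-fuse-re-plan loop. The two-situation frame, the noisy sensor, the Bayesian-style fusion step and the planner are hypothetical placeholders chosen only to make the loop concrete; they are not the algorithms this project will develop.

    # Illustrative sketch only: a minimal perceive-fuse-re-plan loop.
    # The situations, sensor model, fusion rule and planner are hypothetical
    # placeholders, not the reasoning algorithms to be developed in this project.
    import random

    STATES = ["clear", "obstructed"]   # hypothetical situations the agent distinguishes

    def perceive():
        """Return a noisy observation as a likelihood over situations (hypothetical sensor)."""
        r = random.random()
        return {"clear": r, "obstructed": 1.0 - r}

    def fuse(belief, observation):
        """Bayesian-style fusion of the current belief with one uncertain observation."""
        fused = {s: belief[s] * observation[s] for s in STATES}
        total = sum(fused.values()) or 1.0
        return {s: v / total for s, v in fused.items()}

    def replan(belief):
        """Pick an action that responds to the most plausible situation."""
        return "proceed" if max(belief, key=belief.get) == "clear" else "re-route"

    belief = {s: 1.0 / len(STATES) for s in STATES}   # start from an uninformed belief
    for step in range(5):
        belief = fuse(belief, perceive())   # perceive, fuse and revise beliefs
        action = replan(belief)             # re-plan against the revised beliefs
        print(f"step {step}: belief={belief}, action={action}")

Each pass through the loop exercises all three abilities in miniature: the agent takes in uncertain evidence, updates what it believes about the situation, and adjusts its plan accordingly.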

Weiru Liu

Email: w.liu@qub.ac.uk

Telephone: +44 (0)28 9097 4896

With:

John McAllister / Weiru Liu: Cross-layer Cyber-Fusion for Adaptive Autonomous Agents in Uncertain Environments

Internet-of-Things (IoT) devices and future autonomous agents, such as robots, combine in large, ad-hoc networks to enable smart environments, whose capabilities rely heavily on the devices' ability to collaboratively sense, communicate and process information. It is critical that this capability is enabled in a secure manner, so that, for instance, malicious nodes cannot infiltrate the network and disrupt its performance.

How does an autonomous agent know whether it can trust a peer with which it is communicating? Each device may infer information authenticating peer devices and their communications at multiple layers of its processing and communications stacks, but these layers may offer competing hypotheses as to the validity of remote devices and their intentions. How can these hypotheses be reasoned about and combined? To do so, an autonomous agent (a) must continuously perceive and fuse uncertain, inconsistent, or erroneous information from multiple sources/layers, both to assess with whom it is communicating and to detect underlying intentions; and (b) must dynamically revise its beliefs about its peers and their communications.
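
One concrete way such competing, multi-layer hypotheses could be represented and combined is with belief functions. The Python sketch below applies Dempster's rule of combination to hypothetical evidence from a physical-layer check and an application-layer behaviour monitor; the frame of discernment, the layer names and the mass values are all assumptions made for illustration, and the project is not committed to this particular formalism.

    # Illustrative sketch: combining multi-layer hypotheses about a peer's
    # validity with Dempster's rule of combination. The frame, layer names and
    # mass values are hypothetical.
    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions keyed by frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb        # mass given to contradictory hypotheses
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {h: w / (1.0 - conflict) for h, w in combined.items()}

    T, M = frozenset({"trusted"}), frozenset({"malicious"})
    TM = T | M                             # ignorance: either hypothesis could hold

    # Hypothetical evidence: the physical layer (e.g., an RF fingerprint check)
    # weakly supports "trusted", while the application layer flags odd behaviour.
    phy_layer = {T: 0.6, TM: 0.4}
    app_layer = {M: 0.5, TM: 0.5}

    fused = combine(phy_layer, app_layer)
    print({tuple(sorted(h)): round(w, 3) for h, w in fused.items()})
    # -> trusted: 0.429, malicious: 0.286, both (ignorance): 0.286

With these example numbers the fused assessment still leaves substantial mass on ignorance, which is precisely the residual uncertainty the agent's decision making then has to handle.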

This project addresses monitoring, fusion and decision making by autonomous agents, with the aim of maximising their robustness against malicious communications and interactions. Specifically, it will develop highly efficient data fusion algorithms, and embedded realisations of them, for combining multi-layer hypotheses regarding the validity or otherwise of communications.
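
To indicate how such a fused, multi-layer assessment might then drive a decision, the sketch below converts a mass function over a peer's status into an accept / reject / keep-monitoring choice. The pignistic transform and the thresholds are hypothetical choices made purely for illustration, not the decision procedure the project will deliver.

    # Illustrative sketch: turning a fused assessment of a peer into a decision.
    # The pignistic transform and the thresholds are hypothetical choices.

    def pignistic(mass):
        """Spread each mass value evenly over the singleton hypotheses it covers."""
        prob = {}
        for hypotheses, w in mass.items():
            for h in hypotheses:
                prob[h] = prob.get(h, 0.0) + w / len(hypotheses)
        return prob

    def decide(mass, accept_at=0.8, reject_at=0.5):
        """Accept, reject, or keep monitoring a peer, using hypothetical thresholds."""
        p_trusted = pignistic(mass).get("trusted", 0.0)
        if p_trusted >= accept_at:
            return "accept peer"
        if p_trusted <= reject_at:
            return "reject peer"
        return "gather more evidence"

    # Example fused mass function over {"trusted", "malicious"}, e.g. the output
    # of a multi-layer combination step.
    fused = {frozenset({"trusted"}): 0.43,
             frozenset({"malicious"}): 0.29,
             frozenset({"trusted", "malicious"}): 0.28}
    print(decide(fused))   # p(trusted) = 0.43 + 0.14 = 0.57 -> "gather more evidence"

A decision rule of this kind makes the trade-off explicit: the agent accepts a peer only when the fused evidence is strongly in its favour, rejects it when the evidence is strongly against, and otherwise continues monitoring and gathering evidence across the layers.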