Projects
Optimized AI control for autonomous search-and-rescue robots
Responding to disasters via systematic search-and-rescue planning is a major challenge that involves life-threatening risks for human rescuers. In recent years, new technologies have emerged to support robots that assist human rescuers as part of a team. Search-and-rescue robots face uncertainties such as smoke or darkness that degrade the performance of their on-board visual sensors, while the unpredictable behaviour of other humans (rescuers and victims) can also influence the robot’s decisions. In search-and-rescue missions, the robots’ actions should therefore be optimized so that more victims are saved in less time. In his PhD project ‘Optimized AI control for autonomous search-and-rescue robots’, Mirko Baglioni is working on robust and stochastic model predictive control approaches for autonomous control of search-and-rescue robots in uncertain situations.
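As a minimal illustration of the stochastic model predictive control idea, the sketch below repeatedly plans over a short horizon for a simple 2-D robot model, averages the cost of each candidate plan over sampled disturbance scenarios, and applies only the first input of the best plan before re-planning. The dynamics, cost function, and all parameters are invented for illustration; they are not the project’s actual formulation.

```python
# Sampling-based stochastic MPC sketch for a search-and-rescue robot
# modelled as a 2-D point mass. All names, dynamics, and parameters are
# illustrative assumptions, not the project's actual formulation.
import numpy as np

rng = np.random.default_rng(0)

HORIZON = 5          # prediction horizon (steps)
N_SCENARIOS = 50     # sampled disturbance scenarios per candidate plan
DT = 0.5             # time step [s]
TARGET = np.array([8.0, 6.0])   # assumed victim location estimate

def step(state, control, disturbance):
    """Single-integrator dynamics with additive disturbance (e.g. slip)."""
    return state + DT * control + disturbance

def expected_cost(state, controls):
    """Mean terminal distance to target plus control effort, over scenarios."""
    costs = []
    for _ in range(N_SCENARIOS):
        x = state.copy()
        for u in controls:
            w = rng.normal(0.0, 0.2, size=2)   # sensing/actuation uncertainty
            x = step(x, u, w)
        costs.append(np.linalg.norm(x - TARGET) + 0.1 * np.sum(controls**2))
    return float(np.mean(costs))

def mpc_action(state, n_candidates=200, u_max=1.0):
    """Pick the first control of the best random candidate control sequence."""
    best_u0, best_cost = None, np.inf
    for _ in range(n_candidates):
        controls = rng.uniform(-u_max, u_max, size=(HORIZON, 2))
        c = expected_cost(state, controls)
        if c < best_cost:
            best_cost, best_u0 = c, controls[0]
    return best_u0

# Receding-horizon loop: re-plan at every step, apply only the first input.
state = np.zeros(2)
for t in range(20):
    u = mpc_action(state)
    state = step(state, u, rng.normal(0.0, 0.2, size=2))
print("final position:", state)
```

Averaging over sampled scenarios is one simple way to account for uncertainty; robust variants would instead optimize against worst-case disturbances.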
Integrated adaptive and AI-based control for socially assistive robots
In this PhD project, Shanza Zafar is developing control approaches that enable efficient and engaging cognitive human-robot interaction, particularly for healthcare applications involving cognitive assistance. The resulting robots should act like human experts, i.e. they should adapt their behaviour to different people and to variations in their mood, progress, and goals. Shanza Zafar is working on integrating Markov Decision Processes and AI-based decision-making approaches to develop the autonomous decision-making module of such socially assistive robots.
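The sketch below illustrates the Markov Decision Process ingredient: value iteration on a toy model in which the robot chooses an assistance action based on an assumed user state. All states, actions, transition probabilities, and rewards are invented for illustration and will differ from the project’s actual models.

```python
# Minimal value-iteration sketch of an MDP for a socially assistive robot.
# States, actions, probabilities, and rewards are illustrative assumptions.
import numpy as np

states = ["disengaged", "engaged", "progressing"]   # assumed user states
actions = ["encourage", "give_hint", "demonstrate"]

# P[a][s, s'] : probability of moving from s to s' under action a (assumed).
P = {
    "encourage":   np.array([[0.5, 0.4, 0.1],
                             [0.1, 0.6, 0.3],
                             [0.0, 0.2, 0.8]]),
    "give_hint":   np.array([[0.6, 0.3, 0.1],
                             [0.1, 0.4, 0.5],
                             [0.1, 0.2, 0.7]]),
    "demonstrate": np.array([[0.3, 0.6, 0.1],
                             [0.2, 0.5, 0.3],
                             [0.1, 0.3, 0.6]]),
}
# Reward for reaching each successor state: progress is worth the most.
R = np.array([0.0, 0.5, 1.0])
GAMMA = 0.9   # discount factor

def value_iteration(tol=1e-6):
    V = np.zeros(len(states))
    while True:
        Q = np.array([P[a] @ (R + GAMMA * V) for a in actions])  # |A| x |S|
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, [actions[i] for i in Q.argmax(axis=0)]
        V = V_new

V, policy = value_iteration()
for s, a, v in zip(states, policy, V):
    print(f"in state '{s}': choose '{a}' (value {v:.2f})")
```

In an adaptive setting, the transition probabilities and rewards would be personalized per user rather than fixed as they are here.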
Mutual trust in human-AI teams
Mutual trust, closed-loop communication and shared mental models are considered essential coordinating mechanisms for effective teamwork in human teams in an organizational context. In teams that combine humans and artificial intelligence (human-AI teams), however, implementing mechanisms that suit the team dynamics remains a challenge. Achieving mutual trust in human-AI teams (i.e. teammates trusting each other) means developing artificial agents that can reason about and promote appropriate mutual trust. These agents should not only be trustworthy themselves, but should also know when to trust a human teammate (to complete a certain task, for example). In this project, we study trust as a tool that enables artificial agents to predict task outcomes in the context of human-AI teamwork. In particular, we want to build mental models of humans that express their trustworthiness in this context, taking into account factors such as the human teammates’ tasks and environmental characteristics.
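As one possible concretization of trust as an outcome predictor, the sketch below keeps a Beta-Bernoulli estimate of each human teammate’s success rate per task type, which an agent could consult when deciding whether to delegate a task. This is a common trust-modelling pattern used here as an assumption; it is not the project’s actual model, and all names are hypothetical.

```python
# Sketch of a trust model that predicts task outcomes from observed
# performance, via a Beta-Bernoulli update per (teammate, task type).
# An illustrative assumption, not this project's actual model.
from collections import defaultdict

class TrustModel:
    def __init__(self):
        # Beta(1, 1) prior: no evidence yet, expected success rate 0.5.
        self.evidence = defaultdict(lambda: [1.0, 1.0])

    def update(self, teammate, task_type, success):
        a, b = self.evidence[(teammate, task_type)]
        self.evidence[(teammate, task_type)] = [a + success, b + (not success)]

    def trust(self, teammate, task_type):
        """Expected probability that the teammate completes this task type."""
        a, b = self.evidence[(teammate, task_type)]
        return a / (a + b)

model = TrustModel()
for outcome in [True, True, False, True]:       # hypothetical observations
    model.update("human_1", "triage", outcome)
print(model.trust("human_1", "triage"))         # ~0.67; usable for delegation
```

Task and environmental characteristics, as mentioned above, could enter such a model by conditioning the estimate on richer context than the task type alone.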
Explainable and controllable AI in human-AI teams
AI systems are becoming increasingly autonomous and intelligent, making them suitable partners for humans in so-called human-AI teams. This increase in autonomy and intelligence does not come without a cost, however: these complex systems are often regarded as black boxes that execute difficult tasks and make important decisions without offering clear insight into their underlying mechanisms. In this project, we investigate how to make such systems understandable to their human teammates, for example by transforming decision traces into clear explanations. We also seek to ensure a degree of human control over these autonomous systems, addressing issues such as accountability. The project will explore how to make these systems adapt to both the user and the context, with the goal of remaining understandable and controllable.
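As a minimal illustration of turning a decision trace into an explanation, the sketch below fills a contrastive natural-language template from a single trace entry. The trace fields, the template wording, and the example values are all assumptions made for illustration.

```python
# Sketch of converting an agent's decision trace into a human-readable,
# contrastive explanation. Trace fields and templates are assumptions.
from dataclasses import dataclass

@dataclass
class DecisionStep:
    action: str          # action the agent chose
    goal: str            # goal the action serves
    belief: str          # the observation/belief that triggered it
    alternatives: list   # actions considered but rejected

def explain(step: DecisionStep) -> str:
    """Fill a contrastive explanation template from one trace entry."""
    rejected = ", ".join(step.alternatives) or "no other options"
    return (f"I chose to {step.action} because I believed {step.belief} "
            f"and my current goal is to {step.goal}. "
            f"I considered {rejected}, but they scored lower for this goal.")

trace = DecisionStep(
    action="reroute through corridor B",
    goal="reach the victim quickly",
    belief="corridor A is blocked by debris",
    alternatives=["wait for clearance", "continue through corridor A"],
)
print(explain(trace))
```

A user- and context-adaptive version would vary the template’s level of detail per teammate, in line with the project’s goal of keeping systems both understandable and controllable.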