Projects
Fundamental AI
Robust machine learning and uncertainty quantification
Principled estimation of the uncertainty in a machine learning model's predictions is essential for making safe and robust decisions. A Bayesian approach allows one to represent uncertainty as probability; as a result, tools from probability theory can be employed to formally propagate uncertainty through machine learning models. This project focuses on Bayesian deep neural networks, i.e., neural networks with a prior distribution placed over their weights and biases, and develops methods to compute the probability that these models satisfy a given specification expressed in an appropriate logic. Another aim of this project is to develop scalable approximate Bayesian inference methods for training well-calibrated and robust neural networks, which yield reliable confidence intervals on the predictions of a deep learning model.
The main novelty of this project is a shift towards Bayesian (deep) models to enable probabilistic reasoning about the correctness of AI-based control systems, while also accounting for the uncertainty inherent in machine learning algorithms.
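As a concrete, if simplified, illustration of the specification-probability computation, the sketch below estimates by Monte Carlo the probability that a network drawn from the weight posterior satisfies a local-robustness specification. The toy architecture, the factorized-Gaussian posterior, and the sampling-based specification check are all illustrative assumptions; in the project, formal verification techniques would replace the empirical check.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical factorized-Gaussian posterior over the weights and biases
    # of a toy one-hidden-layer network (2 inputs, 16 hidden units, 1 output):
    # each entry maps a parameter name to (mean, stddev).
    posterior = {
        "W1": (rng.normal(size=(2, 16)), 0.1),
        "b1": (np.zeros(16), 0.1),
        "W2": (rng.normal(size=(16, 1)), 0.1),
        "b2": (np.zeros(1), 0.1),
    }

    def sample_weights():
        """Draw one deterministic network from the weight posterior."""
        return {name: mu + sigma * rng.normal(size=mu.shape)
                for name, (mu, sigma) in posterior.items()}

    def forward(w, x):
        h = np.tanh(x @ w["W1"] + w["b1"])
        return h @ w["W2"] + w["b2"]

    def satisfies_spec(w, x0, radius=0.1, eps=0.5, n_inputs=256):
        """Sampling-based check of a local-robustness specification: every
        sampled input in the ball around x0 maps within eps of f(x0).
        A formal verifier would replace this empirical check."""
        xs = x0 + radius * rng.uniform(-1.0, 1.0, size=(n_inputs, x0.size))
        y0 = forward(w, x0[None, :])
        return bool(np.all(np.abs(forward(w, xs) - y0) <= eps))

    # Monte Carlo estimate, over the weight posterior, of the probability
    # that a sampled network satisfies the specification.
    x0 = np.array([0.3, -0.7])
    n_nets = 500
    hits = sum(satisfies_spec(sample_weights(), x0) for _ in range(n_nets))
    print("estimated P(spec holds) =", hits / n_nets)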
Hybrid AI for human behavior prediction
To predict the behavior of the humans around them, automated vehicles (AVs) need coherent and calibrated models of these humans. Such models can potentially be learned from scratch using the rich data collected on the roads. However, powerful as machine-learned models of human behavior are, they are prone to a number of well-known issues: susceptibility to distribution shifts, lack of explainability, and the need for large training datasets. At the same time, psychology and neuroscience have studied human behavior for many decades and offer a variety of mathematical models with the potential to improve real-time human behavior prediction by AVs.
This project aims to develop a hybrid behavior prediction framework that integrates cognitively plausible models of human road user behavior with data-driven machine-learned models in a way that brings out the best of both worlds. We address the challenges of combining heterogeneous representations of human intentions, accounting for uncertainty due to individual differences between humans, and integrating hybrid behavioral models into AV motion planning frameworks. The results of this project will contribute to enabling AVs to interact with humans smoothly and responsibly by predicting their behavior in a reliable and interpretable way.
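As a minimal illustration of one possible fusion scheme, the sketch below combines a cognitive model and a learned model of a pedestrian's crossing decision by Bayesian model averaging, with mixture weights updated from each model's likelihood on observed decisions. Both component models are hypothetical logistic stand-ins, not the project's actual models.

    import numpy as np

    def cognitive_model(gap_time_s):
        """Hypothetical cognitive model: probability that a pedestrian
        accepts the time gap, modeled as a logistic accept-gap rule."""
        return 1.0 / (1.0 + np.exp(-(gap_time_s - 4.0)))

    def learned_model(gap_time_s):
        """Hypothetical stand-in for a machine-learned predictor (e.g., a
        trained neural network); here just a different logistic curve."""
        return 1.0 / (1.0 + np.exp(-1.5 * (gap_time_s - 3.5)))

    def hybrid_predict(gap_time_s, w_cog):
        """Mixture prediction; w_cog is the cognitive model's current weight."""
        return (w_cog * cognitive_model(gap_time_s)
                + (1.0 - w_cog) * learned_model(gap_time_s))

    def update_weight(w_cog, gap_time_s, crossed):
        """Bayesian model-averaging update after one observed decision:
        each model's weight is rescaled by the likelihood it assigned."""
        p_c, p_l = cognitive_model(gap_time_s), learned_model(gap_time_s)
        lik_c = p_c if crossed else 1.0 - p_c
        lik_l = p_l if crossed else 1.0 - p_l
        w = w_cog * lik_c
        return w / (w + (1.0 - w_cog) * lik_l)

    # Observe a few (time gap in seconds, crossed?) pairs, then predict.
    w = 0.5
    for gap, crossed in [(5.0, True), (2.0, False), (4.5, True)]:
        w = update_weight(w, gap, crossed)
    print("cognitive-model weight:", round(w, 3))
    print("P(cross | 3.8 s gap):", round(hybrid_predict(3.8, w), 3))

The same update also quantifies individual differences: tracking a separate weight per person lets the mixture lean on whichever model explains that person's decisions better.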
Applied AI
Emergent behavior and responsibility in mixed-traffic interactions
Current research on mixed-traffic interactions focuses mainly on the interaction between one human driver and one automated vehicle. However, it remains unclear whether the same computational approaches scale to more complex interactions involving multiple heterogeneous agents. This is further complicated by potential emergent behaviors and the diffusion of responsibility in multi-agent interactions involving artificial agents. This project develops a framework for quantifying responsibility in traffic interactions involving multiple human-driven and automated vehicles. Combining traditional agent-based simulations with cognitively plausible models of human behavior, this research investigates the attribution and diffusion of responsibility from a complex-systems perspective, focusing on the roles, duties, and expectations of multiple agents.
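The toy study below illustrates one candidate formalization, namely counterfactual responsibility in an agent-based platoon simulation: an agent's responsibility for a collision is the drop in collision probability had that agent kept a cautious headway. The car-following policy, parameters, and scenario are illustrative assumptions rather than the project's models; notably, when several agents are each sufficient to cause a crash, the individual counterfactual measure shrinks, which is exactly the diffusion-of-responsibility effect of interest.

    import numpy as np

    rng = np.random.default_rng(1)
    N_AGENTS, T, DT = 4, 80, 0.1

    def simulate(aggressive_flags, crash_gap=1.5):
        """Roll out a single-lane platoon in which the lead vehicle brakes
        at a random time; return True if any inter-vehicle gap falls below
        crash_gap (vehicles are points, so crash_gap stands in for length)."""
        desired = np.where(aggressive_flags, 2.0, 8.0)  # headway each agent keeps
        pos = np.zeros(N_AGENTS)
        for i in range(1, N_AGENTS):                    # start at desired spacing
            pos[i] = pos[i - 1] - desired[i]
        vel = np.full(N_AGENTS, 10.0)
        t_brake = rng.integers(10, T - 20)
        a_brake = -rng.uniform(2.0, 5.0)
        for t in range(T):
            acc = np.empty(N_AGENTS)
            acc[0] = a_brake if t_brake <= t < t_brake + 15 else rng.normal(0.0, 0.5)
            for i in range(1, N_AGENTS):
                gap = pos[i - 1] - pos[i]
                acc[i] = (0.8 * (gap - desired[i])          # close toward headway
                          + 1.2 * (vel[i - 1] - vel[i])     # match leader's speed
                          + rng.normal(0.0, 0.5))           # behavioral noise
            vel = np.maximum(vel + acc * DT, 0.0)
            pos = pos + vel * DT
            if np.any(pos[:-1] - pos[1:] < crash_gap):
                return True
        return False

    def p_collision(flags, n_runs=500):
        return float(np.mean([simulate(np.array(flags)) for _ in range(n_runs)]))

    # Counterfactual responsibility of agent i: the drop in collision
    # probability had agent i kept a cautious headway instead.
    base = [False, True, True, False]        # agents 1 and 2 tailgate
    p_base = p_collision(base)
    print("baseline collision probability:", p_base)
    print("all-cautious collision probability:", p_collision([False] * N_AGENTS))
    for i in range(1, N_AGENTS):
        counterfactual = base.copy()
        counterfactual[i] = False
        print(f"agent {i}: responsibility ~ {p_base - p_collision(counterfactual):.3f}")

When agents 1 and 2 are each individually sufficient to cause a crash, their marginal responsibilities are both small even though changing both agents removes most of the risk: attribution diffuses across the group, which is the phenomenon the complex-systems analysis targets.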
Safe multi-agent navigation for automated vehicles in mixed traffic
Ensuring that automated vehicles can drive and interact safely in urban mixed traffic, i.e., traffic in which some vehicles are (fully) automated and others are driven by humans, is an important open challenge. This project takes a step towards addressing this challenge by developing vehicle control systems that are not only safe, accounting for the uncertainties in both the ego vehicle and its environment, but also meaningful and trustworthy for humans. To meet this goal, we develop a probabilistic, data-driven framework in which agents can interact safely in a multi-agent system while achieving an expected level of performance. This framework relies on the theory of Markov games, interval Markov processes, and reinforcement learning. A key objective is that the actions synthesized by our algorithms affect humans in a desired way: we explore imitation learning with Bayesian deep learning models to learn a probabilistic model of the human, which can then be optimized against during planning. The Bayesian models will be implemented in the framework, which in turn will serve as the basis for the control systems described above.
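As an illustration of the interval-Markov-process ingredient, the sketch below runs robust (pessimistic) value iteration on a tiny interval Markov decision process: transition probabilities are only known to lie within [lower, upper] bounds, and each action is evaluated against the worst-case distribution consistent with those bounds. The three-state model, rewards, and intervals are invented for illustration and are not the project's actual system.

    import numpy as np

    N_STATES, N_ACTIONS, GAMMA = 3, 2, 0.9

    # Interval bounds P_low[s, a, s'] <= P(s' | s, a) <= P_up[s, a, s'];
    # for each (s, a), sum(P_low) <= 1 <= sum(P_up), so the interval is valid.
    P_low = np.array([
        [[0.6, 0.1, 0.0], [0.1, 0.5, 0.1]],
        [[0.2, 0.5, 0.0], [0.0, 0.3, 0.3]],
        [[0.3, 0.2, 0.2], [0.1, 0.1, 0.5]],
    ])
    P_up = np.array([
        [[0.8, 0.3, 0.2], [0.3, 0.7, 0.3]],
        [[0.4, 0.8, 0.2], [0.2, 0.6, 0.6]],
        [[0.5, 0.4, 0.5], [0.4, 0.3, 0.8]],
    ])
    R = np.array([0.0, 1.0, -1.0])           # reward collected in each state

    def worst_case_dist(low, up, values):
        """Adversarial distribution within [low, up] minimizing E[values]:
        start from the lower bounds, then pour the remaining mass onto
        successors in order of increasing value, up to each upper bound."""
        p = low.copy()
        remaining = 1.0 - p.sum()
        for s in np.argsort(values):          # lowest-value successors first
            extra = min(up[s] - p[s], remaining)
            p[s] += extra
            remaining -= extra
        return p

    # Robust value iteration: maximize over actions, minimize over the
    # transition distributions allowed by the intervals.
    V = np.zeros(N_STATES)
    for _ in range(500):
        Q = np.empty((N_STATES, N_ACTIONS))
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                p = worst_case_dist(P_low[s, a], P_up[s, a], V)
                Q[s, a] = R[s] + GAMMA * (p @ V)
        V_new = Q.max(axis=1)
        delta = np.max(np.abs(V_new - V))
        V = V_new
        if delta < 1e-9:
            break
    print("robust values:", np.round(V, 3))
    print("robust policy:", Q.argmax(axis=1))

The resulting policy carries a worst-case guarantee over every transition model inside the intervals, which is the kind of assurance the safe-navigation framework aims to provide despite uncertainty about human drivers.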