Purpose
We aim to discuss the current state, limitations, and future perspectives on the foundations of uncertainty and AI.
This workshop will gather voices from artificial intelligence, decision making, and engineering to discuss handling uncertainty in a principled but practical manner. Our goal is to promote an exchange of ideas and to share perspectives from the EU-funded Epistemic AI project. The workshop features three prominent invited speakers (a tutorial and two invited talks), roundtable discussions, and a poster session.
Audience
Researchers in academia and industry from various fields, for example: Artificial Intelligence, Machine Learning, Computer Vision, Autonomous Driving, Mechanical Engineering.
The primary aim of this workshop is to provide a platform for academic exchange between the various fields with an interest in epistemic uncertainty in AI. We moreover welcome interested guests from industry, policymaking, and the general public to join the workshop and take part in this conversation.
Invited Speakers
Assistant Prof. Elena Mocanu
Bio:
Elena Mocanu is an assistant professor at the University of Twente. She received her PhD in machine learning from the Eindhoven University of Technology in 2017. She visited the Technical University of Denmark (2015), the University of Texas at Austin (2016), and the University of Alberta (2022), working on machine learning and decision-making by means of sparse neural networks. As a mathematician passionate about neural networks, her current research focuses on understanding sparse neural networks and how their learning capabilities can be improved.
Talk title:
Sparse training of neural networks
Abstract:
A fundamental task for artificial intelligence is learning. Alongside increasingly strong results, however, the resources required to train and deploy the most advanced AI models have become prohibitive. I will start by presenting an emerging state-of-the-art solution for reducing these computational costs: sparse training. We will then explore the performance of sparse neural network training across different machine learning paradigms, including supervised, unsupervised, and reinforcement learning. Very recent progress in the field can be used to push the generalization performance of sparsely trained models beyond that of their densely trained counterparts, while at the same time considerably reducing computational and memory requirements in both training and inference. Towards the end of the talk, I will argue that sparse training matters and opens the path towards more resource-aware and environmentally friendly AI models, able to quantify uncertainty even in extremely noisy environments.
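For readers new to the topic, here is a minimal sketch of one well-known sparse-training scheme, prune-and-regrow in the spirit of Sparse Evolutionary Training. The layer sizes, hyperparameters, and toy data below are illustrative assumptions, not material from the talk:

# Minimal prune-and-regrow sparse training sketch (illustrative, NumPy only).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, density = 20, 5, 0.2           # keep roughly 20% of the connections
mask = rng.random((n_in, n_out)) < density  # boolean mask of active weights
W = rng.normal(0.0, 0.1, (n_in, n_out)) * mask

def forward(X):
    return X @ W                            # only masked (active) weights contribute

def sgd_step(X, y, lr=0.01):
    global W
    grad = X.T @ (forward(X) - y) / len(X)  # gradient of mean squared error
    W -= lr * grad * mask                   # gradients flow only through active weights

def prune_and_regrow(zeta=0.3):
    """Drop the zeta fraction of smallest-magnitude active weights, regrow at random."""
    global W, mask
    active = np.argwhere(mask)
    k = int(zeta * len(active))
    # prune: remove the k active weights with the smallest magnitude
    mags = np.abs(W[mask])                  # same row-major order as `active`
    drop = active[np.argsort(mags)[:k]]
    mask[drop[:, 0], drop[:, 1]] = False
    W[drop[:, 0], drop[:, 1]] = 0.0
    # regrow: activate k currently inactive connections at random positions
    inactive = np.argwhere(~mask)
    grow = inactive[rng.choice(len(inactive), size=k, replace=False)]
    mask[grow[:, 0], grow[:, 1]] = True
    W[grow[:, 0], grow[:, 1]] = rng.normal(0.0, 0.1, k)

# toy regression data
X = rng.normal(size=(100, n_in))
y = X @ rng.normal(size=(n_in, n_out)) * 0.5
for epoch in range(50):
    sgd_step(X, y)
    prune_and_regrow()
print(f"active weights: {mask.sum()}/{mask.size}")

Each step updates only the active weights, and the periodic prune-and-regrow redistributes a fixed connection budget; this is what keeps memory and compute proportional to the density rather than to the full layer size.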
Prof. Fabio Cuzzolin
Bio:
Fabio Cuzzolin is the founder and Director of Brookes’ Visual AI Lab (VAIL, projected to comprise 35+ people in 2022), conducting research in artificial intelligence, uncertainty theory, computer vision, machine learning, surgical robotics, and autonomous driving. He has served four times on the Board of the Belief Functions and Applications Society, and was Chair or Steering Committee member for the BELIEF 2014 and BELIEF 2018 international conferences on belief functions. He has given tutorials and invited talks at UAI, IJCAI, Harvard, Oxford, Cambridge, and Seoul National University, and was an invited speaker at BFF4, BFF5, and CSA 2016. Cuzzolin is currently the Coordinator of the H2020 FET Open project Epistemic AI (E-pi), Scientific Officer for the H2020 project 779813-SARAS (Smart Autonomous Robot Assistant Surgeon), and chief advisor and Steering Committee member of the Oxford Brookes Institute for Ethical AI. He was an Executive Committee member for the joint Huawei - Simon Fraser University research lab in visual computing based in Vancouver, and collaborates with several companies, including Disney Research, Ocado, Oxbotica, Cortexica, and Createc.
Talk title:
Tutorial on second-order uncertainty
Abstract:
Probability theory is far from being the only, or the most general, mathematical theory of uncertainty. A number of arguments point to its inability to describe second-order or epistemic uncertainty. In response, a wide array of theories of uncertainty have been proposed, many of them (but not all) generalisations of classical probability. As we show here, such frameworks can be organised into clusters sharing a common rationale, exhibit complex links, and are characterised by different levels of generality. Our goal is a critical appraisal of the current landscape in uncertainty theory.
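As one concrete instance of such a framework, belief functions bound the probability of an event from above and below. The following is a brief, compilable rendering of the standard definitions (the frame $\Theta$ and mass function $m$ are generic placeholders):

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Given a mass function $m : 2^{\Theta} \to [0,1]$ with
$\sum_{B \subseteq \Theta} m(B) = 1$, every event $A \subseteq \Theta$
receives lower and upper bounds
\[
  \mathrm{Bel}(A) \;=\; \sum_{B \subseteq A} m(B)
  \;\le\; P(A) \;\le\;
  \mathrm{Pl}(A) \;=\; \sum_{B \cap A \neq \emptyset} m(B).
\]
The width $\mathrm{Pl}(A) - \mathrm{Bel}(A)$ quantifies second-order
(epistemic) uncertainty: it vanishes for every $A$ exactly when $m$ assigns
mass only to singletons, recovering a classical probability distribution.
\end{document}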
Assoc. Prof. Cassio de Campos
Bio:
Cassio de Campos is an associate professor and chair of the Uncertainty in Artificial Intelligence group at TU Eindhoven, and scientific director of the Engineering Doctorate program in Data Science. He obtained his degrees (Habil., PhD, MSc, BSc) from the University of Sao Paulo. He is a Senior Member of the ACM (2019), an elected member of the executive board of the Society for Imprecise Probability (2011-2021), and a member of the Council of the Association for Uncertainty in AI (2021-). His habilitation and doctoral theses addressed uncertainty in artificial intelligence, in particular robust and interpretable machine learning models. He works on the foundations of artificial intelligence and statistical machine learning, including probabilistic graphical models, imprecise probability, and computational complexity, having published 150+ scientific outputs in those areas. He has served research foundations in Canada, Belgium, the UK, the USA, the Netherlands, Brazil, Austria, and France, has been a senior committee member or area chair at major AI/ML conferences for more than a decade, and serves as an area editor of IJAR and senior associate editor of TOPML.
uai.win.tue.nl/cassio-de-campos/
Talk title:
Credal Models for Uncertainty Treatment
Abstract:
There is a current trend of reevaluating artificial intelligence (AI), its advancements, and their implications for society. Uncertainty treatment plays a major role in this discussion. This talk will hopefully convince you that we can make AI more reliable and trustworthy through a sound treatment of uncertainty. Uncertainty is often modelled by probabilities, but it has been argued that some broadening of probability theory is required for a more convincing treatment, as one may not always be able to provide a reliable probability for every situation. Credal models generalize probability theory to allow for partial probability specifications, and are arguably a good direction to follow when information is scarce, vague, and/or conflicting. We will present and discuss credal approaches, from simple examples to sophisticated credal machine learning models, and even their reach into adversarial and causal inference. The talk argues that we must continue to push AI forward by investing in Cautious AI.
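A minimal sketch of the credal idea follows; the outcomes, probability bounds, and utilities are illustrative assumptions, not an example from the talk:

# A credal set replaces a single distribution with a set of candidate
# distributions; inference reports lower and upper expectations over the set.
import numpy as np

# Partial specification: we only know P(rain) lies in [0.3, 0.6].
# For a linear utility it suffices to check the extreme points of the set.
credal_set = [np.array([0.3, 0.7]),   # P(rain) = 0.3, P(sun) = 0.7
              np.array([0.6, 0.4])]   # P(rain) = 0.6, P(sun) = 0.4

utility = np.array([-10.0, 5.0])      # payoff of one action under rain / sun

expectations = [p @ utility for p in credal_set]
lower, upper = min(expectations), max(expectations)
print(f"expected utility lies in [{lower:.1f}, {upper:.1f}]")

# A cautious decision rule acts only if even the lower expectation is
# acceptable; with scarce or conflicting information the interval stays
# wide, making the model's ignorance explicit rather than hiding it in a
# single number.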
Registration
Registration is required, but it is free and open to all. Use the "Register here" button to register.
Poster session
We encourage all participants, and especially Ph.D. students, to present a poster on a related topic.
Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 964505 (E-pi).
Contact
Dr. N. Yorke-Smith:
n.yorke-smith@tudelft.nl
Building 28, room 4E.220
Van Mourik Broekmanweg 6
2628 XE Delft
The Netherlands
Moritz A. Zanger:
m.a.zanger@tudelft.nl
Building 28, room 4E.240
Van Mourik Broekmanweg 6
2628 XE Delft
The Netherlands
Agenda
09:15 Opening
09:30 Tutorial on second-order uncertainty
10:30 Coffee break
11:00 Welcome
11:15 Invited talk 1
12:00 Lunch
13:00 Small group discussions
13:45 Plenary reporting
14:15 Coffee break
14:45 Invited talk 2
15:30 Small group discussions
16:15 Plenary reporting
16:45 Closing
17:00 Posters + Drinks
Speaker contacts
Elena Mocanu
e.mocanu@utwente.nl
University of Twente
-
Fabio Cuzzolin
fabio.cuzzolin@brookes.ac.uk
Oxford Brookes University
-
Cassio de Campos
c.decampos@tue.nl
Eindhoven University of Technology