Projects: Applied AI
1. Design at Scale with Conversational Agents
In our quest to develop novel, scalable design methods, conversational agents offer an unprecedented opportunity. They can elicit and gather the knowledge needed to understand people, their activities and contexts, and the technologies available to support them. Conversational agents can directly support designers by stimulating design thinking and guiding novice designers through the stages of the design lifecycle. They can also harness relevant knowledge from several user groups to enrich contextual understanding and inform design decisions. This last capability can scale up participatory design methods and is a current research focus.
Conversational agents can lower the barrier to participation and serve as a scalable design probe – offering the ease of interaction that comes with conversation and the familiarity of turn-taking exchanges. Designers can therefore reach more users, improve the representativeness of participants in the design process, and reduce biases in their contextual understanding of different user groups and their needs.
Projects: Fundamental AI
1. mXn Human-AI Interaction
The computational power of AI systems vastly exceeds human capabilities in many tasks, yet the impact of AI systems on human behaviour remains largely unexplored. Bridging this gap is essential to harness AI in ways that are beneficial and useful in design contexts, and to advance our understanding of how interactions with AI systems can be designed to benefit more people and serve the larger social good.
To better understand the dynamics of human interactions with AI systems, recent work has explored, both quantitatively and qualitatively, the related landscapes of trust formation and evolution, interpretability and explainability, and reliance on AI systems. Relatively little is understood about how these findings carry over to settings where multiple humans interact with potentially multiple AI systems – what can be thought of as 'Group-AI' interactions.
2. ANIE: Analogies for Intelligible Explanations
Humans increasingly make decisions supported by machine learning algorithms. Socio-technical systems – procedures in which humans and AI are jointly involved in a decision – are ubiquitous, from financial risk assessment and medical diagnosis through to public employment. The initial hope was that such a combination would yield better decisions, but it has proved difficult to facilitate appropriate reliance of users on AI systems: relying on the system where it is accurate (or more accurate than humans), but not where it is inaccurate (or less accurate than humans).
In the real world, users seldom know whether their own decision is inaccurate, and cannot easily determine when they need to depend on an AI system to inform it. Should we rely on AI only when in doubt? How should the system's stated accuracy inform reliance? This tension has been shown to result in users relying on AI systems either too little or too much.
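The notion of appropriate reliance described above can be quantified in a simple way: a human's decision to rely on the AI is appropriate when it matches whether the AI was actually correct. The following is a minimal, hypothetical sketch of such a metric; the function name and formulation are illustrative, not the project's actual measure.

```python
def appropriate_reliance(ai_correct, human_relied):
    """Fraction of decisions where reliance was appropriate:
    the human relied on the AI when it was correct, or
    overrode it when it was incorrect."""
    assert len(ai_correct) == len(human_relied)
    matches = sum(1 for c, r in zip(ai_correct, human_relied) if c == r)
    return matches / len(ai_correct)

# Example: the AI was correct on trials 0, 1 and 3; the human
# relied on it in every trial, so over-relied on trial 2.
score = appropriate_reliance([True, True, False, True],
                             [True, True, True, True])
print(score)  # 0.75
```

A score of 1.0 would mean the user relied on the AI exactly when it was right – the ideal that, as noted above, real users cannot reach because they rarely know when the system is wrong.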
3. ValuableML: Co-Design of Value Metrics for Machine Learning
For decades, the primary way to develop and assess machine learning (ML) models has been accuracy metrics (e.g. precision, recall, F1, AUC). We have largely forgotten that ML models are applied in an organisational or societal context because they provide value to people. This leads to a significant disconnect between the rapid progress of ML research – with correspondingly high expectations among professionals in every field – and the limited adoption of ML in practice. We see the need for new value-based metrics for the development and evaluation of ML models: metrics that cater to the actual needs and desires of users and relevant stakeholders, and that are tailored to the cost structure of specific use cases.
This project aims to introduce principled value metrics and the processes for creating them. We will do so by answering two fundamental questions: what makes a model 'good', and what is the value of a model? We use a co-design methodology that emphasises involving stakeholders in the creation of metrics, so that the metrics represent the collective interest of all involved.
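To make the contrast with accuracy metrics concrete, a value-based metric could weight each confusion-matrix outcome by its real-world payoff. The sketch below is a hypothetical illustration under assumed payoffs – the kind of figures stakeholders would co-design for an actual use case – not the project's own metric.

```python
def model_value(tp, fp, fn, tn, payoffs):
    """Average value per decision, given confusion-matrix counts
    and a stakeholder-defined payoff for each outcome."""
    total = tp + fp + fn + tn
    value = (tp * payoffs["tp"] + fp * payoffs["fp"]
             + fn * payoffs["fn"] + tn * payoffs["tn"])
    return value / total

# Illustrative fraud-screening payoffs: catching fraud earns 100,
# a false alarm costs 5, missed fraud costs 200, a correct pass is 0.
payoffs = {"tp": 100, "fp": -5, "fn": -200, "tn": 0}
print(model_value(tp=8, fp=20, fn=2, tn=970, payoffs=payoffs))  # 0.3
```

Under these assumed payoffs, two models with identical accuracy can deliver very different value – for example, the model above would be worth more than a slightly more accurate one that misses more fraud – which is precisely the gap between accuracy metrics and value metrics that the project targets.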
4. ARCH: Know What Your Machine Doesn't Know
Despite their impressive performance, machine learning systems remain largely unreliable in safety-, trust- and ethically sensitive domains. Recent discussions in several subfields of AI have reached a consensus that machines need additional knowledge, but few have addressed how to diagnose which knowledge is needed.
This project aims to develop human-in-the-loop methods and tools for diagnosing machine unknowns – a critical step towards reliable and trustworthy AI. We consider humans essential to understanding the knowns and unknowns of intelligent machines: people can interpret machine behaviour and specify knowledge requirements. We also see computational algorithms as vital tools that can assist humans in reasoning about knowledge at scale and under uncertainty. Our research is therefore both empirical and theoretical, with primary activities centred on the design, implementation and analysis of human studies, computational algorithms and human-in-the-loop systems.
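One simple way such a human-in-the-loop pipeline could surface candidate machine unknowns is to flag predictions the model is uncertain about and route them to a human reviewer. The sketch below uses softmax entropy with an assumed threshold purely as an illustration; a real diagnosis method would go well beyond this triage step.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_unknowns(batch_probs, threshold=0.5):
    """Indices of predictions uncertain enough to route to a human.
    The 0.5-nat threshold is an illustrative placeholder."""
    return [i for i, probs in enumerate(batch_probs)
            if entropy(probs) > threshold]

batch = [
    [0.98, 0.01, 0.01],  # confident -> keep automated
    [0.40, 0.35, 0.25],  # uncertain -> candidate unknown, ask a human
]
print(flag_unknowns(batch))  # [1]
```

The human reviewing the flagged cases can then interpret the model's behaviour and articulate what knowledge it lacks, while the algorithmic filter keeps the review workload tractable at scale.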
Knowing machine unknowns is essential both for making AI (debugging the machine) and for using AI (deciding when to trust its output). We envision that this project will have substantial scientific and practical impact across all areas where AI and machine learning are applied.