More humanity thanks to AI
TU Delft is an innovative leader in the field of human- and society-centered AI. This is essential in order to protect European values, such as equality and justice, in a world of rapid AI development. Prof. Jeroen van den Hoven and Prof. Catholijn Jonker explain the contribution that their fundamental research is making.

In both artificial intelligence and the human brain, data goes in and data comes out without it being exactly clear what happens internally. Humans have consciousness, morality and an interface (language) with which to reflect on the output. AI lacks all of that. The need to equip AI with that ability is demonstrated by a series of examples that reflect our values. Humans need to be able to protest against results from a smart system. We want to keep control over, and provide accountability for, armed drones controlled by AI. We also aim to prevent a polarised society from developing as a side effect of profit-driven AI technology in social networks. This is why, in 2017, Van den Hoven and Jonker established the Delft Design for Values Institute. TU Delft is now the innovative leader in this field.

Value by design

TU Delft is innovating AI based on a system-wide approach, in which engineering and ethics are combined in the design. This is referred to as Value X by design, where X stands for responsibility, safety, privacy or sustainability. Van den Hoven, professor of ethics and technology: “System-wide also involves thinking about the conditions in which AI is allowed to operate; from legislation and regulation to integration in society and human-AI collaboration.”

Enhanced ability

Catholijn Jonker (professor of interactive intelligence) prefers to talk of symbiosis between humans and AI. She is researching this subject thanks to funding from the NWO Hybrid Intelligence Gravitation programme and in collaboration with Vrije Universiteit (VU Amsterdam) and other partners.
Her field of work is wide-ranging: from human-machine interaction in robotics to support for decision-making based on motives and preferences (and not merely the pursuit of profit or polarised debate). What these subjects have in common is dialogue between humans and machines. “I look for methods and technologies that enhance our ability to share ideas and learn from that dialogue.” Jonker also deliberately conducts research in that same field in Leiden, at the Institute for Advanced Computer Science. “That makes it easier to also collaborate with disciplines such as psychology and epistemology. That’s a precondition for hybrid intelligence.”

Negotiations

Jonker's ultimate aim is to develop smart systems that encourage people, organisations and even AI to deliberate and keep reflecting. “If AI provides information about motives and standpoints, whether they are yours or other people’s, the result is deliberation. This creates room for the other person and, with it, space for new ideas and solutions that benefit everyone. That will prove useful in financial negotiations as well as in debates on social media and in politics.”

One potential application that Jonker would like to see involves providing humans with insight and an overview in online debates through human-AI collaboration, for example in the lead-up to elections. “Who proved influential, why did that one argument go viral, and which views went unheard? If you can use that to run a simulation, you can find out how to provide space for nuanced views, so that everyone who wants to be heard can be.”

Thorn

The specific focus on reflection about the moral aspects of AI that TU Delft is calling for may appear logical. “But it's a tricky subject,” emphasises Van den Hoven. Non-functional design requirements, such as moral values, need to be made specific and validatable. Even the collaboration needed between engineering, the social sciences and the humanities calls for perseverance.
Van den Hoven recently joined forces with colleagues to write an academic article about AI support for human rights. “We describe how you could safeguard human dignity within smart systems.” The article serves as input for the rollout of European legislation. TU Delft is helping the EU to set the bar high. “It’s a thorn in the side of countries for which ethics are not top of mind. But they need to take it on board; otherwise, they will lose the European market.”

Serious investments

The Netherlands will need to make serious investments at various levels and in a wider European context. According to Van den Hoven, the reward will come in the form of innovations that are needed worldwide. “Quite simply because people have the same needs everywhere: humanity, safety, freedom, sustainability.”
The key to AI innovation: human-AI interaction
TU Delft does a lot of fundamental research in the field of AI. Key themes include improving smart systems for information retrieval and decision-making in complex environments. Researchers Claudia Hauff and Frans Oliehoek talk about the challenges in their field.

Dr Claudia Hauff, associate professor, leads the Lambda-Lab in Delft. One of its key areas of focus is information retrieval: research into strategies for optimising the collection of information. For example, Hauff is attempting to improve search results by means of interaction between the user and a smart search system. Hauff: “Think Siri or Google Voice, but better. Systems that learn from consecutive searches what the user really needs and do better the next time.”

These smart systems are based on machine learning: algorithms that can learn from training data (see box). Hauff mainly uses one type in particular: deep learning. But finding enough data can be a challenge, Hauff explains. “Data from online search engines can teach us how people search, and we could use that to train smart systems. But it’s not public data.” For this reason, Hauff works with a different source of data, closer to home. TU Delft offers more than 80 accessible online courses with over two million participants, and Hauff makes interventions in them. Hauff: “For example, we test how behaviour changes when we put questions to participants after a lesson, or when we show them other participants’ results. That provides input for our deep-learning systems.”

Traffic simulations

Interactions are also key to associate professor Dr Frans Oliehoek’s research, but in this case between AI and humans and between smart systems themselves. Oliehoek leads the INFLUENCE project, which explores interactive learning and decision-making in situations with uncertainties.
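As an aside to Hauff's description of machine learning, the core idea of an algorithm that learns from training data can be illustrated with a minimal sketch. The toy features, labels and learning rate below are invented purely for illustration and have no connection to the Lambda-Lab's actual deep-learning systems:

```python
import math

def train_logistic(examples, labels, lr=0.5, epochs=200):
    """Fit a tiny logistic-regression model with plain gradient descent.

    examples: list of feature vectors (lists of floats)
    labels:   list of 0/1 relevance judgements
    Returns the learned weights (bias stored last).
    """
    n = len(examples[0])
    w = [0.0] * (n + 1)  # feature weights plus a bias term
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))  # predicted relevance probability
            err = p - y
            for i in range(n):              # gradient step on each weight
                w[i] -= lr * err * x[i]
            w[-1] -= lr * err               # gradient step on the bias
    return w

def predict(w, x):
    """Score a new example with the learned weights."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical interaction data: [query-term overlap, result position].
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]  # 1 = the user found the result relevant

w = train_logistic(X, y)
```

After training, the model scores unseen results by how similar they are to previously relevant ones; real systems of the kind Hauff describes replace this toy model with deep networks and far larger interaction logs.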
Examples include traffic simulations involving several smart systems. Oliehoek is also part of ELLIS Delft (see box). “Hauff and I have a shared goal: to support human-system interaction.” In simulations, existing smart systems can already enable a self-driving car to make a decision at a junction. But the algorithms that Oliehoek is developing are aimed at a larger scale. “This is not about data from one junction, but all the traffic data in a city. Thousands of variables. That’s what we aim to be able to manage.” Oliehoek is using reinforcement learning (see box) to teach smart systems to make a series of decisions, enabling them to think in more abstract terms. Oliehoek: “As a result, self-driving cars can deal with uncertainties, such as the effect of rain on road holding, or anticipate other smart systems. The point of the simulations is to test the fundamental principles.”

Balancing

Upscaling is the main challenge. More specifically: teaching systems to achieve a balance between exploiting existing knowledge (e.g. following the rule “stay in lane”) and exploring for new knowledge (trying out something new and learning from it). In complex simulations, a system also has to deal with the expectations of other systems. Oliehoek: “We’re developing different strategies to teach systems how to deal with trade-offs such as long-term versus short-term results.”

Fundamental research of this kind provides breakthroughs and proofs of concept for learning AI systems in complex interactive settings such as robots and self-driving cars. In this respect, Delft is way ahead of, for example, tech companies. Oliehoek: “Many tech companies also do AI research, but only a few focus on human-AI interaction. And when they do, it’s for computer games such as Go. Delft is strong in thinking about how AI systems can be used socially and ethically.”
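The balance Oliehoek describes between exploiting existing knowledge and exploring for new knowledge is a classic reinforcement-learning trade-off. One standard textbook strategy is epsilon-greedy action selection, sketched below on a multi-armed bandit; the reward values, epsilon and step counts are invented for illustration and are not taken from the INFLUENCE project:

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon explore a random action;
    otherwise exploit the action with the best estimated value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit

def run_bandit(true_means, epsilon=0.1, steps=5000, seed=0):
    """Multi-armed bandit loop with incremental value estimates."""
    rng = random.Random(seed)
    q = [0.0] * len(true_means)  # estimated value of each action
    n = [0] * len(true_means)    # how often each action was chosen
    for _ in range(steps):
        a = epsilon_greedy(q, epsilon, rng)
        reward = rng.gauss(true_means[a], 1.0)  # noisy reward signal
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]          # running-average update
    return q, n

# Hypothetical actions whose true (hidden) mean rewards differ.
q, n = run_bandit([0.2, 0.5, 0.8])
```

Over time the agent concentrates on the action with the highest estimated reward while still occasionally trying the others, which is the small-scale version of the “stay in lane versus try something new” balance; scaling such ideas to thousands of interacting variables is exactly the challenge the passage describes.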