Designing the sparks for a conversation
There are increasing concerns and debates around the transparency of AI systems – in other words, how much we, as ordinary users, can probe into and understand how these systems make decisions that affect our day-to-day lives. “But I would argue that transparency and explainability are not enough,” said Kars Alfrink, a PhD candidate at the Faculty of Industrial Design Engineering. “Let’s say a bank rejects your loan application: explaining the decision-making process provides more clarity but doesn’t change the outcome. There should be a way for you to appeal that decision.”
The building blocks of responsible design
Kars is an experienced designer, having worked on projects ranging from game design for social good to start-up software design, but it was his experience at a Singaporean start-up building an art recommendation system that exposed him to the challenges and opportunities of designing an AI system. “There are tools that allow designers to create high-fidelity software prototypes, and designers and software engineers have learnt to communicate with each other,” Kars reflected, “but working with data scientists on machine learning algorithms and the probabilistic nature of this technology, that’s something we still need to learn.”
His design experience has also cultivated his interest in and passion for ethical design, especially the ethical design of technologies used in cities. “I think it’s important for designers to get better at working with machine learning,” said Kars. “This connects with ethics and design politics, so to be a responsible designer, you also need a firm grasp of the technology. This is what I want to contribute to through my research.”
To spark a debate
Kars’s PhD research currently focuses on contestable AI systems: systems that, by design, encourage users and those affected to actively engage with them and question their decisions. To design contestability into smart systems, Kars first needs to understand how we — users and those affected by smart systems — interact with these systems and what we think about them. For this, he joined a team led by the Amsterdam Institute for Advanced Metropolitan Solutions looking at smart electric vehicle charge points on Amsterdam’s streets. “How fast cars can be charged at these charge points is determined by machine learning algorithms, which take into account factors like the capacity of the city’s electric grid and the amount of renewable energy available,” Kars explained, “but in the future we can also expect other factors to be considered – for example, drivers’ occupations, or there could be a reservation system that allows some users to charge first if there is a lot of demand.”
The team, consisting of members from the Amsterdam Institute for Advanced Metropolitan Solutions, ElaadNL, the Municipality of Amsterdam, The Incredible Machine and TU Delft, built a prototype of a transparent charging station, which aims not only to make the algorithmic decisions made by the charging station more transparent to users, but also to let users see a record of the charge speed and file a complaint when they think it is unfair. By interviewing users and others who interacted with the prototype, Kars was able to gain a deeper understanding of how people perceive and interact with smart technologies in the city.
This formed a solid foundation for Kars’s current work to build a design framework for contestable AI systems. “It’s not an A-to-Z through-line when it comes to developing and engineering AI systems,” Kars elaborated. “From what data to collect and what labels to include, to choosing the performance metrics to optimise models towards, human decisions are made, and these decisions ultimately impact the system’s behaviour. I am trying to understand how we can use design to open up these decisions to broader scrutiny, to encourage and facilitate user participation at each of these decision points.” Enabling users and other stakeholders to collectively discuss these decisions will boost trust in smart systems and, hopefully, help reduce their potential negative impact, which often falls hardest on under-represented groups in society.
It takes an orchestra to play a symphony
The development of AI algorithms and systems has been pioneered by computer scientists and software engineers. However, as the use of such smart systems becomes more widespread and their users more diverse, their design becomes crucial in determining how well they serve their intended purposes and what unwarranted consequences they bring. This is where AI technology development can really benefit from designers’ skills and methodologies. “To design transparent and contestable AI systems, we need people with different expertise to come together and do the challenging work of trying to understand each other and integrate their practices,” Kars reflected. “I’ve found myself often being the one who builds bridges and translates between the different disciplines. I think designers tend to have the facilitation skills to ask the right questions and communicate in simple terms that various audiences can understand.”
What about the roles of end-users and citizens? “When it comes to these large, systemic social issues, I think there’s a tendency to place the responsibility on individuals to change their behaviour while distracting from corporate and government responsibilities.” Kars has seen, for example, that local governments have huge power to change a city’s attitude towards the development and use of transparent, ethical AI technologies. “In addition, we, researchers and the people building these systems, do have a huge responsibility to think beyond the technical challenges and consider the impact of our work.”
This story is part of the Open AI research at TU Delft series. Also read the introduction and other stories in this series.