Epsilon-Lab
Epsilon News
19-04-2021
Learning to design robust natural language generation models for explainable AI.
On the 19th of April, scientists from several countries will meet with large technology companies such as Google and Orange in Nancy, France. These industry practitioners and senior researchers, together with all current members of the European research project NL4XAI (Natural Language Technologies for Explainable Artificial Intelligence), will pool their expertise in using natural language to generate explanations of decisions made by an AI system that non-expert users can understand.
16-03-2022
Best Paper Award at CHIIR 2022.
A recent paper featuring several Epsilon members, "Comprehensive Viewpoint Representations for a Deeper Understanding of User Interactions With Debated Topics" (Alisa Rieger, Tim Draws, Mariet Theune, Nava Tintarev; CHIIR 2022), has won the Best Paper Award at CHIIR 2022!
16-11-2021
Best Paper Awards at Hypertext and HCOMP 2021.
Two recent Epsilon collaborations, "This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias" (Alisa Rieger, Tim Draws, Mariet Theune, Nava Tintarev; Hypertext 2021) and "A Checklist to Combat Cognitive Biases in Crowdsourcing" (Tim Draws, Alisa Rieger, Oana Inel, Ujwal Gadiraju, Nava Tintarev; HCOMP 2021), have won Best Paper Awards at their respective conferences!
10-03-2021
Paper accepted at Persuasive 2021: “Disparate Impact Diminishes Consumer Trust Even for Advantaged Users” by Tim Draws, Zoltán Szlávik, Benjamin Timmermans, Nava Tintarev, Kush R. Varshney, Michael Hind.
Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms that often underlie such technology can act unfairly towards specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. We studied this effect by conducting a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers' financial decision-making. Our results show that disparate impact decreased consumers' trust in the system and made them less likely to use it. Moreover, we found that trust was affected to the same degree across consumer groups (i.e., advantaged and disadvantaged users), even though both groups recognized their respective levels of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.
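For readers unfamiliar with the metric, disparate impact is commonly quantified as the ratio of favorable-outcome rates between groups (the so-called "80% rule"). The sketch below is a minimal, generic illustration of that ratio, not the measurement instrument used in the paper, and the data are invented.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates:
    P(prediction = 1 | unprivileged) / P(prediction = 1 | privileged).
    A value well below 1.0 signals disparate impact against the unprivileged group."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unprivileged = y_pred[group == 0].mean()  # favorable rate, unprivileged group
    rate_privileged = y_pred[group == 1].mean()    # favorable rate, privileged group
    return rate_unprivileged / rate_privileged

# Invented example: group 1 receives favorable predictions three times as often.
predictions = [1, 1, 1, 0, 1, 0, 0, 0]
groups      = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact_ratio(predictions, groups))  # 0.25 / 0.75 = 0.33
```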
10-03-2021
Paper accepted at FAccT 2021: “Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces” by Mats Mulder, Oana Inel, Jasper Oosterman and Nava Tintarev.
Diversity in personalized news recommender systems is often defined as dissimilarity and operationalized based on topic diversity (e.g., corona versus farmers' strike). Diversity in news media, however, is understood as multiperspectivity (e.g., different opinions on corona measures), and is arguably a key responsibility of the press in a democratic society. While viewpoint diversity is often considered synonymous with source diversity in the communication science domain, in this paper we take a computational view. We operationalize the notion of framing, adopted from communication science, and apply it to re-rank topic-relevant recommendation lists, forming the basis of a novel viewpoint diversification method. Our offline evaluation indicates that the proposed method is capable of enhancing the viewpoint diversity of recommendation lists according to a diversity metric from the literature. In an online study with more than 2,000 users on Blendle, a Dutch news aggregator, we found that users are willing to consume viewpoint-diverse news recommendations. We also found that presentation characteristics significantly influence reading behaviour for diverse recommendations. These results suggest that future research on presentation aspects of recommendations can be just as important as novel viewpoint diversification methods to truly achieve multiperspectivity in online news environments.
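The paper builds on framing annotations; as a rough sketch of viewpoint diversification by re-ranking, the snippet below greedily trades topic relevance against coverage of frame labels, in the spirit of maximal marginal relevance. The frame labels, scores, and trade-off parameter are all hypothetical; this is not the algorithm from the paper.

```python
from collections import Counter

def diversify(candidates, k=10, lam=0.7):
    """Greedy re-ranking: trade off topic relevance against viewpoint (frame)
    coverage. Each candidate is a dict with a relevance 'score' and a
    categorical 'frame' label (e.g., an economic vs. a public-health framing)."""
    selected, frame_counts = [], Counter()
    pool = list(candidates)
    while pool and len(selected) < k:
        def gain(item):
            # Penalize frames already well represented in the selected list.
            redundancy = frame_counts[item["frame"]] / (len(selected) + 1)
            return lam * item["score"] - (1 - lam) * redundancy
        best = max(pool, key=gain)
        pool.remove(best)
        selected.append(best)
        frame_counts[best["frame"]] += 1
    return selected

articles = [
    {"id": "a1", "score": 0.95, "frame": "economic"},
    {"id": "a2", "score": 0.93, "frame": "economic"},
    {"id": "a3", "score": 0.80, "frame": "public-health"},
    {"id": "a4", "score": 0.75, "frame": "human-interest"},
]
print([a["id"] for a in diversify(articles, k=3)])  # ['a1', 'a3', 'a2']
```

On this toy list, the re-ranker promotes a public-health article past a second, more relevant economic one, which is the intended diversification effect.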
---
About Epsilon
The Epsilon group was founded in 2018 and helps shape the field of human-computer interaction in web information systems for decision support, such as recommender systems, with a focus on automatically generated explanations and explanation interfaces. Recommender systems analyze previous consumption habits to suggest what people might consume next; they propose and evaluate options while involving their human users in the decision-making process.
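To make "analyze previous consumption habits" concrete, here is a minimal item-based collaborative filtering sketch; the interaction matrix and cosine-similarity scoring are a textbook illustration with invented data, not the group's own system.

```python
import numpy as np

# Rows = users, columns = items; 1 = consumed (all values hypothetical).
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
])

def recommend(user, interactions, top_n=2):
    """Item-based collaborative filtering: score unseen items by their
    cosine similarity to the items the user already consumed."""
    norms = np.linalg.norm(interactions, axis=0)
    sim = interactions.T @ interactions / np.outer(norms, norms)
    scores = sim @ interactions[user]          # aggregate similarity to history
    scores[interactions[user] == 1] = -np.inf  # hide items already consumed
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user=0, interactions=interactions))  # [2 3]: item 2 ranks first
```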
As such algorithmic decision-making becomes prevalent across many sectors, it is important to help users understand why certain decisions are being proposed. Explanations are needed when there is a large knowledge gap between humans and systems, or when joint understanding is only implicit. This type of joint understanding is becoming increasingly important, for example, when news providers and social media systems such as Twitter and Facebook filter and rank the information that people see.
To link the mental models of systems and people, our work develops ways to supply users with a level of transparency and control that is meaningful and useful to them. We develop methods for generating and interpreting rich metadata that help bridge the gap between computational and human reasoning (e.g., for understanding subjective concepts such as diversity and credibility). We also develop a theoretical framework for generating better explanations (as both text and interactive explanation interfaces) that adapts to a user and their context. To better understand the conditions for explanation effectiveness, we study when to explain (e.g., surprising content, lean-in/lean-out use, risk, complexity) and what to adapt to (e.g., group dynamics, personal characteristics of a user).
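As a toy illustration of explanations that adapt to a user and their context, the sketch below varies the explanation text with a hypothetical expertise attribute; the attribute name and template wording are invented and do not represent the group's actual framework.

```python
def explain(item, user):
    """Minimal sketch of context-adaptive explanation text. The 'expertise'
    attribute and both templates are illustrative assumptions."""
    if user["expertise"] == "novice":
        return (f"We recommend '{item['title']}' because people with "
                f"similar reading habits enjoyed it.")
    # Experts get the underlying signal rather than a social-proof phrase.
    return (f"'{item['title']}' ranks highly: {item['sim']:.0%} profile "
            f"similarity and a viewpoint not yet in your reading list.")

item = {"title": "Debating the corona measures", "sim": 0.87}
print(explain(item, {"expertise": "novice"}))
print(explain(item, {"expertise": "expert"}))
```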