Theoretical research
The theoretical research of the Digital Ethics Centre is divided into three main themes:
1. Design for Values Methods
Digital technologies need to be designed and used responsibly, but how do we go about doing so? We believe that it is crucial to actively design for a range of values. But what are values, how do we identify and specify them, and how do we verify that a piece of technology embodies the relevant values? How do we deal with changing or conflicting values as part of the design process? Research on Conceptual Engineering, Meta-ethics and the Design for Values methodology helps to answer these questions, which underlie every applied project.
Key publications
- Aizenberg, E., & Van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society 7(2), 1-14.
In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
- Klenk, M. (2021). How Do Technological Artefacts Embody Moral Values? Philosophy & Technology 34, 525-544.
According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the circumstances. Based on an interdisciplinary perspective that invokes recent moral anthropology, I conceptualize affordances as response-dependent properties. That is, they depend on intrinsic as well as extrinsic properties of the artefact. We have reason to value these properties. Therefore, artefacts embody values and are not value-neutral, which has practical implications for the design of new technologies.
- Klenk, M. (2022). AI Design and Governance. The State of AI Ethics Report 6, Montreal AI Ethics Institute, 150-152.
"Another new addition to this report which builds on our push towards moving from principles to practice is the chapter on AI Design and Governance which has the goal of dissecting the entire ecosystem around AI and the AI lifecycle itself to gain a very deep understanding of the choices and decisions that lead to some of the ethical issues that arise in AI. It constitutes about one-sixth of the report and is definitely something that I would encourage you to read in its entirety to gain some new perspectives on how we can actualize Responsible AI."
- Van den Hoven, J., Vermaas, P., & van de Poel, I. (2015). Handbook of Ethics, Values, and Technological Design. Springer, Netherlands.
This handbook enumerates every aspect of incorporating moral and societal values into technology design, and reflects the fact that the latter has moved on from strict functionality to become sensitive to moral and social values such as sustainability and accountability. Aimed at a broad readership that includes ethicists, policy makers and designers themselves, it proffers a detailed survey of how technological, and institutional, design must now reflect awareness of ethical factors such as sustainability, human well-being, privacy, democracy and justice, inclusivity, trust, accountability, and responsibility (both social and environmental). Edited by a trio of highly experienced academic philosophers with a specialized interest in the ethical dimensions of technology and human creativity, this syncretic handbook collates an array of published material and offers a studied, practical introduction to the field. The volume addresses myriad aspects at the intersection of technology design and ethics, enabling designers to adopt a constructive approach in anticipating, preventing, and resolving societal and ethical issues affecting their work. It covers underlying theory; discrete values such as democracy, human well-being, sustainability and justice; and application domains themselves, which include architecture, bio- and nanotechnology, and military hardware. As the first exhaustive survey of a field whose importance is characterized by almost exponential growth, it represents a compelling addition to a formerly atomized literature.
Theme coordinators
Dr. Herman Veluwenkamp
2. Moral Values
It is crucial to design digital technologies in line with moral values, to understand their societal implications and to research the changes in our understanding of moral values due to technologies. How should we construe and realize values such as accountability, autonomy, democracy, fairness and privacy? We carry out philosophical research on different conceptions of these core values: when exactly is an instance of a digital technology fair? How should accountability be distributed when digital technologies are a central part of the decision-making process? How does technology change our notion of autonomy? Research on moral values helps to answer questions that are central to designing responsible digital technologies.
Key publications
- Klenk, M. (2022). (Online) manipulation: sometimes hidden, always careless. Review of Social Economy 80(2), 85-105.
Ever-increasing numbers of human interactions with intelligent software agents, online and offline, and their increasing ability to influence humans have prompted a surge in attention toward the concept of (online) manipulation. Several scholars have argued that manipulative influence is always hidden. But manipulation is sometimes overt, and when this is acknowledged the distinction between manipulation and other forms of social influence becomes problematic. Therefore, we need a better conceptualisation of manipulation that allows it to be overt and yet clearly distinct from related concepts of social influence. I argue that manipulation is careless influence, show how this account helps to alleviate the shortcomings of the hidden influence view of manipulation, and derive implications for digital ethics.
- Maas, J. (2022). Machine learning and power relations. AI & SOCIETY, 1-8.
There has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and misses conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of said power dynamics according to Cristiano Castelfranchi’s conceptual framework of power and argue that end-users depend on a system’s developers and users, because end-users rely on these systems to satisfy their goals, constituting a power asymmetry between developers, users and end-users. I ground my analysis in the neo-republican moral wrong of domination, drawing attention to legitimacy concerns of the power-dependence relation following from the current lack of accountability mechanisms. I illustrate my claims on the basis of a risk-prediction machine learning system, and propose institutional (external auditing) and project-specific solutions (increase contestability through design-for-values approaches) to mitigate domination.
- Marin, L. (2022). Enactive Principles for the Ethics of User Interactions on Social Media: How to Overcome Systematic Misunderstandings Through Shared Meaning-Making. Topoi, 1-13.
This paper proposes three principles for the ethical design of online social environments aiming to minimise the unintended harms caused by users while interacting online, specifically by enhancing the users’ awareness of the moral load of their interactions. Such principles would need to account for the strong mediation of the digital environment and the particular nature of user interactions: disembodied, asynchronous, and ambiguous intent about the target audience. I argue that, by contrast to face-to-face interactions, additional factors make it more difficult for users to exercise moral sensitivity in an online environment. An ethics for social media user interactions is ultimately an ethics of human relations mediated by a particular environment; hence I look towards an enactive-inspired ethics in formulating principles for human interactions online that enhance, or at least do not hinder, a user’s moral sensitivity. This enactive take on social media ethics supplements classical moral frameworks by asking us to focus on the relations established through the interactions and the environment created by those interactions.
- Santoni de Sio, F., & Van den Hoven, J. (2018). Meaningful Human Control over Autonomous Systems: A Philosophical Account. Frontiers in Robotics and AI, 5:15.
Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these concerns, the principle of “meaningful human control” has been introduced in the legal–political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what “meaningful human control” exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in ethics of robotics and AI, in the last part of the paper, we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.
- Santoni de Sio, F., & Mecacci, G. (2021). Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address them. Philosophy & Technology 34, 1057-1084.
The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral and public accountability, and active responsibility) caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also happen with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and non-satisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to address the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.
- Mollen, J. (under review). Moving out of the Human Vivarium: Live-in Laboratories and the Right to Withdraw.
- Cocking, D., & Van den Hoven, J. (2018). Evil Online. Wiley-Blackwell, Hoboken, New Jersey.
We now live in an era defined by the ubiquity of the internet. From our everyday engagement with social media to trolls on forums and the emergence of the dark web, the internet is a space characterized by unreality, isolation, anonymity, objectification, and rampant self-obsession—the perfect breeding ground for new, unprecedented manifestations of evil. Evil Online is the first comprehensive analysis of evil and moral character in relation to our increasingly online lives.
Chapters consider traditional ideas around the phenomenon of evil in moral philosophy and explore how the dawn of the internet has presented unprecedented challenges to older theoretical approaches. Cocking and Van den Hoven propose that a growing sense of moral confusion—moral fog—pushes otherwise ordinary, normal people toward evildoing, and that values basic to moral life such as autonomy, intimacy, trust, and privacy are put at risk by online platforms and new technologies. This new theory of evildoing offers fresh insight into the moral character of the individual, and opens the way for a burgeoning new area of social thought.
A comprehensive analysis of an emerging and disturbing social phenomenon, Evil Online examines the morally troubling aspects of the internet in our society. Written not only for academics in the fields of philosophy, psychology, information science, and social science, Evil Online is accessible and compelling reading for anyone interested in understanding the emergence of evil in our digitally dominated world.
Theme coordinators
3. Epistemic Values
Digital technologies provide us with large amounts of new information. How should we interact with this wealth of information? When can we rely on these technologies and under which conditions do we acquire knowledge while using them? How can we make them more transparent and explainable? What information do users need to contest decisions based on automated systems? Research on epistemic values looks at the knowledge-related questions that digital technologies give rise to. Our research helps to set standards for the information that digital technologies provide to human users, and also tells us what information is needed to responsibly use, evaluate, or overrule such technologies.
Key publications
- Buijsman, S. (under review). Defining explanation and explanatory depth in XAI.
- Buijsman, S., & Veluwenkamp, H. (2022). Spotting When Algorithms Are Wrong. Minds & Machines, 1-22.
Users of sociotechnical systems often have no way to independently verify whether the system output which they use to make decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions and violations of the norm of practical reasoning. To prevent this from occurring we suggest the implementation of defeaters: information that a system is unreliable in a specific case (undercutting defeat) or independent information that the output is wrong (rebutting defeat). Practically, we suggest designing defeaters based on the different ways in which a system might produce erroneous outputs, and analyse this suggestion with a case study of the risk classification algorithm used by the Dutch tax agency.
- Durán, J. (2021). Dissecting Scientific Explanation in AI (sXAI): A Case for Medicine and Healthcare. Artificial Intelligence 297, 103498.
Explanatory AI (XAI) is on the rise, gaining enormous traction with the computational community, policymakers, and philosophers alike. This article contributes to this debate by first distinguishing scientific XAI (sXAI) from other forms of XAI. It further advances the structure for bona fide sXAI, while remaining neutral regarding preferences for theories of explanations. Three core components are under study, namely, i) the structure for bona fide sXAI, consisting in elucidating the explanans, the explanandum, and the explanatory relation for sXAI; ii) the pragmatics of explanation, which includes a discussion of the role of multi-agents receiving an explanation and the context within which the explanation is given; and iii) a discussion on Meaningful Human Explanation, an umbrella concept for different metrics required for measuring the explanatory power of explanations and the involvement of human agents in sXAI. The kind of AI systems of interest in this article are those utilized in medicine and the healthcare system. The article also critically addresses current philosophical and computational approaches to XAI. Amongst the main objections, it argues that there has been a long-standing interpretation of classifications as explanation, when these should be kept separate.
- Durán, J., & Jongsma, K. (2021). Who Is Afraid of Black-Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI. Journal of Medical Ethics 47, 329-335.
The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency and supports the reliability of algorithms, justifies the belief that results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms, even when the results are trustworthy. Having justified knowledge from reliable indicators is, therefore, necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to find out what is a desirable action. Thus understood, we argue that such challenges should not dismiss the use of black box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informatics and data scientists, black box algorithms can contribute to improving medical care.
- Marin, L. (2021). Sharing (Mis)information on Social Networking Sites. An Exploration of the Norms for Distributing Content Authored by Others. Ethics and Information Technology, 23(3), 363-372.
This article explores the norms that govern regular users’ acts of sharing content on social networking sites. Many debates on how to counteract misinformation on social networking sites focus on the epistemic norms of testimony, implicitly assuming that the users’ acts of sharing should fall under the same norms as those for posting original content. I challenge this assumption by proposing a non-epistemic interpretation of (mis)information sharing on social networking sites, which I construe as infrastructures for forms of life found online. Misinformation sharing belongs more in the realm of rumour spreading and gossiping than in information-giving language games. However, the norms for sharing cannot be fixed in advance, as these emerge at the interaction between the platforms’ explicit rules, local norms established by user practices, and a meta-norm of sociality. This unpredictability does not leave us with a normative void, as an important user responsibility still remains, namely that of making the context of the sharing gesture explicit. If users clarify how their gestures of sharing are meant to be interpreted by others, they will implicitly assume responsibility for possible misunderstandings based on omissions, and the harms of shared misinformation can be diminished.
- Pozzi, G., & Durán, J. M. (under review). Informativeness and Epistemic Injustice in Explanatory Medical Machine Learning.
- Pozzi, G. (under review). Automated Opioid Risk Scores: A Case for Machine Learning-Induced Epistemic Injustice in Healthcare.