What does a voice assistant do with our behaviour?
Olya Kudina is an assistant professor at TU Delft and conducts research into the ethical implications of AI. She explains what a voice assistant does with our behaviour.
Voice assistants (e.g. Apple’s Siri or Amazon’s Alexa) allow people to interact with technologies simply by speaking instead of typing or swiping. For instance, people use them to set an alarm, ask about the weather or order food delivery. But beyond these new opportunities, voice assistants also change the way we interact with each other.

For one, users worry that the companies behind these devices record their private conversations, which leads people to find creative ways of using voice assistants, or to reject them altogether, in order to preserve some level of privacy. Moreover, voice assistants cannot properly process human speech with its ambiguity, jokes and sarcasm; instead, they invite users to speak in short, clear, command-like sentences. This raises the worry that digital voice assistants (DVAs) flatten human interaction by promoting simpler ways of speaking, thereby undermining the values of communication and mutual understanding. People are also concerned about the gender stereotyping that voice assistants promote: these technologies predominantly speak in female voices while being presented as digital servants that can never refuse their users’ commands. In this way, DVAs challenge the values of diversity and inclusion that are essential in democratic societies.

Thus, while the use of voice assistants grants people new opportunities (e.g. greater inclusion in the digital society), it also creates new ethical risks. Mitigating them requires value-sensitive design practices from engineers, clear implementation guidelines from policy-makers and critical adoption by users.