Assumptions of AI hiring and their impact on job seekers

News - 20 May 2024 - Webredactie

Artificial intelligence (AI) has become increasingly common in job recruitment, offering automated online interviews and skill-assessment games. These tools promise efficiency and bias reduction but raise concerns about their validity, reliability and ethical implications. Research by Evgeni Aizenberg, Matthew J. Dennis and Jeroen van den Hoven sheds light on the assumptions underpinning AI hiring assessments.

A study by Evgeni Aizenberg (University of Twente & formerly TU Delft), Matthew J. Dennis (TU Eindhoven), and Jeroen van den Hoven (TU Delft) has examined the assumptions underlying AI hiring assessments and their significant impact on job seekers. The researchers highlight a crucial aspect often overlooked in the discourse: job seekers' autonomy over self-representation.

Watch the animation video about this research:

Assumptions of AI hiring and their impact on job seekers | TU Delft


AI hiring assessments rely on the assumption that a person’s skill has a meaning that is stable over time and across different contexts. Furthermore, they assume that this skill can be measured and expressed as a number. The research challenges these assumptions, arguing that many skills are dynamic and context-dependent rather than static and measurable.

For example, the way a person expresses teamwork or creativity varies across situations. The meanings of such skills are a product of a person’s interactions with other people in a specific work setting and socio-cultural context. To assess such skills, the assessment process needs to offer space for the job seeker and the employer to explore these contextual meanings, for example through conversation.

In many contexts, therefore, the reductionist approach of algorithmic assessments falls short of capturing the subtle complexities of individual candidates. AI algorithms aim to streamline recruitment by standardizing evaluation criteria, but in doing so they strip away candidates' ability to narrate their own stories: their identity, experiences, and aspirations. This erodes the autonomy and undermines the dignity of job seekers.

This research also has important implications for other domains where AI algorithms are used to profile people, such as policing and welfare. When the assumptions behind profiling algorithms are incompatible with the contextual meanings of the personal attributes they attempt to measure, they undermine people’s autonomy over self-representation. The researchers argue that this issue deserves as much attention within AI ethics as the more prominent topics of non-discrimination (fairness), explainability, and privacy.

The study suggests bringing together job seekers, hiring managers, and other professionals and researchers to reflect collectively on assessment processes suited to specific job contexts, and it cautions against presuming that AI is necessary for assessment at all. Prioritizing job seekers' autonomy and dignity can create a more inclusive and empathetic hiring environment, in which technologies serve as tools rather than judges of human potential.