Is AI Biased? The Hiring Algorithm Debate

Introduction

Increasingly, companies are using artificial intelligence (AI) recruiting tools to enhance the speed and efficiency of the recruiting process. Especially in large companies such as Vodafone, KPMG, BASF, and Unilever, the use of AI tools to handle large numbers of incoming applications is already well established [1, 2]. However, AI’s application to recruitment is the subject of controversy in public and academic discourse because AI-based decision-making is closely tied to ethical norms and values. One line of criticism holds that outsourcing important decisions affecting people’s lives to AI is problematic, all the more so when mistakes are made. One of the best-known real-world examples is the case of Amazon in 2018, in which an AI software under testing systematically discriminated against women in the hiring process [3]. Various researchers have therefore warned of the significant risk that these tools’ unknown flaws, such as algorithmic bias [4], pose to organizations implementing new forms of AI in their human resources (HR) processes. Similarly, several philosophers [e.g., 5] have condemned the use of AI in recruitment, denying that AI could possess the social and empathetic skills needed in the selection process.

Still, many providers of AI recruiting tools advertise their products by claiming that they reduce bias and increase fairness in recruitment processes. In addition, widely held assumptions about the objectivity of learning algorithms contribute to a rather positive image of AI-aided recruitment among practitioners [e.g., 6, 7]. The contrast between this positive image and the ethical concerns of AI recruitment’s critics calls for a normative assessment, which is essential for a more nuanced view of the ethical status of AI recruitment.

This paper aims to fill this gap by providing an ethical analysis of AI recruiting that answers the question of whether AI recruiting should be considered (un)ethical from a human rights perspective, and if so, for what reason. We chose this perspective because human rights are internationally accepted as a normative criterion for corporate actions and are increasingly integrated into soft law for business [8-10]. Human rights are overarching and comprehensive, yet also aim to be sensitive to cultural nuance [11]. Furthermore, as a legal framework, human rights carry significant implications for the moral underpinnings of business [12, 13].

The remainder of the paper is organized as follows: Sect. 2 clarifies the concept of AI recruitment. In Sect. 3, we outline the normative foundation of our approach, which is based on human rights discourse, and explore the implications of human rights for corporations and AI recruiting. In Sect. 4, which is purely analytical, we discuss whether AI inherently conflicts with the key principles of validity, human autonomy, nondiscrimination, privacy, and transparency, which represent the human rights relevant to the AI-based recruitment context. Lastly, we discuss the contingent limitations of the use of AI in hiring. Here, we draw on existing legal and ethical implications to delineate organizations’ responsibility to enforce and realize human rights standards in the context of AI recruiting, before outlining our concluding remarks.

The contributions of our article are threefold. First, we address the need for domain-specific work in the field of AI ethics [14-16]. In examining the ethicality of AI recruiting, we go beyond general AI ethics guidelines that present overarching normative principles [e.g., 15, 17] and study in detail the ethical implications of AI usage in this specific business function. Second, our paper expands the theoretical research in the field of AI recruiting. Though various extant articles have a practical [e.g., 18], technical [e.g., 19], or empirical [e.g., 20, 21] focus, very few refer to ethical theories [e.g., 22] in this context (see review article [23]). To the best of our knowledge, our approach is one of the first to normatively assess whether the use of AI in the recruitment context is (un)ethical per se. In analyzing the use of AI in hiring from a human rights perspective, our paper overlaps with the work of Yam and Skorburg [11]. However, while those authors evaluate whether various algorithmic impact assessments sufficiently address human rights to close the algorithmic accountability gap, we examine more fundamentally whether AI hiring practices inherently conflict with human rights. Third, our article provides implications for practice. By defining the ethical responsibilities of organizations, we aim to guide organizations in deploying AI in the recruiting process and promoting ethical hiring.