Illustration by Mia Clement
More than ever before, businesses are automating recruitment with the help of AI-enhanced software. Some AI tools ‘read’ résumés, taking into account the words used, while others ‘watch’ video interviews, analysing the voice and gestures of applicants. The AI is trained on past applications to detect writing, speech, and behaviour patterns, which are mapped onto personality traits. It then scans new applications for the same patterns, and the results are used to predict whether an applicant will succeed in a specific role. The tools have proven extremely popular, as they save time and appear to solve the problem of human biases arising during interviews. A 2021 Ceridian survey of 2,000 executives worldwide found that 42% were already using AI in recruitment and a further 46% were planning to do so.
However, it is essential to remember that AI recruitment software is not an enhanced version of its human counterpart. It is not a super-capable HR manager who understands the meaning and purposes behind abstract hiring criteria. The software is simply a pattern-finder: it makes statistical links between features of past applications and desirable personal qualities, and then looks for the same features in new applicants. This kind of analysis may or may not be helpful for recruiters. Psychologists are still debating whether there is any link between behavioural cues and personality traits, and the Electronic Privacy Information Center has formally complained about the ‘unprovable and not replicable’ results produced by some recruitment software. At the very least, real cases continue to show that AI-assisted recruitment can lead to dangerous conjecture and the perpetuation of biases, as it rests on the questionable premise that meaningful conclusions can be drawn from superficial behavioural cues. The technology has also raised ethical concerns about consent and privacy.
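To make the ‘pattern-finder’ idea concrete, here is a minimal sketch in Python using the scikit-learn library. Everything in it is invented for illustration: a toy model learns which words in past applications coincided with a hiring decision, then scores a new application on those same surface features.

```python
# A toy 'pattern-finder': learn word-outcome correlations from past
# applications, then score a new applicant on the same surface features.
# All résumés and hiring labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

past_resumes = [
    "executed product roadmap and led the sales team",
    "captured three key accounts, captained debating society",
    "organised a coding network and taught weekend workshops",
    "coordinated the chess club and community fundraisers",
]
hired = [1, 1, 0, 0]  # past human decisions, biases and all

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(past_resumes)
model = LogisticRegression().fit(features, hired)

new_application = ["led and executed a community outreach project"]
score = model.predict_proba(vectoriser.transform(new_application))[0, 1]
print(f"Predicted probability of 'success': {score:.2f}")
```

Nothing in the sketch understands what any of the words mean. The model simply records which words co-occurred with past hires, which is exactly why its judgments can only ever be as good as the human decisions it was trained on.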
One of the most common arguments against using AI in recruitment is that it perpetuates bias. The AI is trained on data, including past responses to applications, meaning it can retain the very human biases that it is intended to eliminate. In 2018, Amazon stopped using an AI tool to scan résumés after finding that it discriminated against women. The software had been trained on applications from the previous decade, most of which had come from men, so it downgraded résumés containing references to “women’s” activities and favoured those including verbs used more commonly by men, such as “executed” and “captured”. Similarly, an experiment by German Public Broadcasting found that video-analysis software can produce different results based on candidates’ appearance, such as their hairstyle and whether they wear glasses. What is more, candidates might be judged on their video background and the brightness of the recording.
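The mechanism behind the Amazon case can be reproduced in a toy model of the same kind as the sketch above. If most past hires were men, words that merely correlate with gender absorb predictive weight, and reading the learned weights back out makes this visible. Again, every résumé and label here is invented.

```python
# Skewed history becomes skewed weights: a toy model trained on
# male-dominated hiring data, with its learned word weights read back out.
# All data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed platform migration and captured new clients",   # hired
    "executed product launch, led infrastructure rebuild",    # hired
    "chaired women's engineering network, mentored interns",  # rejected
    "coached women's robotics team, organised hackathons",    # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(resumes), hired)

# List the vocabulary by learned weight, most penalised words first.
for word, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{word:>14}  {weight:+.3f}")
```

In this toy run, ‘women’ ends up with a negative weight and ‘executed’ a positive one, not because of anything the words mean, but purely because of who happened to be hired in the training data.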
Bias can also result from the lack of nuance in machine-learning technology, which can lead to technical errors or excessive rigidity. In one recent experiment, for example, two algorithms intended to judge English proficiency from intonation rated an applicant who was speaking German as ‘proficient’ in English. AI also cannot appreciate variations in expression between different people, situations and cultures, which leads to superficial categorisations: the AI might label a frown as a marker of aggression, while a human interviewer might more accurately interpret it as a sign of nervousness or deep thought. HireVue, a leading provider of AI recruitment software in the US, has acknowledged this problem and discontinued its visual-analysis products, focusing instead on speech patterns. It seems likely, however, that similar problems will arise in acoustic analysis.
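At bottom, the ‘frown’ problem is one of context-free mapping: at some stage a system like this must associate a detected cue with a trait label, however the cue arose. The mapping below is entirely hypothetical, not any vendor’s real logic; it simply makes the superficiality of such categorisation visible.

```python
# A deliberately naive cue-to-trait lookup, invented for illustration.
# The same cue always yields the same label, regardless of the person,
# the situation or the culture in which it occurs.
CUE_TO_TRAIT = {
    "frown": "aggressive",
    "smile": "agreeable",
    "averted_gaze": "evasive",
}

def label_candidate(observed_cues):
    """Assign trait labels from surface cues alone, with no context."""
    return [CUE_TO_TRAIT.get(cue, "unclassified") for cue in observed_cues]

# A candidate frowning in deep thought is labelled exactly like one
# frowning in anger:
print(label_candidate(["frown", "averted_gaze"]))  # ['aggressive', 'evasive']
```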
In terms of bias, then, it seems that AI recruitment tools do not eliminate bias; often they simply accelerate discriminatory hiring practices. It is unclear whether this problem can be solved without removing the AI analysis altogether. Trying to avoid bias by designing algorithms to disregard features of applications such as appearance and intonation would, paradoxically, deprive the algorithm of much of the material on which it bases its judgments, thus defeating the purpose of the software. In fact, since the AI thrives on exactly those superficial differences which lead to bias, it might be more useful to HR departments as an identifier of biases to avoid.
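One concrete form this ‘bias identifier’ role could take is an adverse-impact audit of past hiring decisions, for example using the ‘four-fifths’ rule applied by US regulators, under which a group whose selection rate falls below 80% of the highest group’s rate is flagged for review. The sketch below runs such a check on invented records.

```python
# An adverse-impact ('four-fifths' rule) check over past hiring decisions.
# The records below are invented for illustration.
from collections import defaultdict

past_decisions = [
    {"group": "men", "hired": True},
    {"group": "men", "hired": True},
    {"group": "men", "hired": False},
    {"group": "women", "hired": True},
    {"group": "women", "hired": False},
    {"group": "women", "hired": False},
]

applicants, hires = defaultdict(int), defaultdict(int)
for record in past_decisions:
    applicants[record["group"]] += 1
    hires[record["group"]] += record["hired"]

# Compare each group's selection rate with the best-treated group's rate.
rates = {group: hires[group] / applicants[group] for group in applicants}
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    verdict = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, ratio {ratio:.2f} -> {verdict}")
```

Run over the same historical data on which hiring models are currently trained, a check like this turns the pattern-finding around: instead of copying past preferences, it reports on them.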
Another major concern raised by AI hiring tools is the threat to privacy. While AI tools can reveal a lot about a candidate, this does not mean that an employer should have automatic and unrestricted access to such information. This is particularly true since such AI tools are ‘black boxes’, using algorithmic decision-making processes that humans cannot understand. This feature is unavoidable; the AI’s lack of transparency is caused by the very computational complexity which allows it to produce its results. It has therefore been suggested that candidates should be told which software is being used on their application and should have the choice to opt out or to select which results are shared with their prospective employer. Such measures would give applicants more control over their applications. On the other hand, they might reintroduce bias: candidates could choose to share only particularly flattering assessment scores, or might be penalised for withholding results.
If AI recruitment tools are to be ethical, they need to be used in a highly controlled way, and preferably by specialist data scientists who understand which results are important and how they have been derived. But the question remains: are faster decisions, based on judgments of such questionable value, worth the time and effort needed to make AI tools ethical? Or will companies simply be shifting time and resources rather than saving them? Instead of being used for recruitment itself, a more productive use for this kind of software might be evaluating human hiring practices. Identifying patterns in a firm’s successful past applications – the very data on which AIs are currently trained – could show whether that firm’s hiring preferences are discriminatory. This would facilitate fairer competition between candidates and help employers, through their human HR managers, find the right person for the job.