An overview of using AI in recruitment and the ethical issues surrounding it.

Introduction

Artificial intelligence was initially introduced into recruitment to streamline the process, make scanning hundreds of CVs more efficient, and reduce recruiter bias (Harmon, 2022). Having AI scan applicants’ CVs and shortlist candidates before any human has looked at them saves a great deal of time for large companies, which often receive hundreds of applications for positions aimed at university graduates.

Adopting AI also aims to remove bias by taking humans, who may hold unconscious biases against applicants, out of the process; for example, AI interviews have been proposed as a way to reduce bias against women by removing the unconscious preference for extroversion and confidence in interviews (Chamorro-Premuzic, 2019).

Recently, however, controversy has arisen over whether AI recruitment tools actually remove bias from the process or instead create new biases of their own (Chamorro-Premuzic, 2019; Rossi, 2019).

The media has picked up on this, suggesting that the use of AI tools in recruitment may fall foul of equality and anti-discrimination laws (Milmo, 2022).

Facial bias

One of the most publicised examples of AI in recruitment going wrong was HireVue, an online video interview platform that collected job applicants’ biometric data from online interviews without their consent and analysed it as part of an employability score (EPIC, 2022).

Facial expressions reportedly accounted for about 29% of a job candidate’s employability score (Shaak, 2022), disadvantaging many individuals, including those with disabilities, such as autism, that affect how they express themselves (‘Job hunting for neurodivergent people: ‘AI recruitment means I’ve got zero chance’ - BBC Three’, 2022).

Western AI interviewing technology may not account for cultural differences in facial expressions, and recent research suggests that the meaning behind facial expressions is not universal (Gendron et al., 2018). This creates a risk that any expression analysis in AI interviews will be biased against candidates from cultures where the meaning of an expression does not match the meaning assigned in the algorithm.

Concerns over how HireVue’s system made its decisions would not necessarily be eased by greater transparency. Even if the system were made public, the complexity and opacity of how its algorithms reach decisions mean the disclosure would be of little help to anyone unable to parse technical, algorithmic jargon (Maurer, 2021).

HireVue has since withdrawn the facial analysis aspects of its software (Maurer, 2021). The US class action arising from HireVue’s biometric data collection and analysis served as a reminder that AI in recruitment remains subject to anti-discrimination laws protecting characteristics such as age, race, gender and sexuality. Employers are therefore encouraged to engage an expert, although the authors offer no guidance on who that expert might be, to determine whether an AI recruitment system is right for their company (Wilner & Saba, 2022).

Since HireVue’s practices came to light, press coverage of AI in recruitment has remained negative, exposing how data was collected and used without consent and how such tools may also break equality laws (EPIC, 2022; Maurer, 2021; Shaak, 2022; Wilner & Saba, 2022). HireVue’s very public failings have changed the face of automated recruitment and may make employers warier about whether and how such systems should be used. As people become more aware of the role AI plays within recruitment, there are growing calls to investigate the fairness of these systems.

In response to this growing awareness, the UK data watchdog has launched an investigation into racial bias in AI recruitment systems that may cause people from ethnic minorities to miss out on job opportunities (Milmo, 2022). The Information Commissioner’s Office (ICO) is expected to look at the impact these systems have on people who are not included in the testing or development of the software (Milmo, 2022). This follows a trial run of the ICO in New Zealand, showing international concern about bias in AI systems both in recruitment and when AI is used to determine financial matters such as eligibility for mortgages (Farrell, 2022; Milmo, 2022).

What information is available to both employers and job candidates on bias within AI recruitment systems?

When searching for information about AI in recruitment, it is easy to find evidence that, while these systems aim to remove bias from the recruitment process, they can in fact inherit the biases of the very people who program them (Rossi, 2019). In the discussions of algorithmic fairness that are now central to how AI vendors market their products, people with disabilities are often omitted, and the nuanced, variable nature of disability makes mathematical algorithmic fairness very hard to achieve (Tilmes, 2022). Fair machine learning methods mitigate bias in ways that flatten variation in the data; given the fluidity of disability, fairness measures such as more diverse datasets may not be enough to make a system as accessible to disabled individuals as it is to everyone else (Tilmes, 2022).
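
To make the idea of ‘flattening’ concrete, the sketch below shows one common fair-ML mitigation: reweighting training examples so that each group carries equal total weight. The data, group labels and weights are purely hypothetical illustrations, not any vendor’s actual method, and the single fixed ‘disabled’ category it relies on is exactly what sits uneasily with the fluidity Tilmes describes.

```python
# A minimal, hypothetical sketch of group reweighting, one common "fair ML"
# mitigation. It is not HireVue's or any vendor's actual method; the group
# labels and data below are invented for illustration only.
from collections import Counter

def group_reweight(groups):
    """Return one weight per example, inversely proportional to group size,
    so every group contributes the same total weight to training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Ten hypothetical applicants; a single fixed "disabled" label cannot capture
# how varied and fluid disability actually is, which is Tilmes' point.
groups = ["non-disabled"] * 8 + ["disabled"] * 2
print(group_reweight(groups))
# Non-disabled examples each get weight 0.625, disabled examples 2.5,
# equalising the groups' influence without reflecting variation within them.
```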

Synthetic data (AI-generated data) could be a way of balancing datasets; however, synthetic data generated to enhance an existing dataset may itself be biased, depending on the algorithms used to create it (‘The future is fake: The rise of synthetic data in training AI models’, 2022). Training AI on smaller but richer ‘deep-thick’ data rather than bulk data may produce systems that are more resilient to bias-related risks. However, that kind of in-depth learning mirrors the slow way humans learn and is therefore not currently a feasible way to train AI quickly enough for practical use.
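
As a rough illustration of both the promise and the caveat, the sketch below balances a skewed dataset with ‘synthetic’ rows that are simply jittered copies of existing minority-group rows. The feature names, groups and jitter are hypothetical stand-ins for a real generative approach; because every synthetic row is derived from the existing data, any bias in that data can carry straight over.

```python
# A hypothetical sketch of balancing a training set with "synthetic" rows.
# Here a synthetic row is just a jittered copy of a real minority-group row,
# a crude stand-in for a proper generative model. Because every synthetic row
# is derived from the existing data, any skew in that data carries over.
import random

def jittered_copy(row, rng, jitter=0.05):
    """Nudge numeric fields slightly; keep everything else unchanged."""
    return {k: v * (1 + rng.uniform(-jitter, jitter)) if isinstance(v, float) else v
            for k, v in row.items()}

def balance_with_synthetic(rows, group_key, seed=0):
    """Top up every group with synthetic rows until all groups are the same size."""
    rng = random.Random(seed)
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_key], []).append(row)
    target = max(len(members) for members in by_group.values())
    balanced = list(rows)
    for members in by_group.values():
        for _ in range(target - len(members)):
            balanced.append(jittered_copy(rng.choice(members), rng))
    return balanced

# Hypothetical, heavily skewed CV data: 8 male rows, 2 female rows.
rows = ([{"years_experience": 5.0, "gender": "male"}] * 8
        + [{"years_experience": 4.0, "gender": "female"}] * 2)
print(len(balance_with_synthetic(rows, "gender")))  # 16 rows, 8 per gender
```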

Additionally, it is still unclear how well these systems can generalise from smaller datasets (Heaven, 2019). AI recruitment tools can also show gender bias, and such systems can even exacerbate the biases they are designed to remove. For example, Amazon’s highly publicised sexist recruitment system was scrapped because it had been trained on far more data from male applicants than from female ones (Huet, 2022; Chamorro-Premuzic, 2019; Rossi, 2019; ‘Amazon scrapped ‘sexist AI’ tool’, 2018).

To mitigate bias, researchers have suggested that audits of AI recruitment tools may go some way towards ensuring the tools remain free from the kinds of bias that would break equality and anti-discrimination laws, and that companies use them responsibly to assist their hiring processes (Kazim et al., 2021). The researchers do not, however, say who should be responsible for carrying out such audits.
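
As an illustration of the kind of check such an audit might run, the sketch below applies the ‘four-fifths’ adverse-impact test, flagging any group whose shortlisting rate falls below 80% of the best-shortlisted group’s rate. The group names and outcomes are hypothetical, and this is not Kazim et al.’s actual audit procedure; a real audit would cover far more than one statistic.

```python
# A hypothetical sketch of one statistic an audit of an AI screening tool
# might compute: the "four-fifths" adverse-impact check on shortlisting rates.
# The groups and outcomes below are invented for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 1 (shortlisted) / 0 (rejected)."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose rate is below `threshold` of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: {"rate": round(rate, 3),
                    "ratio_to_best": round(rate / best, 3),
                    "flagged": rate / best < threshold}
            for group, rate in rates.items()}

# Hypothetical screening outcomes for two applicant groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # 5 of 8 shortlisted (62.5%)
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 2 of 8 shortlisted (25.0%)
}
print(four_fifths_check(outcomes))
# group_b's ratio to the best-performing group is 0.4 (< 0.8), so it is flagged.
```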

Conclusion

The risks of using AI in recruitment stem from the systems being biased against minority groups and exacerbating problems for those who already struggle to gain employment (Huet, 2022; ‘Job hunting for neurodivergent people: ‘AI recruitment means I’ve got zero chance’ - BBC Three’, 2022; Wilner & Saba, 2022). Any system used by employers that discriminates against people with protected characteristics risks falling foul of anti-discrimination legislation such as the Equality Act (2010) and exposes the employer to legal action if discrimination is found in the hiring process.

Therefore, until steps can be taken to effectively mitigate the bias within AI recruitment tools, it is advisable to continue using traditional hiring processes in which CVs and interviews are assessed by humans.

It is generally good practice to ensure that any new recruitment method goes through an approved testing phase with appropriate oversight, as well as a full Equality Impact Assessment consultation process.

Read more
  • Amazon scrapped ‘sexist AI’ tool. BBC News. (2018). Retrieved 25 July 2022, from https://www.bbc.co.uk/news/technology-45809919.
  • Farrell, N. (2022). Brit data watchdog worried that AI is racist. fudzilla.com. Retrieved 14 July 2022, from https://www.fudzilla.com/news/55149-brit-data-watchdog-worried-that-ai-is-racist.
  • Gendron, M., Crivelli, C., & Barrett, L. (2018). Universality Reconsidered: Diversity in Making Meaning of Facial Expressions. Current Directions in Psychological Science, 27(4), 211-219. https://doi.org/10.1177/0963721417746794
  • Harmon, A. (2022). How Companies Can Use AI to Reduce Bias in Hiring. Recruiter.com. Retrieved 14 July 2022, from https://www.recruiter.com/recruiting/how-companies-can-use-ai-to-reduce-bias-in-hiring/.
  • Heaven, D. (2019). Why deep-learning AIs are so easy to fool. Nature.com. Retrieved 7 September 2022, from https://www.nature.com/articles/d41586-019-03013-5.
  • Huet, N. (2022). Gender bias in recruitment: How AI hiring tools are hindering women’s careers. euronews.com. Retrieved 13 July 2022, from https://www.euronews.com/next/2022/03/08/gender-bias-in-recruitment-how-ai-hiring-tools-are-hindering-women-s-careers.
  • In re HireVue. EPIC - Electronic Privacy Information Center. (2022). Retrieved 5 July 2022, from https://epic.org/documents/in-re-hirevue/.
  • Job hunting for neurodivergent people: ‘AI recruitment means I’ve got zero chance’. BBC Three. (2022). Retrieved 6 July 2022, from https://www.bbc.co.uk/bbcthree/article/c62bcab6-db6f-4026-90bb-7f508705a65b.
  • Kazim, E., Koshiyama, A., Hilliard, A., & Polle, R. (2021). Systematising Audit in Algorithmic Recruitment. Journal of Intelligence, 9(3), 46. https://doi.org/10.3390/jintelligence9030046
  • Maurer, R. (2021). HireVue Discontinues Facial Analysis Screening. SHRM. Retrieved 5 July 2022, from https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/hirevue-discontinues-facial-analysis-screening.aspx.
  • Milmo, D. (2022). UK data watchdog investigates whether AI systems show racial bias. The Guardian. Retrieved 14 July 2022, from https://amp.theguardian.com/technology/2022/jul/14/uk-data-watchdog-investigates-whether-ai-systems-show-racial-bias.
  • Rossi, F. (2019). Why an AI recruiter can be as biased as the humans that built it. The Times & The Sunday Times. Retrieved 11 July 2022, from https://www.thetimes.co.uk/static/ai-bias-job-hunting-ibm-recruitment-sexism-discrimination/.
  • Shaak, E. (2022). Video Screening Co. HireVue Illegally Collected Illinois Job Applicants’ Facial Scans, Class Action Alleges. Classaction.org. Retrieved 22 June 2022, from https://www.classaction.org/news/video-screening-co-hirevue-illegally-collected-illinois-job-applicants-facial-scans-class-action-alleges.
  • The future is fake: The rise of synthetic data in training AI models. Verdict. (2022). Retrieved 7 September 2022, from https://www.verdict.co.uk/synthetic-data-ai-training/.
  • Tilmes, N. (2022). Disability, fairness, and algorithmic bias in AI recruitment. Ethics and Information Technology, 24(2). https://doi.org/10.1007/s10676-022-09633-2
  • Wilner, K., & Saba, C. (2022). Class Action Targeting Video Interview Technology Reminds Employers of Testing Risks. Paulhastings.com. Retrieved 5 July 2022, from https://www.paulhastings.com/insights/client-alerts/class-action-targeting-video-interview-technology-reminds-employers-of.