The role of artificial intelligence in recruitment: opportunities and limits
Sorting and ranking applications is a time-consuming task for recruiters. AI is increasingly used in the preliminary sorting of applications, making the work of human recruiters easier.
Despite the apparent objectivity of these tools, major challenges and risks remain, particularly with regard to discriminatory bias.
1. AI for recruitment: considerable time savings
An accelerated process
In a context where companies sometimes receive hundreds of applications for a single position, AI offers real relief. It allows CVs to be filtered according to skills, qualifications and experience, while automatically discarding candidates who don't meet the required criteria. This time saving is particularly valuable for recruiters, enabling them to concentrate on higher value-added tasks, such as interviewing or in-depth assessment of candidates.
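As a rough illustration of what this kind of automated pre-screening can look like, here is a minimal sketch in Python. The field names, criteria and candidates are invented for the example and do not reflect any vendor's actual system; real tools typically parse free-text CVs and use far richer signals.

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    skills: set[str] = field(default_factory=set)
    years_experience: int = 0

def screen(applications, required_skills, min_experience):
    """Keep only applications that list every required skill
    and meet the minimum years of experience."""
    shortlisted = []
    for app in applications:
        if required_skills <= app.skills and app.years_experience >= min_experience:
            shortlisted.append(app)
    return shortlisted

if __name__ == "__main__":
    pool = [
        Application("A. Dupont", {"python", "sql"}, 4),
        Application("B. Martin", {"python"}, 6),
        Application("C. Rossi", {"python", "sql", "docker"}, 2),
    ]
    for app in screen(pool, required_skills={"python", "sql"}, min_experience=3):
        print(app.name)  # only A. Dupont meets both criteria
```

Even this toy version shows why recruiters save time: the obvious mismatches never reach a human reader. It also hints at the risk discussed below: whatever criteria are encoded, or learned, are applied uniformly and invisibly to every applicant.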
2. Bias in algorithms: a real risk
The Amazon example
An emblematic example is Amazon, which in 2014 developed an AI program to automate its recruitment process. After roughly three years, the program was abandoned because of gender bias. Trained on the company's historical recruitment data, in which men were in the majority in the technology sector, the algorithm learned to downgrade female applicants. Terms such as "captain of the women's soccer team" were penalized by the system, while expressions such as "executed" or "captured", more common in male applications, were favored.
The reproduction of human bias
Biases in algorithms are not limited to gender. They can also concern other characteristics such as level of education, residence or even first name. For example, if a company has historically hired people with first names like Pierre or Julie, the algorithm could identify this pattern and systematically rule out candidates with more unusual first names, such as Aziz or Jamila. This reproduction of historical biases can undermine the diversity and fairness of the recruitment process.
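To make the mechanism concrete, here is a deliberately simplistic, hypothetical sketch (toy data, not a real recruitment model) of how a system that learns from past hires can end up favoring the first names it has already seen:

```python
from collections import Counter

# Hypothetical historical hiring data: first names of people hired in the past.
past_hires = ["Pierre", "Julie", "Pierre", "Marie", "Julie", "Pierre", "Luc"]

# A naive "model" that scores candidates by how often their first name
# appears among past hires -- exactly the kind of spurious pattern an
# algorithm can pick up when names (or proxies for them) leak into the features.
name_frequency = Counter(past_hires)

def name_bias_score(first_name: str) -> int:
    return name_frequency[first_name]

for candidate in ["Pierre", "Julie", "Aziz", "Jamila"]:
    print(candidate, name_bias_score(candidate))
# Pierre and Julie receive positive scores; Aziz and Jamila score zero,
# even though a first name says nothing about competence.
```

Real systems are more sophisticated than a frequency count, but the underlying failure mode is the same: patterns in yesterday's decisions become tomorrow's selection criteria.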
3. Correcting bias: a complex but essential challenge
Use of independent experts
Correcting bias in algorithms requires a detailed understanding of how they work. Employers need to be able to explain why a candidate has been rejected, which is a considerable technical challenge.
To guarantee the impartiality of their recruitment systems, some companies call in independent experts to audit their algorithms. This external verification enables biases to be identified and corrected before they can adversely affect the selection process. However, this approach is not yet widespread.
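One concrete check such an audit can perform, shown here as a minimal, hypothetical sketch with invented numbers and a single protected attribute, is to compare selection rates across groups at each automated step:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, was_shortlisted) pairs."""
    totals, selected = {}, {}
    for group, shortlisted in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if shortlisted else 0)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    A ratio well below 1 (the 0.8 'four-fifths' threshold is a common rule
    of thumb) signals that the screening step deserves closer scrutiny."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical audit data: (gender, shortlisted by the algorithm?)
    audit = [("M", True)] * 40 + [("M", False)] * 60 + \
            [("F", True)] * 20 + [("F", False)] * 80
    rates = selection_rates(audit)
    print(rates)                             # {'M': 0.4, 'F': 0.2}
    print(adverse_impact_ratio(rates, "M"))  # {'M': 1.0, 'F': 0.5} -> red flag
```

Disparity metrics like this do not prove discrimination on their own, but they give auditors an objective starting point for asking why one group is shortlisted half as often as another.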
4. The legislative framework: what protection is there for candidates?
Transparency and information
In Switzerland, as in most European countries, labor law and candidate protection rules apply from the application stage onward. When it comes to AI, however, legislation is still being developed.
At present, companies are under no legal obligation to inform applicants that AI is used to process their applications. This opacity is a problem, as neither the candidate nor, sometimes, even the recruiter is aware of possible discrimination. Transparency is a key issue in current legislative discussions, both in Switzerland and at European level, and forthcoming regulations are expected to govern the use of AI in recruitment, guaranteeing greater transparency and protection for candidates.
Conclusion: Towards ethical AI in recruitment?
Artificial intelligence offers undeniable advantages for speeding up and streamlining the recruitment process. However, the risks of reproducing discriminatory biases are real, and companies need to adopt a cautious and responsible approach to the use of these technologies. As legislation evolves, it is likely that new regulations will impose greater transparency and control over AI, to ensure a fairer and more inclusive recruitment process.
What do you think? Have you ever had to deal with recruitment using artificial intelligence? Share your experiences and thoughts in the comments!