Do You Really Want To Be Judged By A Machine?
In today’s world, where everything is shaped by technology, it is impossible to consider recruitment without mentioning machine learning or artificial intelligence (AI). That is not necessarily a negative thing, because we live in a technologically advanced and exciting time. But don’t you think everything comes with drawbacks?
There are a plethora of AI tools available that control everything from interviewing candidates to hiring them. Many companies use AI interview systems as gatekeepers to assess their applicants before candidates even have a chance to speak with a human. Of course, these tools come with many benefits, from saving time to choosing the top candidates, but there are some potential downsides as well.
I consider myself an AI fan because many technological tools not only help recruitment teams but also improve the candidate experience. There is, however, one thing that deserves our attention: AI face-screening interview platforms. These platforms bring many positives, but they also introduce problems into the hiring process that candidates and hiring managers are often not aware of.
The pros of AI interview platforms include:
- Saving time (especially in the initial screening process)
- Accelerating the hiring process
- Eliminating the need for candidates to travel
- Potentially improving the quality of hire and candidate experience
All these things, especially hiring process acceleration and improved quality of hire, are heavily dependent on the roles the AI tool aims to fill. And yes, automating the hiring process and using AI tools are helpful in high-volume workplaces, and we will definitely see more of them in the future.
As Frida Polli says, without automation, many applicants are never even seen, but AI can assess the entire pipeline of candidates. Additionally, it can eliminate unconscious human biases with the right amount of auditing (source: “I Got a Job at an Amazon Warehouse Without Talking to a Single Human”).
But there is a difference between using AI and machine learning and using AI face-screening tools.
Face analysis tools create new issues because they analyze not only the answers that candidates provide during interviews; their behavior, intonation, and speech are also fed to an algorithm that assigns them certain traits and qualities. Many AI experts warn that algorithms trained on data from previous job applicants may perpetuate existing biases in hiring (source: “Job Screening Service Halts Facial Analysis of Applicants”).
That is also why, in 2018, Amazon reportedly abandoned its own technology for automating the assessment of candidate résumés after it produced biased results (source: “Job Screening Service Halts Facial Analysis of Applicants”).
Why We Shouldn’t Use AI Face-Screening Tools in Recruitment
Let’s look at some of the top reasons why we shouldn’t use AI face-screening tools in recruitment.
1. Not 100% Reliable
Most AI-based tools are not 100% reliable. They are black-box solutions: in most cases, you will not get a chance to analyze or even understand how exactly the algorithms work. These tools are still under development, so they carry many errors, inconsistencies, and bugs.
Moreover, these tools do not produce enough data, and we need a plethora of data to identify reliable patterns. For example, an applicant screening system may reject a candidate simply because they don’t meet the precise requirements set by the algorithm. As Elisa Harlan and Oliver Schnuk’s study “Objective or Biased” confirms, candidates can easily lose out on jobs because they do or do not wear glasses.
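To see how brittle an over-precise screening rule can be, here is a minimal, entirely hypothetical sketch (not any real vendor’s algorithm) of a keyword-based screener that rejects a qualified candidate over a phrasing mismatch:

```python
# Hypothetical illustration: a screener that demands exact phrases.
# The requirement list and resume text are invented for this sketch.

REQUIRED_KEYWORDS = {"python", "5 years experience", "bachelor's degree"}

def screen(resume_text: str) -> bool:
    """Pass only if every required phrase appears verbatim in the resume."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

# A qualified candidate who writes "five years of experience" is rejected,
# because the literal string "5 years experience" never appears.
result = screen(
    "Python developer, five years of experience, bachelor's degree"
)
print(result)  # False: a phrasing mismatch, not a lack of skill
```

A human reader would treat “five years of experience” and “5 years experience” as equivalent; a rigid rule does not, which is exactly how qualified candidates fall through automated gatekeepers.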
Finally, with the rise of social recruitment, these AI tools also judge the candidates based on their social media profiles, activities, and digital footprints.
2. Bias That Emerges From Algorithms
AI face-screening tools promise to bring less prejudice and more objectivity to the hiring process. But how far are they from that promise?
Software programs promise to identify candidates’ personality traits from short videos. With the help of AI, they are supposed to make the selection process more objective and faster. However, BR (Bavarian Broadcasting) journalists’ exclusive data analysis shows that AI can be swayed by appearances. This issue could perpetuate stereotypes while potentially costing candidates a job (source: “Objective or Biased”).
The journalists tested highly rated AI software to determine whether things like glasses or a hair scarf affect a candidate’s chances of getting hired. The differences in the results were huge: the AI rated the same candidate as more open, more conscientious, and less neurotic without the glasses and hair scarf, with a difference of nearly 20 points (source: “Objective or Biased”).
AI is expected to decrease the bias in the hiring process, but it actually creates many biases related to age, gender, and nationality. AI recruiting tools analyze certain set patterns and then make decisions based on those patterns. Even though one AI company has implied that an external audit showed its algorithms had no biases, the audit itself tells a different story (source: “Independent Auditors Are Struggling to Hold AI Companies Accountable”).
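The pattern-learning problem can be illustrated with a toy model. The sketch below uses entirely invented data: it simply counts hiring frequencies from biased historical decisions, and because past recruiters hired glasses-wearers less often, the “learned” scores penalize glasses even though glasses say nothing about job performance:

```python
# Hypothetical illustration of bias inherited from training data.
# The history records are invented; no real hiring data is used.
from collections import defaultdict

# Historical outcomes in which humans hired glasses-wearers less often.
history = [
    ({"wears_glasses": True}, "rejected"),
    ({"wears_glasses": True}, "rejected"),
    ({"wears_glasses": True}, "hired"),
    ({"wears_glasses": False}, "hired"),
    ({"wears_glasses": False}, "hired"),
    ({"wears_glasses": False}, "rejected"),
]

def train(records):
    """Learn P(hired | wears_glasses) by simple frequency counting."""
    counts = defaultdict(lambda: [0, 0])  # feature value -> [hired, total]
    for features, outcome in records:
        value = features["wears_glasses"]
        counts[value][1] += 1
        if outcome == "hired":
            counts[value][0] += 1
    return {v: hired / total for v, (hired, total) in counts.items()}

model = train(history)
# Glasses-wearers now score lower: the historical bias is preserved.
print(model[True], model[False])
```

Real systems use far more complex models, but the mechanism is the same: whatever correlations exist in past decisions, fair or not, become the rules the algorithm applies to future candidates.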
While people with disabilities or unusual accents are likely to experience the worst outcomes, anyone with an atypical speaking style or quirky mannerisms should be concerned (source: “Independent Auditors Are Struggling to Hold AI Companies Accountable”).
AI faces certain challenges when it comes to practical application in the recruitment process. Even AI experts are concerned about these tools.
“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, co-founder of the AI Now Institute, a research center in New York. “It’s pseudoscience. It’s a license to discriminate,” she added. “And the people whose lives and opportunities are literally being shaped by these systems don’t have any chance to weigh in” (source: “A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job”).
What the Future Looks Like for These Tools
If you are planning to invest money in these face-scanning tools, you should reconsider. Many countries and organizations are fighting for more AI regulations.
Recent news reveals that Europe is seeking to limit the use of AI in society. The use of facial recognition for surveillance and algorithms that manipulate human behavior will be banned under proposed EU regulations on artificial intelligence. That includes algorithms used by the police and in recruitment (source: BBC).
The latest EU legislation leaked to the public aims to heavily regulate these tools in recruitment. The document regulates the definition of so-called high-risk AI systems. For example, companies will not be able to use AI to recruit or monitor employee performance in the same way they can now (source: “A European Approach to Artificial Intelligence”).
In the US, these tools are already facing several investigations. For example, the Washington Post reported that “The Electronic Privacy Information Center urged the FTC to investigate HireVue’s business practices, saying its face-scanning technology threatens job candidates’ privacy rights and livelihoods.” We should expect similar actions across the world in the future.
Many states and countries have already started implementing similar legislation. Illinois is one example:
“Set to take effect January 1, 2020, the state’s Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to consider applicants’ “fitness” for a position. Those companies must also explain how their AI works and what “general types of characteristics” it considers when evaluating candidates. In addition to requiring applicants’ consent to use AI, the law also includes two provisions meant to protect their privacy: It limits who can view an applicant’s recorded video interview to those “whose expertise or technology is necessary” and requires that companies delete any video that an applicant submits within a month of their request” (source: “Illinois Says You Should Know if AI is Grading Your Online Job Interviews”).
If the “Objective or Biased” study is accurate (and we have no reason to believe it is not), we can conclude that candidates interviewed through those apps in the last few years were not fairly evaluated. We could even say that the companies that used these tools discriminated against them during the interview process.
Their chance of getting the job was influenced by bad lighting, glasses, scarves, skin color, and more. Even people who did not speak English as their native language or who were disabled were likely to get lower scores (source: “For Some Employment Algorithms, Disability Discrimination by Default”).
AI tools promise to remove the bias from the hiring process, but “Objective or Biased” showed us that they make the hiring process more complicated rather than more transparent. And because these algorithms are not public, we don’t know what they’re looking for.
It is obvious that AI and automation affect many industries and recruitment processes and will continue to do so in the future. However, that does not mean that we should remove people or traditional processes from the equation completely.
We cannot deny the fact that people will always be important in any industry because they have qualities that make them essential. Thus, human recruiters are not going anywhere, even with the introduction of these AI tools.
We accept the fact that technology can be exciting, and in most cases, it is. All we need to do is ensure that we objectively and wisely weigh the pros, cons, and impacts of every technology, especially if we are implementing tools that could affect people’s lives.