Applying for jobs in the digital age: Why companies are now using AI to judge candidates

An American company is one of many that are changing the way businesses hire new workers


Nearly all of us have been guilty of making assumptions about people based on the way they look. The idea that someone's character somehow corresponds with their facial features and expressions is one that we find ourselves returning to again and again – when we're at work, when we're dating, even when we're voting for politicians.

But are those hunches of ours ever correct? And if there is a positive correlation, could artificial intelligence (AI) analyse videos and images of our faces to produce assessments of our emotions, our honesty, or even how hard-working we might be?

HireVue, an American firm, believes that AI analysis of footage of job candidates has an important role to play in determining their suitability. A video interview, or "pre-hire assessment", is processed by an algorithm which, according to the firm, "augments human decision-making in the hiring process and delivers higher quality talent, faster".

Or, if you prefer, a computer whittles down the list of candidates, and if it doesn’t think you’re up to it, you won’t appear on the shortlist. Firms such as Vodafone, Unilever and Hilton Hotels are among the hundreds that use the system. Advocates say that it’s efficient and delivers brilliant results; opponents say that it’s inherently biased, rooted in dubious science and unaccountable for the decisions it makes.

Judging people by their appearance isn't a new concept

Attempts to link personality with appearance through scientific theory are as old as the hills. In Ancient Greece, Aristotle originated the concept of “Physiognomica”, claiming that it’s possible “to infer character from features”.

Similar theories were used and taught across Europe and the Islamic world for centuries, and while they eventually came to be widely discredited, AI has brought about something of a resurgence. In 2016, researchers at Shanghai Jiao Tong University claimed to have invented a method of using machine learning to infer criminality from facial images – or, in other words, they believed that they’d established a relationship between looking like a criminal and being one. The study was widely criticised.

Alexander Todorov, professor of psychology at Princeton University, has said that people who claim a relationship exists between faces and character "have not given much thought to their underlying assumptions". We tend to generalise, and when we generalise, we often get it wrong.

The Shanghai study was controversial because it appeared to veer into genetics. HireVue’s interests, however, are behavioural. Its system analyses the smallest details of a prospective candidate’s interview tape and compares them with a database of 25,000 characteristics, from facial to linguistic. The speed at which they talk, their tone of voice, a furrowed brow or a nervous blink could all feed into their score. But the use of this metric to assess an employee’s worth has also caused disquiet, not least among people who have been rejected by the system.

Loren Larsen, the chief technology officer at HireVue, addressed this concern in an interview with the Washington Post by comparing it to a traditional job interview: "People are rejected all the time based on how they look," he said. "Algorithms eliminate most of that in a way that hasn't been possible before." He went on to refer to the mysterious nature of human decision-making as "the ultimate black box".

Could a machine be less biased against less attractive candidates?

Many would agree that human recruiters can be prejudiced and liable to make biased choices. For starters, there is a well-established and long-evolved bias against “ugliness”, which assumes that attractive, personable people are simply better at everything. The question is whether a machine could be less biased, given that the data it learns from comes from a flawed and prejudiced society. Would it not merely reflect the biases of the system it’s replacing?

Oxford University’s Ivan Manokha believes so. “[If] AI is fed data of the candidates who were successful in the past, [then] companies are likely to hire the same types of people that they have always hired,” he writes. He also expresses concern that algorithms may “contribute to the … amplification of existing beliefs and biases. The solutions it provides are necessarily conservative, leaving little room for innovation and social progress.”

Thus far, HireVue hasn’t allowed its system to be independently audited, and as such there’s no real understanding of the assessments that are being made and why. The mystery of what makes a model HireVue candidate makes preparation difficult, and declining to take the test may prevent you from being shortlisted at all. Some US lawmakers are now attempting to force companies to reveal the criteria by which AI may be filtering job applicants, partly to help people understand how they’re being evaluated, but also to rule out the possibility that prejudices are being hidden within another “black box”, one that cannot answer to criticism.

HireVue isn't the only one using AI in this way; Amazon, and plenty more, are too

HireVue is by no means the only firm using these kinds of systems, or indeed finding new applications for them. Amazon offers a service called Rekognition, which claims to assess facial emotion across eight categories: happy, sad, angry, surprised, disgusted, calm, confused and fearful. Across the “emotion detection” industry, new metrics are being devised to produce a wealth of data points.
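For readers curious what such an emotion query looks like in practice, the following is a minimal sketch using Amazon's publicly documented Rekognition DetectFaces call via the boto3 Python library. The image file name and AWS region are assumptions for illustration only; this is not HireVue's system, nor a description of how any employer uses the service.

    import boto3

    # Connect to the Rekognition service (region chosen for illustration)
    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # Read a single frame taken from a hypothetical interview video
    with open("candidate_frame.jpg", "rb") as image_file:
        image_bytes = image_file.read()

    # Request the full set of face attributes, which includes emotion estimates
    response = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],
    )

    # Each detected face carries a list of emotions with confidence scores
    for face in response["FaceDetails"]:
        ranked = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)
        for emotion in ranked:
            print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")

The output is simply a ranked list of labels with confidence percentages; any judgment about what those numbers mean for a candidate is made by whoever consumes them.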

One firm, Faception, claims to use machine learning and image data to place people in categories such as “High IQ”, “Academic Researcher”, “Terrorist” or “Paedophile”. Here, there are faint echoes of the work of 19th-century academic Cesare Lombroso, who, after conducting autopsies, stated what he believed to be common physical characteristics of criminals: “Unusually short or tall height … wrinkles on forehead and face … beaked or flat nose … strong jaw line … weak chin.”

Lombroso’s links were weak and contradictory, but some academics are now criticising AI for making similarly weak links – particularly between facial expression and emotion. Our expressions can mean different things in different cultures and different contexts, they say; we’re also adept at hiding our feelings, or indeed exhibiting ones we didn’t intend.

While human interviewers may sense these awkward moments and make allowances for them, there is a fear that a computer cannot. As Manokha says: “Technology may lead to the rejection of talented and innovative people who simply do not fit the profile of those who smile at the right moment, or have the required tone of voice.” Human talent can come in many unconventional forms. The challenge for machines is to appreciate and understand the mavericks among us.