Imagine a colleague is helping you source the perfect candidate for a new role, says Khyati Sundaram.

They come to you with a shortlist of candidates they like the look of. But when you ask them how they whittled down their list, they can’t tell you.

You can’t confirm which criteria your colleague scored candidates against, why they chose these particular individuals, or on what basis candidates who didn’t make the shortlist were dismissed.

This lack of information would probably make you feel a little uncomfortable or suspicious, right?

Well, this is in essence what’s happening when AI is helping companies source and shortlist candidates for roles.

What is “black box bias”?

Most off-the-shelf AI models are “black boxes”: we cannot see how or why a model is making its decisions.

We may have programmed the model to look for candidates who meet a certain set of criteria for our role, but depending on the data the model was originally trained on, it might also be making decisions based on learned rules we can’t oversee. I call this “black box bias”.

“Black box bias” is bad news for your talent pipeline. Most garden-variety and open-source AI models have been trained on swathes of internet data going back decades. Research has categorically shown that these models perpetuate social bias, meaning their perception of “top talent” has centuries of racial and gender oppression baked in. In other words, it’s not a fair fight.

“Black box bias” in action

In a recent experiment, Bloomberg asked a text-to-image AI tool to generate pictures of people doing different kinds of jobs. The analysis found that images generated for “high-paying” jobs were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.”

Most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier. And men with lighter skin tones represented the majority of subjects in every high-paying job, including “politician,” “lawyer,” “judge” and “CEO.” The data speaks for itself.

AI models mirror and amplify the biases we see all around us, with potentially sinister consequences when it comes to helping you source candidates for your next role. When black box AI feeds you a shortlist, you can’t know which talented individuals didn’t make the cut due to arbitrary factors like name, age, address, degree or even skin tone.

Likewise, you can’t be sure the people on your list are the best: they might simply share the surface characteristics we’ve historically associated with certain types of jobs.

How do we fix it?

The only way to correct for “black box bias” is to be more discerning about which AI models we choose to use in recruitment.

Off-the-shelf AI models won’t cut it. We need new, ethical AI models that have been cleaned of historical biases and stripped of any data that could trigger bias, such as our names, gender or where we went to university.
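
To make that concrete, here is a minimal sketch of what “cleaning” candidate data can look like in practice: identifying details are removed before anything is assessed. The field names and the placeholder review step are illustrative assumptions, not a description of any specific vendor’s product.

```python
# Illustrative sketch only: the field names and the placeholder review step
# are hypothetical, not a description of any specific vendor's product.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    gender: str
    university: str
    answers: list[str]  # responses to structured, skills-based questions


def anonymise(candidate: Candidate) -> dict:
    """Strip details that can trigger bias before any assessment happens."""
    return {"answers": candidate.answers}


def review(anonymised: dict) -> None:
    # Placeholder: in practice each answer would be scored against
    # pre-agreed criteria, with humans able to check every judgement.
    for answer in anonymised["answers"]:
        print(answer)
```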

We also need recruitment AI models to be explainable so a human can stay in the loop. This means doing away with black box AI so that HR leaders can see how and why an AI model has reached its decisions, and can step in to correct biased selections when necessary.
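
One way to picture “explainable” is a score assembled from named, weighted criteria, so a reviewer can see exactly which factors produced a ranking and challenge any of them. The criteria and weights below are purely illustrative assumptions, not any particular product’s methodology.

```python
# Illustrative sketch: the criteria and weights are made-up assumptions.
CRITERIA_WEIGHTS = {
    "problem_solving": 0.4,
    "communication": 0.3,
    "domain_knowledge": 0.3,
}


def explainable_score(criterion_scores: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return an overall score plus each criterion's contribution,
    so a reviewer can see exactly how a ranking was produced."""
    contributions = {
        name: weight * criterion_scores.get(name, 0.0)
        for name, weight in CRITERIA_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions


total, breakdown = explainable_score(
    {"problem_solving": 0.9, "communication": 0.7, "domain_knowledge": 0.6}
)
print(round(total, 2), breakdown)  # overall score plus each criterion's contribution
```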

Finally, we need to shift our focus away from proxies on CVs to demonstrable skills. Skills-based hiring is the best way to source top talent in an empirical and objective way. Our research at Applied shows that skills-based hiring drives a 4x increase in candidate ethnic diversity and a 93 percent retention rate.

In the absence of clear regulation about the use of AI for recruitment in the UK, it’s up to HR leaders to make the right choices. And we know that doing so pays off: diverse teams perform better, are more productive and make more money. Diversity is an important factor for talent attraction, too: 76 percent of employees and job seekers say diversity is important when considering job offers.

But when black box bias isn’t challenged, diversity – and your talent pipeline – is compromised. You know what to do.


Khyati Sundaram is the CEO of Applied: a behavioural science-backed tool which helps companies hire fairly and without bias. Before joining Applied, Khyati co-founded her own company and worked in investment banking at JP Morgan and RBS. She holds an MSc in Economics from the London School of Economics and an MBA from London Business School.