Dr Alex Linley: Why AI in HR needs the human touch

There’s currently a lot of discussion on how Artificial Intelligence (AI) might help address the talent shortage. However, technology alone is not enough – AI needs to be implemented with wisdom and human experience if it’s to be effective.

The Red Socks Conundrum – AI Pattern Recognition with a Human Resolution

We know that AI can scan and interrogate data sets to find patterns, correlations and coincidences that a human being would simply not have the capacity to find.

A hypothetical example we often use is the finding that the best salespeople wear red socks. We imagine that this finding has come about from an AI being fed all sorts of information about salespeople, from their sales performance to their clothing choices, eye colour and a host of other ‘facts’.

The AI churns through all this information and out pops the finding that all the best salespeople wear red socks. Now, in the absence of a human in the loop, the AI could recommend that we should just go and hire anyone wearing red socks and they would be a great salesperson.

But with a human in the loop, we would ask why a person who wears red socks should be a better salesperson. We might speculate that wearing red socks is a sign of confidence and a willingness to stand out from the crowd, and of course it is these characteristics, not the red socks per se, that are the better predictors of being a great salesperson.

As occupational psychologists, we could then test this theory to be sure that ‘red socks’ was just a proxy for the more important latent variables of confidence and standing out.

In this way, the combination of human experience and AI in HR means we can make much better decisions and get better outcomes: AI acts as an assistant that makes recommendations, while a human remains the final arbiter. Indeed, this philosophy is baked into the GDPR, which gives a data subject the right to request human intervention in any solely automated decision-making process.

The Limitations of Data

One of the areas where AI can come unstuck is if there is not enough data to be able to find reliable patterns. To be clear, AI can always find data patterns, but we need those patterns to be reliable if we are going to be able to do anything with them.

In political opinion polling, it’s generally accepted that a sample size of 1,004 people is representative of the population. We might assume the same holds for AI (although we should recognise that for the titans of tech like Facebook and Google, AI is more likely to run on billions of data points). However, in most cases where AI is applied in HR, sample sizes are nowhere near this big, so we face the challenge of sampling error: bias and error in our results. In a nutshell, the less confident we are that our sample represents the population, the greater the risk that our results will be flawed.
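The effect of sample size on sampling error is easy to demonstrate. The sketch below (a toy simulation with made-up performance scores, not real HR data) compares the average gap between a sample mean and the true population mean for a typical small HR cohort versus a poll-sized sample:

```python
import random

random.seed(42)

# Hypothetical "population": 100,000 salespeople with performance
# scores drawn uniformly from 0-100 (entirely fabricated numbers).
population = [random.uniform(0, 100) for _ in range(100_000)]
true_mean = sum(population) / len(population)

def avg_sampling_error(sample_size: int, trials: int = 1_000) -> float:
    """Average absolute gap between a sample mean and the true mean."""
    errors = []
    for _ in range(trials):
        sample = random.sample(population, sample_size)
        sample_mean = sum(sample) / sample_size
        errors.append(abs(sample_mean - true_mean))
    return sum(errors) / trials

# A typical small HR dataset versus a poll-sized sample.
print(f"avg error, n=30:   {avg_sampling_error(30):.2f}")
print(f"avg error, n=1004: {avg_sampling_error(1_004):.2f}")
```

The small sample’s average error comes out several times larger than the poll-sized sample’s, which is the point: conclusions drawn from a few dozen employees are far shakier than the same analysis run at polling scale.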

This can lead to another problem that statisticians call ‘restriction of range’. Think of it this way:

If we lined up 100 people in order of their height, we would have the shortest people on the left and the tallest on the right. Then imagine that we took just the five tallest people from this group and focused on the height differences between them. There may be some differences, but it’s far more likely that these five tallest people are of fairly similar height to each other, even though they are significantly taller than the rest of the people in the line-up.

This is obvious when we look at it in this way. Yet this is what happens in recruitment validations all the time.

When people attempt to use AI to find the differences between the top and bottom of the cohorts that they have already selected, it’s like trying to find the difference in height between the five tallest people, rather than the difference in height between the five tallest people and the other 95 people in our line-up. There might be differences, but the biggest differences are likely to be between the people we already selected and the rest of the group – but we don’t know that because we don’t have the data on them.
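The line-up analogy above can be sketched in a few lines of code. This is a toy simulation with invented heights (the 175 cm mean and 8 cm spread are assumptions, not data from the article), comparing the spread among the five tallest people with the spread across the whole group:

```python
import random

random.seed(7)

# 100 hypothetical heights in cm, roughly normally distributed.
heights = sorted(random.gauss(175, 8) for _ in range(100))

top_five = heights[-5:]       # the people we already "selected"

# Range (tallest minus shortest) within each group.
spread_top = max(top_five) - min(top_five)
spread_all = max(heights) - min(heights)

print(f"spread among the top 5: {spread_top:.1f} cm")
print(f"spread across all 100:  {spread_all:.1f} cm")
```

The spread among the top five is a small fraction of the spread across the full line-up. That is restriction of range in miniature: differences within an already-selected group tell us almost nothing about the differences that drove the selection in the first place.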

Another area where AI can fall down is when we put the ‘wrong’ data into the model in the first place. By ‘wrong’, this could be data that has systemic errors in it, data that isn’t representative of the population, or data that is already biased in some way.

One example of this that reached the popular consciousness was the Amazon hiring algorithm that ostensibly worked out that male candidates were more likely to be better software engineers, and so penalised and rejected female candidates. That conclusion is of course utterly untrue, but how can we expect an unchecked AI to know that, when it is only as strong as the data that feeds it? Here, the dataset comprised almost all male software engineers, leaving the AI to conclude that ‘male = software engineer, female = not software engineer’.
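A toy model makes the mechanism vivid. The sketch below is not Amazon’s system; it is a deliberately naive scorer trained on fabricated, historically skewed hiring data, showing how a model can learn a biased proxy simply because the proxy dominates the training set:

```python
from collections import Counter

# Toy training data: past hires, almost all male -- the kind of
# historically biased dataset described above (entirely fabricated).
past_hires = [{"gender": "male"}] * 95 + [{"gender": "female"}] * 5

# A naive "model": score a candidate by how often their attribute
# value appeared among past hires.
counts = Counter(h["gender"] for h in past_hires)

def score(candidate: dict) -> float:
    return counts[candidate["gender"]] / len(past_hires)

print(score({"gender": "male"}))    # 0.95
print(score({"gender": "female"}))  # 0.05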

The Invaluable Human Experience

AI has no conscience, no morals, and applies no ethical judgement to its decisions. It is simply a set of algorithms running through statistical calculations and correlations to find the pattern that makes the best sense of the data.

And this is why the human touch is invaluable. A human overseer can spot that something has gone awry, ask why and fix it, or can decide to take other appropriate action (which is what actually happened in the Amazon case: the algorithm was never deployed live).

When we blend the precision of AI and data analytics with the ethical judgement, morality, perspective and intuition that human experience brings, that is where the magic can really happen. That’s the future of AI in HR that I am most excited about.

Dr Alex Linley is the CEO and Co-Founder of Capp and its international brand Cappfinity. The company is a leading provider of assessment and development solutions, with more than 300 clients worldwide and offices in the U.S., U.K. and Australia. A world authority in the field of Positive Psychology, Dr Linley’s proprietary strengths methodology is the foundation of the company’s offering.
Prior to co-founding Capp, Dr Linley was a distinguished positive psychologist and noted academic. He has authored and edited eight books and published over 150 articles and book chapters. He was Visiting Professor in Psychology at the University of Leicester and Bucks New University, and holds a PhD in Psychology from the University of Warwick.
Dr Linley co-founded Capp with a clear purpose: to bring together human insight, data and technology to ‘strengthen the world’ by empowering companies to make informed talent decisions and enabling individuals to thrive in their careers through strengths-based assessment and development.
