Richard Justenhoven: The four main challenges to overcome when using AI in assessment


The goal of any recruitment process is to identify the right person for the job. The closer you match the individual to the requirements of the role, the more effective that person will be. You certainly don’t need Artificial Intelligence (AI) to achieve this, but AI will help you do it quicker and more efficiently.

The reality is that AI excels at two things: analysing massive amounts of data and carrying out ‘narrow’ tasks – the kind of work you might outsource to a shared service centre. AI’s narrow tasks are like household chores: you wouldn’t wash dishes in a washing machine or put clothes in a dishwasher. Each appliance performs one specific task and relies on a human to choose the right one. In the same way, an AI’s algorithms are task-specific and aren’t made to be swapped.

AI helps make processes easier by providing useful information, at various stages, that helps inform a final recruitment decision. It also has three other key benefits: reducing bias, improving legal defensibility and boosting candidate engagement.


However, with AI in assessment growing at a rapid pace, there are four less obvious issues, beyond the technical challenges, that must be addressed:

Defensibility:

The process of selecting candidates – whether for entry into the organisation or promotion within it – must always be legally defensible. It must not discriminate against or favour candidates on the basis of gender, race or any other characteristic outlined in equal opportunity legislation. There are also the rights of the individual, including the right to be informed how assessment information will be used. Under the General Data Protection Regulation (GDPR), you must make sure candidates know how and why assessment is being used, including any profiling involved in making decisions. Regardless of AI, this is good practice anyway.

Bear in mind too, that standardised ‘plug-and-play’ AI systems are available – but they won’t differentiate your employer brand. If your competitors use the same systems, you’ll all be chasing the same talent. Also, these systems utilise ‘deep learning networks’ which learn as they go. This sounds promising but actually it makes it very difficult to explain exactly why candidates were accepted or rejected. These systems therefore lead you to make selection decisions that you can’t defend, which leaves you vulnerable to litigation from disgruntled candidates. Only custom AI systems offer the ability to make transparent and defensible selection decisions.
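The transparency argument can be pictured with a toy example. This is an illustrative sketch only, not anything from the article or a real product, and the criteria names and weights are invented: a fixed, visible scoring rule lets you tell a rejected candidate exactly which criterion lowered their score, whereas a learn-as-it-goes network cannot offer that account.

```python
# Illustrative sketch: a transparent, rule-based candidate score whose every
# component can be reported back. Criteria and weights are invented for the
# example; the point is explainability, not the specific numbers.

WEIGHTS = {"numerical_test": 0.4, "verbal_test": 0.3, "structured_interview": 0.3}

def score_candidate(results):
    """Return an overall score plus a per-criterion breakdown.

    Because the weights are fixed and visible, the decision can be explained
    and defended criterion by criterion, unlike an opaque deep network.
    """
    breakdown = {k: WEIGHTS[k] * results[k] for k in WEIGHTS}
    return sum(breakdown.values()), breakdown

total, breakdown = score_candidate(
    {"numerical_test": 80, "verbal_test": 60, "structured_interview": 70}
)
# total is 71.0, and breakdown shows each criterion's exact contribution
```

The design choice here is the whole argument: every number that influences the outcome is declared up front, so the audit trail exists before the first candidate is scored.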

Time:

Custom AI systems mirror human behaviour and replicate the best practice of your assessors and raters. To achieve this, you have to pre-feed the system with relevant information. It can take up to six months to ‘train’ an AI system to assess candidates in exactly the same way that your assessors and raters would judge them. Managing this lead time will be a major challenge for organisations.
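The ‘pre-feeding’ step can also be sketched in miniature. Everything below is an invented assumption (the features, the historical ratings and the linear model all stand in for whatever a real custom system would use): the model is fitted to past assessor ratings until it reproduces their judgements, which is the essence of the training the author describes.

```python
# Toy illustration of 'training' an assessment model on historical assessor
# ratings via stochastic gradient descent. Data and model are invented; a
# real custom system would be far richer and take months to build.

HISTORY = [
    # (numerical_test, verbal_test) -> assessor's overall rating
    ((80.0, 60.0), 72.0),
    ((50.0, 90.0), 66.0),
    ((70.0, 70.0), 70.0),
    ((90.0, 40.0), 70.0),
]

def train(history, lr=1e-5, epochs=10000):
    """Fit linear weights so the model mimics past assessor judgements."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), target in history:
            err = w[0] * x1 + w[1] * x2 - target
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
    return w

weights = train(HISTORY)

def predict(x1, x2):
    """Score a new candidate the way past assessors would have."""
    return weights[0] * x1 + weights[1] * x2
```

The toy data is consistent with weights of roughly 0.6 and 0.4, so the fitted model recovers the assessors’ implicit weighting; collecting enough such history is exactly where the months of lead time go.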

Chief human resources officers (CHROs) should therefore be forming project teams now to look at custom AI models for video interviewing and other recruitment processes. Otherwise you’ll always be six months behind those pioneering companies that have already invested in this technology.

Ethics:

There is an ethical question around how much support you take from an AI system. For example, are you happy for an AI system to reject your candidates? Or would you prefer it to ‘flag up’ unsuitable candidates so you can review and check their details? How to use AI ethically will be a key consideration for many employers.

AI’s role should be restricted to providing additional information and enhancing efficiency. Recruiters should always set the objectives when hiring. AI can then deliver useful information, at various stages of the selection process, that will support a final decision.
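That restricted role can be made concrete with a minimal sketch (the threshold, names and scores are all invented): the system flags low-scoring candidates for human review rather than rejecting anyone itself.

```python
# Minimal sketch of an AI screen that never rejects: candidates below an
# assumed threshold are flagged for human review, keeping the final decision
# with the recruiter. All names, scores and the cut-off are invented.

REVIEW_THRESHOLD = 70.0  # assumed cut-off, purely for illustration

def triage(scored_candidates):
    """Split candidates into 'advance' and 'needs human review'.

    The system alone rejects nobody; borderline cases go to a person.
    """
    advance, review = [], []
    for name, ai_score in scored_candidates:
        (advance if ai_score >= REVIEW_THRESHOLD else review).append(name)
    return advance, review

advance, review = triage(
    [("Candidate A", 85.0), ("Candidate B", 64.0), ("Candidate C", 72.0)]
)
# Candidate B is flagged for review, not rejected
```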

Data handling:

AI excels at analysing massive amounts of data. However, when so much data is involved, the results can be misinterpreted or even deliberately abused. Good data-handling practices will be essential, not just for confidentiality but also for maintaining your organisation’s reputation. AI should be used carefully and responsibly to help you predict which candidates will be effective in the role – and engaged by your organisation.

With this in mind, there are four guidelines to help you get AI in assessment right.

1. Recruiters should set the initial goal and make the final hiring decision; AI is simply there to support and assist the process.

2. Interviews should remain a human activity. Consider the impression a candidate gets from being interviewed by an avatar.

3. Standard AI systems play only a limited role; custom AI systems are needed to differentiate your employer brand and to offer transparent decisions.

4. Remain aware of the ethical considerations, not least how much support you take from an AI system.

In summary, using AI wisely makes it possible to closely predict which people will perform best in the specific roles available. They are also likely to be the most engaged, which supports productivity and retention. There is no doubt that AI will be used more and more in assessment, but getting it right is essential for all organisations.

Richard Justenhoven is the product development director within Aon's Assessment Solutions. A leading organisational psychologist, Richard is an acknowledged expert in the design, implementation and evaluation of online assessments and a sought-after speaker on these topics.
