Richard Justenhoven: The four main challenges to overcome when using AI in assessment


The goal of any recruitment process is to identify the right person for the job. The closer you match the individual to the requirements of the role, the more effective that person will be. You certainly don’t need Artificial Intelligence (AI) to achieve this, but AI can help you do it more quickly and efficiently.

The reality is that AI excels at two things: analysing massive amounts of data and conducting ‘narrow’ tasks – the things you might outsource to a shared service centre. We compare AI’s narrow tasks to household chores – you wouldn’t wash dishes in a washing machine or put clothes in a dishwasher. Each machine performs a specific task and relies on a human to direct it. A household’s time-saving machines – like an AI’s algorithms – are specialised tools, not interchangeable ones.

AI helps make processes easier by providing useful information, at various stages, that will help inform a final recruitment decision. It also offers three other key benefits: reducing bias, improving legal defensibility and increasing candidate engagement.


However, with AI in assessment growing at a rapid pace, besides technical challenges, there are four less obvious issues that must be addressed:

Defensibility:

The process of selecting candidates – be it for entry into the organisation or promotion within it – must always be legally defensible. It must not discriminate against, or favour, particular candidates based on gender, race, or any of the other characteristics outlined in equal opportunity legislation. Candidates also have individual rights, including the right to be informed how their assessment information will be used. Under the General Data Protection Regulation (GDPR), you need to make sure candidates know how and why assessment – including profiling – is being used to make decisions. Regardless of AI, this is good practice anyway.

Bear in mind too, that standardised ‘plug-and-play’ AI systems are available – but they won’t differentiate your employer brand. If your competitors use the same systems, you’ll all be chasing the same talent. Also, these systems utilise ‘deep learning networks’ which learn as they go. This sounds promising but actually it makes it very difficult to explain exactly why candidates were accepted or rejected. These systems therefore lead you to make selection decisions that you can’t defend, which leaves you vulnerable to litigation from disgruntled candidates. Only custom AI systems offer the ability to make transparent and defensible selection decisions.

Time:

Custom AI systems mirror human behaviour and replicate the best practice of your assessors and raters. To achieve this, you have to pre-feed the system with relevant information. It can take up to six months to ‘train’ an AI system to assess candidates in exactly the same way that your assessors and raters would judge them. Managing this lead time will be a major challenge for organisations.

Chief human resources officers (CHROs) should therefore be forming project teams now to look at custom AI models for video interviewing and other recruitment processes. Otherwise you’ll always be six months behind those pioneering companies that have already invested in this technology.

Ethics:

There is an ethical question around how much support you take from an AI system. For example, are you happy for an AI system to reject your candidates? Or would you prefer it to ‘flag up’ unsuitable candidates so you can review and check their details? How to use AI ethically will be a key consideration for many employers.

AI’s role should be restricted to providing additional information and enhancing efficiency. Recruiters should always set the objectives when hiring. AI can then deliver useful information, at various stages of the selection process, that will support a final decision.

Data handling:

AI excels at analysing massive amounts of data. However, when so much data is involved, the results can be misinterpreted or even deliberately abused. Good data handling practices will be essential not just for confidentiality but also for maintaining your organisation’s reputation. AI should be used carefully and honourably to help you predict which candidates will be effective in the role – and engaged by your organisation.

With this in mind, there are four guidelines to help you get AI in assessment right.

Recruiters should set the initial goal and make the final hiring decision. AI is simply there to support and assist the process.

Interviewing remains a fundamentally human activity. What impression does a candidate get from being interviewed by an avatar?

Standard AI systems play a limited role; there is a need to develop custom AI systems to differentiate your employer brand – and offer the transparency of decisions.

Finally, remain aware of ethical considerations, not least how much support you take from an AI system.

In summary, using AI wisely makes it possible to closely predict which people will perform best in the specific roles available. They are also likely to be the most engaged – which supports productivity and retention. There is no doubt that AI will be used more and more in assessment, but getting it right is essential for all organisations.

Richard Justenhoven is the product development director within Aon's Assessment Solutions. A leading organisational psychologist, Richard is an acknowledged expert in the design, implementation and evaluation of online assessments and a sought-after speaker on such topics.
