Fiona Morgan: Ensuring fairness and transparency in AI-based recruitment


Without proper safeguards, the use of AI can undermine fairness and entrench bias, exposing employers to legal risk.

One of the biggest risks associated with AI-based recruitment is bias from the algorithm. AI systems are only as objective as the data that they’re trained on. Where algorithms are designed to identify successful candidates based on historical hiring data, they may replicate existing inequalities.

The dangers of algorithmic bias

For example, if an organisation’s past workforce is dominated by white males, an AI system may learn to favour those characteristics, systematically disadvantaging candidates of another gender or from different backgrounds. This can result in unlawful discrimination, and an employer’s ignorance of the algorithmic bias giving rise to that discrimination will not necessarily relieve it of liability.


Using AI to assess video interviews presents particular concerns. Some tools claim to analyse facial expressions, tone of voice or body language to predict a candidate’s suitability for the job. Questions remain about the scientific reliability of this technology and, on top of that, these systems risk discriminating against candidates who do not conform to what the system treats as “typical” behavioural norms, including neurodivergent individuals and people from different cultural backgrounds.

The importance of data protection

In the UK and across Europe, the use of AI in recruitment must also be considered through the lens of data protection law. Feeding CVs, application forms or video recordings into AI systems constitutes the processing of personal data and, in some cases, special category data (sensitive personal data such as racial or ethnic origin, political or religious beliefs, health, sexual orientation, etc).

Employers remain responsible for compliance with the UK GDPR even when third-party AI providers are used. Transparency is paramount: candidates must be informed that AI is being used, what data is collected, how it will be processed and for what purpose.

UK GDPR limits the circumstances in which employers can use automated decision making. Individuals also have the right to challenge decisions based solely on automated processing. If AI is used as a pre-screening tool that automatically rejects candidates without meaningful human involvement, employers must ensure that applicants have the opportunity to seek human review of those decisions.

Employers are also required under UK GDPR to carry out a Data Protection Impact Assessment before introducing any AI-based recruitment system to identify and mitigate potential risks before they become a problem.

The need for safeguards

It’s also important to carry out due diligence on AI providers. Employers should ask suppliers how they test for and mitigate bias, what safeguards are in place to protect personal data and whether they can provide evidence of compliance with equality and data protection laws.

These obligations should be reflected in the contract between the business and the AI provider, with responsibilities placed on providers to cooperate with audits, information requests and regulatory investigations.

We are also seeing generative AI being used informally during the recruitment process. Recruiters may be tempted to use tools such as ChatGPT to research candidates and gather additional background information. This practice is extremely risky.

Generative AI can produce inaccurate or fabricated information, and relying on such material could lead to unlawful decisions, particularly if it reveals or invents information about protected characteristics, trade union activity or political views, for example.

Human oversight matters

The most effective safeguard against these risks is consistent human oversight. AI should support, not replace, human decision-making. Employers should regularly review recruitment outcomes to identify patterns that may indicate bias and conduct equality monitoring where possible. Any recommendations produced by AI systems should be double-checked by trained staff who understand both the technology and the legal framework within which it operates.

AI can have huge benefits in recruitment, but fairness and transparency cannot be automated. By combining clear communication with candidates and meaningful human involvement at every stage, employers can benefit from AI while meeting their legal obligations and promoting genuinely inclusive hiring practices.

As the use of AI in recruitment continues to grow, employers should prioritise reviewing their hiring processes. Taking early legal advice and ensuring appropriate safeguards are in place will help businesses benefit from AI while minimising legal and reputational risk.

If you are considering introducing AI into your recruitment process, or already rely on automated tools, specialist employment law guidance can help ensure your approach remains fair, transparent and compliant.


Fiona Morgan is a senior employment lawyer and Head of Employment at Arbor Law, with 17+ years’ experience advising corporate clients on the full range of contentious and non-contentious employment matters.

Fiona was previously a Partner and UK co-head of employment at Kennedys, and a consulting senior employment lawyer at Taylor Wessing. She is experienced in tribunal and civil court litigation, TUPE, redundancies and reorganisations, restrictive covenants, policies and contracts, settlement agreements, and supporting transactions. Fiona also delivers practical employment law training for HR teams and managers.
