Fiona Morgan: Ensuring fairness and transparency in AI-based recruitment


Without proper safeguards, the use of AI can undermine fairness and entrench bias, exposing employers to legal risk.

One of the biggest risks associated with AI-based recruitment is algorithmic bias. AI systems are only as objective as the data they are trained on. Where algorithms are designed to identify successful candidates based on historical hiring data, they may replicate existing inequalities.

The dangers of algorithmic bias

For example, if an organisation’s past workforce is dominated by white males, an AI system may learn to favour those characteristics, systematically disadvantaging candidates of another gender or from different backgrounds. This can result in unlawful discrimination, and an employer’s ignorance of the algorithmic bias giving rise to that discrimination will not necessarily relieve it of liability.
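The mechanism is easy to see in miniature. The sketch below (pure illustration: the group labels and hiring counts are hypothetical, and the "model" is deliberately naive) shows how a system that scores candidates by resemblance to past hires simply reproduces the historical imbalance.

```python
from collections import Counter

# Hypothetical historical hires, skewed towards one demographic group
# (invented numbers, for illustration only).
past_hires = ["group_a"] * 9 + ["group_b"] * 1

# A naive "model" that scores a candidate by how often their group
# appears among past hires -- it learns nothing but the imbalance.
group_freq = Counter(past_hires)

def score(candidate_group: str) -> float:
    """Return the fraction of past hires from this group."""
    return group_freq[candidate_group] / len(past_hires)

print(score("group_a"))  # 0.9 -- majority group inherits a high score
print(score("group_b"))  # 0.1 -- minority group is penalised by history
```

Real recruitment models are far more complex, but the underlying risk is the same: if historical outcomes encode bias, a model optimised to match them will encode it too.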


Using AI to assess video interviews presents particular concerns. Some tools claim to analyse facial expressions, tone of voice or body language to predict a candidate’s suitability for the job. Beyond questions about the scientific reliability of the technology, these systems risk discriminating against candidates who do not conform to what the AI considers “typical” behavioural norms, including neurodivergent individuals and people from different cultural backgrounds.

The importance of data protection

In the UK and across Europe, the use of AI in recruitment must also be considered through the lens of data protection law. Feeding CVs, application forms or video recordings into AI systems constitutes the processing of personal data and, in some cases, special category data (sensitive personal data such as racial or ethnic origin, political or religious beliefs, health and sexual orientation).

Employers remain responsible for compliance with UK GDPR even when third-party AI providers are used. Transparency is critical: candidates must be informed that AI is being used, what data is collected, how it will be processed and for what purpose.

UK GDPR limits the circumstances in which employers can use automated decision making. Individuals also have the right to challenge decisions based solely on automated processing. If AI is used as a pre-screening tool that automatically rejects candidates without meaningful human involvement, employers must ensure that applicants have the opportunity to seek human review of those decisions.

Employers are also required under UK GDPR to carry out a Data Protection Impact Assessment before introducing any AI-based recruitment system to identify and mitigate potential risks before they become a problem.

The need for safeguards

It’s also important to carry out due diligence on AI providers. Employers should ask suppliers how they test for and mitigate bias, what safeguards are in place to protect personal data and whether they can provide evidence of compliance with equality and data protection laws.

These obligations should be reflected in the contract between the business and the AI provider, with responsibilities placed on providers to cooperate with audits, information requests and regulatory investigations.

We are also seeing generative AI being used informally during the recruitment process. Recruiters may be tempted to use tools such as ChatGPT to search for additional background information on candidates. This practice is extremely risky.

Generative AI can produce inaccurate or fabricated information, and relying on such material could lead to unlawful decisions, particularly if it reveals or invents information about protected characteristics, trade union activity or political views, for example.

Human oversight matters

The most effective safeguard against these risks is consistent human oversight. AI should support, not replace, human decision-making. Employers should regularly review recruitment outcomes to identify patterns that may indicate bias and conduct equality monitoring where possible. Any recommendations produced by AI systems should be double-checked by trained staff who understand both the technology and the legal framework within which it operates.
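One concrete way to review recruitment outcomes for bias patterns is to compare selection rates across groups. The sketch below uses the "four-fifths" rule of thumb (a US regulatory guideline, borrowed here purely as an illustrative screening threshold; the group names and counts are hypothetical) to flag groups whose selection rate falls well below the highest group's rate.

```python
# Hypothetical applicant and hiring counts by group (invented for illustration).
applicants = {"group_a": 200, "group_b": 100}
hired      = {"group_a": 40,  "group_b": 10}

# Selection rate per group, and each rate as a ratio of the highest rate.
rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # Flag groups below 80% of the top selection rate for human review.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
# group_a: selection rate 20%, ratio 1.00 -> ok
# group_b: selection rate 10%, ratio 0.50 -> REVIEW
```

A flag like this is a prompt for investigation, not proof of discrimination: a trained reviewer still needs to examine why the disparity exists before drawing any legal conclusion.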

AI can have huge benefits in recruitment, but fairness and transparency cannot be automated. By combining clear communication with candidates and meaningful human involvement at every stage, employers can benefit from AI while meeting their legal obligations and promoting genuinely inclusive hiring practices.

As the use of AI in recruitment continues to grow, employers should prioritise reviewing their hiring processes. Taking early legal advice and ensuring appropriate safeguards are in place will help businesses benefit from AI while minimising legal and reputational risk.

If you are considering introducing AI into your recruitment process, or already rely on automated tools, specialist employment law guidance can help ensure your approach remains fair, transparent and compliant.

Fiona Morgan is a senior employment lawyer and Head of Employment at Arbor Law, with 17+ years’ experience advising corporate clients on the full range of contentious and non-contentious employment matters.

Fiona was previously a Partner and UK co-head of employment at Kennedys, and a consulting senior employment lawyer at Taylor Wessing. She is experienced in tribunal and civil court litigation, TUPE, redundancies and reorganisations, restrictive covenants, policies and contracts, settlement agreements, and supporting transactions. Fiona also delivers practical employment law training for HR teams and managers.
