HRreview 20 Years
Fiona Morgan: Ensuring fairness and transparency in AI-based recruitment


Without proper safeguards, the use of AI can undermine fairness and entrench bias, exposing employers to legal risk.

One of the biggest risks associated with AI-based recruitment is bias from the algorithm. AI systems are only as objective as the data that they’re trained on. Where algorithms are designed to identify successful candidates based on historical hiring data, they may replicate existing inequalities.

The dangers of algorithmic bias

For example, if an organisation’s past workforce is dominated by white males, an AI system may learn to favour those characteristics, systematically disadvantaging candidates of other genders or from different backgrounds. This can result in unlawful discrimination, and an employer’s ignorance of the algorithmic bias behind a discriminatory outcome will not necessarily relieve it of liability.
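To make the mechanism concrete, here is a minimal sketch using entirely made-up numbers: a naive screening rule "learned" from skewed historical hiring decisions simply reproduces the disparity in those decisions. The groups, counts, and scoring rule are illustrative assumptions, not a description of any real system.

```python
# Hypothetical historical outcomes: (group, hired) pairs.
# Past hiring favoured group "A" (80% hired) over group "B" (20% hired).
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 20 + [("B", False)] * 80
)

def hire_rate(group):
    """Fraction of historical applicants from this group who were hired."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def learned_score(group):
    """A naive 'model' that scores candidates by their group's past hire rate.
    It entrenches the historical disparity instead of assessing merit."""
    return hire_rate(group)

print(learned_score("A"))  # 0.8 -> group A candidates ranked higher
print(learned_score("B"))  # 0.2 -> group B systematically disadvantaged
```

Real AI tools are far more complex, but the failure mode is the same: if the training labels encode biased past decisions, the model optimises for replicating them.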


Using AI to assess video interviews presents particular concerns. Some tools claim to analyse facial expressions, tone of voice or body language to predict a candidate’s suitability for the job. There are questions about the scientific reliability of this technology, but beyond that, these systems risk discriminating against candidates who do not conform to what the AI system considers “typical” behavioural norms, including neurodivergent individuals and people from different cultural backgrounds.

The importance of data protection

In the UK and across Europe, the use of AI in recruitment must also be considered through the lens of data protection law. Feeding CVs, application forms or video recordings into AI systems constitutes the processing of personal data and, in some cases, special category data (sensitive personal data such as racial or ethnic origin, political or religious beliefs, health, sexual orientation, etc).

Employers remain responsible for compliance with UK GDPR even when third-party AI providers are used. Transparency is central to compliance: candidates must be informed that AI is being used, what data is collected, how it will be processed and for what purpose.

UK GDPR limits the circumstances in which employers can use automated decision making. Individuals also have the right to challenge decisions based solely on automated processing. If AI is used as a pre-screening tool that automatically rejects candidates without meaningful human involvement, employers must ensure that applicants have the opportunity to seek human review of those decisions.

Employers are also required under UK GDPR to carry out a Data Protection Impact Assessment before introducing any AI-based recruitment system to identify and mitigate potential risks before they become a problem.

The need for safeguards

It’s also important to carry out due diligence on AI providers. Employers should ask suppliers how they test for and mitigate bias, what safeguards are in place to protect personal data and whether they can provide evidence of compliance with equality and data protection laws.

These obligations should be reflected in the contract between the business and the AI provider, with responsibilities placed on providers to cooperate with audits, information requests and regulatory investigations.

We are also seeing generative AI being used informally during the recruitment process. Recruiters may be tempted to search for candidates using tools such as ChatGPT to gain additional background information. This practice is extremely risky.

Generative AI can produce inaccurate or fabricated information, and relying on such material could lead to unlawful decisions, particularly if it reveals or invents information about protected characteristics, trade union activity or political views, for example.

Human oversight matters

The most effective safeguard against these risks is consistent human oversight. AI should support, not replace, human decision-making. Employers should regularly review recruitment outcomes to identify patterns that may indicate bias and conduct equality monitoring where possible. Any recommendations produced by AI systems should be double-checked by trained staff who understand both the technology and the legal framework within which it operates.
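One widely used heuristic for the outcome review described above is an adverse-impact check on selection rates, comparing each group’s rate against the highest group’s rate (the “four-fifths rule”, which originates in US employment guidance but is commonly used as a screening check elsewhere). The sketch below uses illustrative group names and numbers, not real recruitment data, and a ratio below 0.8 is a prompt for investigation, not proof of discrimination.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who passed the screening stage."""
    return selected / applicants

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Illustrative screening outcomes per demographic group (hypothetical).
rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(27, 100),  # 0.27
}

ratio = adverse_impact_ratio(rates)
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: investigate for possible bias.")
```

Running a check like this each hiring cycle, broken down by stage (CV screen, interview, offer), helps pinpoint where in the process a disparity arises.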

AI can have huge benefits in recruitment, but fairness and transparency cannot be automated. By combining clear communication with candidates and meaningful human involvement at every stage, employers can benefit from AI while meeting their legal obligations and promoting genuinely inclusive hiring practices.

As the use of AI in recruitment continues to grow, employers should prioritise reviewing their hiring processes. Taking early legal advice and ensuring appropriate safeguards are in place will help businesses benefit from AI while minimising legal and reputational risk.

If you are considering introducing AI into your recruitment process, or already rely on automated tools, specialist employment law guidance can help ensure your approach remains fair, transparent and compliant.

Fiona Morgan is a senior employment lawyer and Head of Employment at Arbor Law, with 17+ years’ experience advising corporate clients on the full range of contentious and non-contentious employment matters.

Fiona was previously a Partner and UK co-head of employment at Kennedys, and a consulting senior employment lawyer at Taylor Wessing. She is experienced in tribunal and civil court litigation, TUPE, redundancies and reorganisations, restrictive covenants, policies and contracts, settlement agreements, and supporting transactions. Fiona also delivers practical employment law training for HR teams and managers.
