One in six UK employers now say they expect AI to reduce the size of their workforce over the next year. Large private sector organisations, and junior managerial, clerical, administrative and professional roles, appear to be particularly exposed.
At the same time, surveys suggest a growing share of workers are worried that AI will lead to job losses, with entry-level and junior posts often perceived as being first in the firing line.
Against that backdrop, it is not surprising that the story of Kathryn Sullivan has struck a nerve. Sullivan, a long-serving employee of Commonwealth Bank of Australia, spent years helping to train the bank’s customer service chatbot, only to see her own role made redundant once the technology was rolled out. It is a vivid example of a wider fear: that AI deployment is something done to employees, rather than with them.
For UK employers, the question is no longer whether AI will reshape work. It is how to manage AI-driven change in a way that is legally robust, practically workable and capable of sustaining a loyal, motivated workforce.
Existing law already bites on AI-driven job losses
AI does not sit outside employment law. When automation or AI systems reduce the need for certain types of work, the existing redundancy framework still applies.
In practice, employers need to be able to show:
- a genuine redundancy situation, such as a reduced need for employees to carry out particular tasks because AI systems now perform that work;
- fair and meaningful individual and, where the numbers demand it, collective consultation;
- objective, non-discriminatory selection criteria; and
- proper consideration of suitable alternative roles, including new, more tech-enabled posts that AI adoption may create.
Collective consultation is triggered if an employer proposes twenty or more redundancies at one establishment within a ninety-day period. In AI programmes that affect whole teams or layers of junior roles, employers may hit that threshold quickly.
If a decision is challenged, tribunals are likely to focus on the quality of the employer’s process rather than the business case for AI itself. Where employers rely on algorithmically generated performance metrics, productivity dashboards or attendance data in redundancy selection, they should expect to be asked:
- how those metrics were generated;
- whether staff understood how they were being assessed; and
- what opportunity they had to challenge or correct errors.
If those questions cannot be answered convincingly, AI-linked reorganisations can quickly turn into unfair dismissal claims.
Equality risks: who carries the burden of automation?
The impact of AI will not fall evenly on all shoulders. Recent research into the tasks that make up different jobs in the knowledge and service sectors shows that back-office, entry-level and part-time roles are more likely to be automated. Women, younger workers and low to medium earners are over-represented in those jobs, and so stand particularly exposed.
If an AI-driven restructuring lands mainly on those groups, employers may face claims of indirect discrimination unless they can show that their approach is a proportionate means of achieving a legitimate aim. The fact that AI has made certain tasks more efficient will not, by itself, justify a disproportionate impact on protected groups.
There is also the risk that the tools themselves introduce or entrench bias. Algorithms trained on skewed or incomplete data can lead to less favourable treatment of people with protected characteristics in recruitment, performance management or redundancy selection.
Data protection and monitoring: AI as a workplace watcher
AI is increasingly woven into systems that monitor performance, productivity and behaviour, from keystroke and screen-time tracking to call analytics, biometric access control and AI-based recruitment tools.
The ICO has been clear that such tools must be necessary, proportionate and transparent. There must be a lawful basis for the monitoring, and a robust data protection impact assessment where the processing poses a high risk to workers’ rights and freedoms.
Where AI-generated data is later used to support dismissals or redundancy selection, employers will need to show not only that the underlying monitoring complied with data protection law, but also that workers understood how their data would be used and had routes to challenge inaccurate or unfair assessments.
What are the regulators saying?
Regulators are beginning to join the dots between AI, work and existing protections.
Acas has encouraged employers to consult workers before introducing AI, to develop clear policies on its use, and to reassure staff that human judgement will continue to matter in most decisions. The ICO has issued guidance on monitoring, employment practices and AI-based recruitment, stressing transparency, explainability and the need for meaningful human review of important decisions.
The Equality and Human Rights Commission is focusing on AI and digital services as a strategic priority, and has produced guidance on assessing the equality impact of algorithmic systems.
Alongside this, there is early discussion of AI-specific regulation that could touch workplace decision-making, but any comprehensive AI-focused employment framework is likely to be some way off. In the meantime, employers must work out how existing law applies to AI-driven decisions, rather than assume this area is unregulated.
Opportunity with guardrails – or familiar debates in new clothes?
It would be misleading to portray AI purely as a job-destroying force. Some employers are already investing heavily in retraining and redeployment. PwC, for example, has framed its approach around ‘retain, retrain and transform’, using AI to identify skills gaps and support workforce transition rather than simply to reduce headcount. There is an argument that, properly controlled, AI can take over more of the boring, repetitive tasks, make flexible working easier and help create more interesting, better-paid jobs.
However, a number of commentators have warned that the same technology could widen inequality and concentrate the gains if left solely to market forces. Recent analysis from the United States, highlighted by broadcaster Katty Kay, points to a marked fall in employment among young workers in AI-vulnerable fields since 2022 and a sharp drop in postings for entry-level roles. A sizeable minority of professionals also report that the pace of AI-related change is affecting their wellbeing.
For some workers, this will feel like a reheating of the debates about globalisation in the 1990s: plenty of talk of overall economic benefit, but too little attention to which workers would lose out and what would happen to them next. In that light, ‘opportunity with guardrails’ may land as globalisation rebranded for the AI age, with the safety rail bolted on only after the ride has begun.
Reskilling initiatives, meanwhile, tend to be the exception rather than the rule. If ‘retain and retrain’ is to be more than a slogan, it will need to move from corporate case study to mainstream practice, supported by government skills policy and closer collaboration between employers, education providers and other stakeholders.
Navigating displacement well: lessons for UK employers
What, then, can employers learn from the emerging evidence and from stories such as Kathryn Sullivan’s experience at Commonwealth Bank of Australia?
First, AI deployment decisions should be tied firmly into workforce planning from the outset. If an AI programme is likely to reduce the need for certain tasks, HR should be involved early in mapping which roles will be affected, what alternative work might exist and how long a realistic transition period might be. Surprises erode trust.
Second, process matters. Even where job losses cannot be avoided, employers that invest in early, honest communication, meaningful consultation and genuine efforts at redeployment usually fare better on employee relations, litigation risk and reputation. The law requires a fair process; experience suggests it is also sound management.
Third, governance should not be left to technologists alone. Cross-functional oversight, involving HR, legal, data protection and, where appropriate, ethics committees, is increasingly essential to stress-test AI use cases for equality, privacy and employment law risks before they are rolled out.
Finally, there is the human question. Employees are being asked to engage with, and in many cases help to train, the very systems that may alter or even remove their roles. If they see colleagues treated as expendable when those systems come online, they may be less likely to embrace further change. If, instead, they see employers investing in their skills, involving them in design and offering fair treatment where roles do disappear, they are more likely to participate constructively.
That is not simply a matter of morale. A sustained loss of trust makes it harder to retain key people, harder to attract new talent into AI-exposed roles and harder to build the skills base needed for future change. An employer that mishandles AI-driven job cuts may find that, when it next needs to recruit or reskill at scale, the very workforce it hoped to rely on has quietly gone elsewhere.
Kathryn Sullivan’s story arises in a different legal system, but the underlying warning travels well. If AI-driven workforce change is handled as a narrow cost-cutting exercise, the reputational and employee relations damage can be considerable, and may even chill the adoption of the very tools organisations are seeking to embrace. The task for UK employers is to harness AI’s potential while ensuring that, when roles are displaced, people are not left feeling they have helped to train their own replacements only to be quietly shown the door.
Daniel Stander is an employment lawyer at Vedder Price and a member of the firm’s Labor and Employment practice group in the London office.
Daniel prides himself on being a “next generation” lawyer. He takes the time to get to know his clients’ businesses and the employment law and HR issues they face, and uses that insight to give straightforward, jargon-free advice and practical, responsive solutions for business owners and HR teams.
