With news that JPMorgan Chase is allowing employees to use an artificial intelligence system to assist them with writing staff performance reviews, employers should consider the implications of such moves in the workplace.
There is no legislation that specifically governs the use of AI in the workplace. However, there are several key legal risks that employers need to be aware of.
Firstly, AI tools often rely on complex algorithms whose reasoning can be difficult to explain. Over-reliance on AI to conduct performance reviews risks undermining employees’ trust in the process – particularly if there appears to be a lack of human oversight or transparency.
If managers are unable to fully justify the rationale behind any conclusions reached, this can ultimately damage the relationship of mutual trust and confidence. The potential loss of personal relationships between employees and managers may also limit the opportunity to resolve disagreements in such cases informally.
Performance reviews are commonly used as the basis for capability procedures (which can in some cases lead to dismissal). If AI plays a significant role in that process, or if there is a lack of transparency as to how a decision to dismiss an employee was reached, this could leave employers open to allegations of procedural unfairness.
Related to this, AI systems are ultimately reliant on the data provided to them. They therefore have the potential to unintentionally perpetuate any bias within that data and, in some cases, risk generating discriminatory outcomes for which employers can ultimately be liable under the Equality Act 2010.
Lack of flexibility and data protection
Some AI tools may also lack the flexibility to account for each employee’s individual circumstances. For example, an AI tool that relies on statistical performance data to generate a report risks overlooking the fact that a particular employee’s performance is affected by a disability.
This could leave the employer exposed to the risk of a claim for discrimination arising from disability, or for failure to make reasonable adjustments. The risk will be exacerbated, both legally and practically, if managers are not equipped with the necessary understanding to analyse the information, justify any conclusions reached and/or make appropriate adjustments.
Further, the use of AI has consequences for employers under data protection legislation. In particular, the use of AI in performance management is likely to involve the processing of employee personal data. Employers need to ensure that all personal data is processed fairly, lawfully and transparently in accordance with the UK GDPR principles.
Employees have the right to be informed of any automated decision-making and the right not to be subject to decisions based solely on automated data processing (subject to exceptions). There may also be security risks when using third party AI tools to process personal data.
The safe use of AI
Care must be taken to ensure that AI does not replace human decision-making. Managers and HR professionals should be trained to interpret AI-generated reports effectively, understand their limitations and exercise genuine independent judgment on a case-by-case basis.
Employers should avoid placing sole reliance on AI to make decisions about an employee’s performance (including promotions and dismissals) and ensure that any significant decisions are taken by human managers.
Employers should regularly assess the potential risks that may arise from the use of AI in decision-making processes, including the risk of unintended discrimination, and how these may be managed. For example, employers may need to consider how to allow for flexibility when reviewing the performance of an employee whose work is affected by a long-term health condition.
From a data protection perspective, employers should carry out a Data Protection Impact Assessment – particularly where AI is likely to play a significant role in decision-making. It is important to establish a lawful basis for the processing of personal data using AI and keep employees informed of how their personal data is being used. The processing of personal data should be minimised where possible, and appropriate measures should be taken to safeguard against the misuse or reprocessing of personal data (particularly when using third party AI tools).
Employee privacy notices and staff policies will need to be updated to cover the use of AI, and staff should ideally be kept informed about any future proposals in the interests of transparency.
Having joined Birketts in November 2024, Alex provides specialist employment law advice to managers, HR departments and in-house legal teams, as well as senior employees. He frequently provides expert commentary to the media and his insights have been published in both national and industry publications.
Since 2022, Alex has specialised in contentious and non-contentious employment law matters, including settlement agreements, TUPE, grievances, disciplinary and performance management procedures, and employment tribunal litigation with a particular focus on defending claims of discrimination. Alex has experience of working closely with public sector organisations, ranging from large public authorities to academy trusts and maintained schools.
