UK financial services employees call for AI transparency and safeguards


The study, from communications data and intelligence provider Smarsh, found that over a third (37%) of financial services employees in the UK say they frequently use public AI tools such as ChatGPT or Microsoft 365 Copilot in their daily work. However, a majority (55%) report that they have never received formal training on how to use these technologies.

With the widespread use of AI, transparency and compliance are now key concerns. Nearly 70 percent of respondents said they would feel more confident using AI tools if their outputs were monitored and captured for compliance. Yet 38 percent are unsure whether their organisation currently has systems in place to do this, and 21 percent say their employer definitively does not.

Compliance concerns over AI use and agent deployment

The report reveals that AI is not only being used to support internal productivity but is also being deployed in public-facing applications. More than two in five (43%) surveyed employees said their firm uses AI Agents – defined as autonomous systems capable of completing tasks without human oversight – for customer communications, including personalised financial advice. A further 22 percent reported the use of such agents in investment activities such as portfolio management or trade recommendations.

However, concerns about regulatory compliance persist. Nearly a third (31%) of employees expressed doubts about their organisation’s ability to meet or apply the correct regulatory standards to AI Agents. In addition, 29 percent said they were unsure where potentially sensitive information was going when these tools were used.

Tom Padgett, President of Enterprise Business at Smarsh, said, “AI adoption in financial services has accelerated rapidly, with employees embracing these tools to boost productivity. But with innovation comes responsibility. Firms must establish the right guardrails to prevent data leaks and misconduct. The good news is that employees are on board – welcoming a safe, compliant AI environment that builds trust and unlocks long-term growth.”

AI growth outpacing oversight structures

The findings come as the Financial Conduct Authority (FCA) prepares to launch its AI live testing service, a programme intended to support the implementation of customer-facing AI tools within the sector. The regulatory development highlights the increasing focus on ensuring AI adoption aligns with consumer protection and compliance requirements.

Paul Taylor, Vice President of Product at Smarsh, raised concerns about uncontrolled use of public AI tools in regulated environments.

“Using public AI tools without controls is digital negligence,” he said. “You’re effectively feeding your crown jewels into a black box you don’t own, where the data can’t be deleted, and the logic can’t be explained. It’s reckless. Private tools like Microsoft 365 Copilot and ChatGPT Enterprise are a step in the right direction. Still, if companies aren’t actively capturing and auditing usage, they’re not securing innovation – they’re sleepwalking into a compliance nightmare.”

Alessandra Pacelli is a journalist and author contributing to HRreview, where she covers topics including labour market trends, employment costs, and workplace issues.
