The growing use of artificial intelligence (AI) tools among UK financial services workers is fuelling demand for greater oversight and compliance mechanisms, according to new research.
The study, from communications data and intelligence provider Smarsh, found that over a third (37%) of financial services employees in the UK say they frequently use public AI tools such as ChatGPT or Microsoft 365 Copilot in their daily work. However, a majority (55%) report that they have never received formal training on how to use these technologies.
With AI use now widespread, transparency and compliance have become key concerns. Nearly 70 percent of respondents said they would feel more confident using AI tools if their outputs were monitored and captured for compliance. Yet 38 percent are unsure whether their organisation currently has systems in place to do this, and 21 percent say their employer definitely does not.
Compliance concerns over AI use and agent deployment
The report reveals that AI is not only being used to support internal productivity but is also being deployed in public-facing applications. More than two in five (43%) surveyed employees said their firm uses AI Agents – defined as autonomous systems capable of completing tasks without human oversight – for customer communications, including personalised financial advice. A further 22 percent reported the use of such agents in investment activities such as portfolio management or trade recommendations.
However, concerns about regulatory compliance persist. Almost a third (31%) of employees expressed doubts about their organisation's ability to apply the correct regulatory standards to AI Agents. In addition, 29 percent said they were unsure where potentially sensitive information was going when these tools were used.
Tom Padgett, President of Enterprise Business at Smarsh, said, “AI adoption in financial services has accelerated rapidly, with employees embracing these tools to boost productivity. But with innovation comes responsibility. Firms must establish the right guardrails to prevent data leaks and misconduct. The good news is that employees are on board – welcoming a safe, compliant AI environment that builds trust and unlocks long-term growth.”
AI growth outpacing oversight structures
The findings come as the Financial Conduct Authority (FCA) prepares to launch its AI live testing service, a programme intended to support the implementation of customer-facing AI tools within the sector. This regulatory development highlights the increasing focus on ensuring that AI adoption aligns with consumer protection and compliance requirements.
Paul Taylor, Vice President of Product at Smarsh, raised concerns about uncontrolled use of public AI tools in regulated environments.
“Using public AI tools without controls is digital negligence,” he said. “You’re effectively feeding your crown jewels into a black box you don’t own, where the data can’t be deleted, and the logic can’t be explained. It’s reckless. Private tools like Microsoft 365 Copilot and ChatGPT Enterprise are a step in the right direction. Still, if companies aren’t actively capturing and auditing usage, they’re not securing innovation – they’re sleepwalking into a compliance nightmare.”