HRreview 20 Years
Security concerns as professionals share confidential data with AI platforms


A recent study by application security SaaS company Indusface found that nearly two in five professionals surveyed (38 percent) have shared confidential data with AI platforms without their employer’s permission.

This raises data security concerns, as it is often unclear how AI tools store and handle such information.

AI platforms like ChatGPT are widely used in workplaces to assist with tasks such as analysing data, refining reports, and drafting presentations. Over 80 percent of professionals in Fortune 500 enterprises rely on these tools. However, Indusface’s findings show that 11 percent of the data entered into AI tools is strictly confidential, such as internal business strategies.

Personal details, work-related files, client information, financial data, passwords, and intellectual property are among the most frequently shared forms of information. Indusface calls for better cybersecurity training to upskill employees on the safe use of AI and prevent breaches that could compromise individuals and businesses.

Work-Related Files and Confidential Data

Work-related files and documents are one of the most commonly shared types of data with AI tools. Professionals often upload internal business files, including confidential strategies, into generative AI platforms. Indusface’s research shows that many users are unaware of how these platforms process or store this data, which may be used to train future AI models.

The report recommends that employees remove any sensitive details when entering data into AI tools to minimise the risk of unintentional exposure. This is particularly important given the increasing reliance on AI in high-stakes environments like large enterprises.
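As a minimal illustration of the kind of redaction the report recommends, the sketch below strips a few common patterns of personal data from text before it is sent to an AI tool. The patterns and the `redact` helper are illustrative assumptions, not part of Indusface's guidance; real redaction tooling covers far more cases and can also catch names and free-form identifiers, which simple patterns like these miss.

```python
import re

# Illustrative patterns only; production redaction tools use much larger rule sets.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b0\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Please summarise: contact Jane Doe at jane.doe@acme.co.uk or 020 7946 0958."
print(redact(prompt))
```

Note that pattern-based redaction is only a first line of defence: the name "Jane Doe" in the example above would survive untouched, which is why the report pairs this advice with broader cybersecurity training.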

Personal and Client Information

Personal data, such as names, addresses, and contact details, is also frequently shared with AI platforms. The study revealed that 30 percent of professionals believe protecting their personal data is not worth the effort.

Client and employee information, which often falls under strict regulatory requirements, is also being entered into AI systems. Business leaders should exercise caution when using AI for tasks involving payroll, performance reviews, or sensitive client data. Breaches involving these types of information could lead to regulatory violations, legal action, or significant reputational harm.

Financial Data Vulnerabilities

Financial information is another area of concern. Many professionals rely on large language models (LLMs) for tasks such as generating financial analyses or handling customer data. These models are often trained using data scraped from the web, which can include personally identifiable information (PII) obtained without users’ consent.

Indusface advises organisations to ensure that devices interacting with AI systems are secure and equipped with up-to-date antivirus protection. This precaution can help safeguard sensitive financial data before it is shared with AI platforms.

Sharing Passwords and Access Credentials

The study also highlights the dangers of sharing passwords and access credentials with AI platforms. Many professionals paste credentials into AI tools when seeking insights or assistance, without considering the risk to their accounts. Indusface emphasises the importance of using strong, unique passwords and enabling two-factor authentication to prevent unauthorised access.

As AI systems are not designed to securely store passwords, organisations must educate their employees about safe password practices to avoid compromising multiple accounts.
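As one concrete example of the "strong, unique password" practice mentioned above, the sketch below uses Python's standard `secrets` module, which is designed for cryptographically secure random choices. The `generate_password` helper and its length default are assumptions for illustration, not a recommendation from the study.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter, and one digit."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

print(generate_password())
```

In practice, a reputable password manager achieves the same goal without any custom code, and pairs naturally with the two-factor authentication Indusface recommends.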

Intellectual Property and Codebase Security

Developers are increasingly turning to AI tools for coding assistance, but this practice poses significant risks to company intellectual property. If proprietary source code is entered into an AI platform, it could be stored or used to train future AI models. This raises concerns about the potential exposure of trade secrets and other sensitive business information.

Organisations are urged to establish clear guidelines for developers and employees when using AI platforms, ensuring that intellectual property is not inadvertently shared or stored externally.
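One way such guidelines can be made practical is a simple pre-share check that flags obvious secrets in source code before it is pasted into an AI tool. The patterns and the `find_secrets` helper below are illustrative assumptions; dedicated open-source secret scanners use far larger rule sets and entropy-based checks.

```python
import re

# Illustrative patterns only; real secret scanners cover many more credential formats.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def find_secrets(source: str) -> list:
    """Return the names of any secret patterns found in the given source text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]

snippet = 'db_password = "hunter2"\nquery = "SELECT * FROM users"'
hits = find_secrets(snippet)
if hits:
    print("Do not paste this snippet into an AI tool:", ", ".join(hits))
```

A check like this catches only embedded credentials; it does not prevent proprietary logic itself from being exposed, which is why the guidance above also calls for clear organisational rules on which code may be shared at all.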

As AI platforms become more integrated into workplace processes, the risks associated with their use are becoming more apparent. By implementing robust cybersecurity protocols and educating employees on safe practices, organisations can harness the benefits of AI tools while safeguarding sensitive information.

Alessandra Pacelli is a journalist and author contributing to HRreview, an HR news and opinion publication, where she covers topics including labour market trends, employment costs, and workplace issues. She is a journalism graduate and self-described lifelong dog lover who has also written for Dogs Today magazine since 2014.
