Security concerns as professionals share confidential data with AI platforms


A recent study by application security SaaS company Indusface found that nearly 2 in 5 professionals surveyed (38%) have shared confidential data with AI platforms without their employer’s permission.

This raises concerns about data security, as it is often unclear how AI platforms store and handle the information entered into them.

AI platforms like ChatGPT are widely used in workplaces to assist with tasks such as analysing data, refining reports, and drafting presentations. Over 80 percent of professionals in Fortune 500 enterprises rely on these tools. However, Indusface’s findings show that 11 percent of the data entered into AI tools is strictly confidential, such as internal business strategies.

Personal details, work-related files, client information, financial data, passwords, and intellectual property are among the most frequently shared forms of information. Indusface calls for better cybersecurity training to upskill employees on the safe use of AI and prevent breaches that could compromise individuals and businesses.


Work-Related Files and Confidential Data

Work-related files and documents are one of the most commonly shared types of data with AI tools. Professionals often upload internal business files, including confidential strategies, into generative AI platforms. Indusface’s research shows that many users are unaware of how these platforms process or store this data, which may be used to train future AI models.

The report recommends that employees remove any sensitive details when entering data into AI tools to minimise the risk of unintentional exposure. This is particularly important given the increasing reliance on AI in high-stakes environments like large enterprises.
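The report's advice to strip sensitive details before pasting text into an AI tool can be partially automated. The sketch below is a minimal, illustrative example (not part of Indusface's report): it masks a few common identifier patterns with regular expressions. The pattern names and regexes are assumptions for illustration only; real PII detection needs a dedicated tool and human review.

```python
import re

# Illustrative redaction patterns -- deliberately simple, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com on 020 7946 0958 about the Q3 plan."))
```

A script like this could sit between an internal document and the clipboard, but it only reduces, rather than eliminates, the risk of unintentional exposure.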

Personal and Client Information

Personal data, such as names, addresses, and contact details, is also frequently shared with AI platforms. The study revealed that 30 percent of professionals believe protecting their personal data is not worth the effort.

Client and employee information, which often falls under strict regulatory requirements, is also being entered into AI systems. Business leaders should exercise caution when using AI for tasks involving payroll, performance reviews, or sensitive client data. Breaches involving these types of information could lead to regulatory violations, legal action, or significant reputational harm.

Financial Data Vulnerabilities

Financial information is another area of concern. Many professionals rely on large language models (LLMs) for tasks such as generating financial analyses or handling customer data. These models are often trained using data scraped from the web, which can include personally identifiable information (PII) obtained without users’ consent.

Indusface advises organisations to ensure that devices interacting with AI systems are secure and equipped with up-to-date antivirus protection. This precaution can help safeguard sensitive financial data before it is shared with AI platforms.

Sharing Passwords and Access Credentials

The study also highlights the dangers of sharing passwords and access credentials with AI platforms. Many professionals paste credentials into AI tools when seeking insights or assistance, without considering the risk to their accounts. Indusface emphasises the importance of using strong, unique passwords and enabling two-factor authentication to prevent unauthorised access.

As AI systems are not designed to securely store passwords, organisations must educate their employees about safe password practices to avoid compromising multiple accounts.
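As one concrete illustration of the "strong, unique passwords" advice (this example is mine, not Indusface's), Python's standard-library `secrets` module generates cryptographically secure random passwords; in practice a password manager plus two-factor authentication remains the safer default.

```python
import secrets
import string

# Alphabet of letters, digits, and punctuation for password characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically secure random password."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

Because each password is generated independently, reusing one across accounts, or pasting it into an AI chat, defeats the purpose; each credential should be stored only in a password manager.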

Intellectual Property and Codebase Security

Developers are increasingly turning to AI tools for coding assistance, but this practice poses significant risks to company intellectual property. If proprietary source code is entered into an AI platform, it could be stored or used to train future AI models. This raises concerns about the potential exposure of trade secrets and other sensitive business information.

Organisations are urged to establish clear guidelines for developers and employees when using AI platforms, ensuring that intellectual property is not inadvertently shared or stored externally.

As AI platforms become more integrated into workplace processes, the risks associated with their use are becoming more apparent. By implementing robust cybersecurity protocols and educating employees on safe practices, organisations can harness the benefits of AI tools while safeguarding sensitive information.

Alessandra Pacelli is a journalist and author contributing to HRreview, where she covers topics including labour market trends, employment costs, and workplace issues.
