Security concerns as professionals share confidential data with AI platforms


A recent study by application security SaaS company Indusface found that nearly 2 in 5 professionals surveyed (38%) have shared confidential data with AI platforms without their employer’s permission.

This raises concerns about data security, as the storage and handling of such information by AI tools remain unclear.

AI platforms like ChatGPT are widely used in workplaces to assist with tasks such as analysing data, refining reports, and drafting presentations. Over 80 percent of professionals in Fortune 500 enterprises rely on these tools. However, Indusface’s findings show that 11 percent of the data entered into AI tools is strictly confidential, such as internal business strategies.

Personal details, work-related files, client information, financial data, passwords, and intellectual property are among the most frequently shared forms of information. Indusface calls for better cybersecurity training to upskill employees on the safe use of AI and prevent breaches that could compromise individuals and businesses.


Work-Related Files and Confidential Data

Work-related files and documents are one of the most commonly shared types of data with AI tools. Professionals often upload internal business files, including confidential strategies, into generative AI platforms. Indusface’s research shows that many users are unaware of how these platforms process or store this data, which may be used to train future AI models.

The report recommends that employees remove any sensitive details when entering data into AI tools to minimise the risk of unintentional exposure. This is particularly important given the increasing reliance on AI in high-stakes environments like large enterprises.
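The kind of redaction the report recommends can be sketched in a few lines. The patterns below are assumed examples for illustration (they are not taken from the Indusface report) and mask common sensitive fields before a prompt leaves the user's machine:

```python
import re

# Illustrative redaction patterns (assumed examples, not from the Indusface
# report): mask emails, card-like digit runs, and phone-like numbers before
# the text is sent to an AI tool.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b(?:\+?\d[ -]?){9,12}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarise this: contact jane.doe@example.com about invoice 42."
print(redact(prompt))  # the email address is replaced with [EMAIL REDACTED]
```

A sketch like this catches only obviously patterned data; names, strategies, and free-text client details still need human judgment before anything is pasted into an AI tool.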

Personal and Client Information

Personal data, such as names, addresses, and contact details, is also frequently shared with AI platforms. The study revealed that 30 percent of professionals believe protecting their personal data is not worth the effort.

Client and employee information, which often falls under strict regulatory requirements, is also being entered into AI systems. Business leaders should exercise caution when using AI for tasks involving payroll, performance reviews, or sensitive client data. Breaches involving these types of information could lead to regulatory violations, legal action, or significant reputational harm.

Financial Data Vulnerabilities

Financial information is another area of concern. Many professionals rely on large language models (LLMs) for tasks such as generating financial analyses or handling customer data. These models are often trained using data scraped from the web, which can include personally identifiable information (PII) obtained without users’ consent.

Indusface advises organisations to ensure that devices interacting with AI systems are secure and equipped with up-to-date antivirus protection. This precaution can help safeguard sensitive financial data before it is shared with AI platforms.

Sharing Passwords and Access Credentials

The study also highlights the dangers of sharing passwords and access credentials with AI platforms. Many professionals paste credentials into prompts when seeking insights or assistance, without considering the risk to their accounts. Indusface emphasises the importance of using strong, unique passwords and enabling two-factor authentication to prevent unauthorised access.

As AI systems are not designed to securely store passwords, organisations must educate their employees about safe password practices to avoid compromising multiple accounts.
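One simple safeguard organisations can teach is a pre-send check that holds back prompts that appear to contain credentials. The sketch below is an assumed illustration (not a tool described in the report); the keyword and key-shape patterns are examples only:

```python
import re

# Illustrative guardrail (an assumed sketch, not Indusface's tooling):
# flag a prompt before it reaches an AI tool if it looks like it
# contains a password or an access key.
CREDENTIAL_HINTS = [
    # "password: ...", "api_key = ..." and similar assignments
    re.compile(r"(?i)\b(password|passwd|api[_ -]?key|secret|token)\b\s*[:=]"),
    # the shape of an AWS access key ID, as one concrete example
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
]

def looks_like_credentials(text: str) -> bool:
    return any(p.search(text) for p in CREDENTIAL_HINTS)

def safe_to_send(prompt: str) -> bool:
    """Return False when the prompt should be held back for review."""
    return not looks_like_credentials(prompt)

print(safe_to_send("Why does my login keep failing?"))   # True
print(safe_to_send("password: hunter2, fix my script"))  # False
```

A check like this is a prompt-hygiene aid, not a substitute for the training and two-factor authentication the report recommends.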

Intellectual Property and Codebase Security

Developers are increasingly turning to AI tools for coding assistance, but this practice poses significant risks to company intellectual property. If proprietary source code is entered into an AI platform, it could be stored or used to train future AI models. This raises concerns about the potential exposure of trade secrets and other sensitive business information.

Organisations are urged to establish clear guidelines for developers and employees when using AI platforms, ensuring that intellectual property is not inadvertently shared or stored externally.

As AI platforms become more integrated into workplace processes, the risks associated with their use are becoming more apparent. By implementing robust cybersecurity protocols and educating employees on safe practices, organisations can harness the benefits of AI tools while safeguarding sensitive information.

Alessandra Pacelli is a journalist and author contributing to HRreview, where she covers topics including labour market trends, employment costs, and workplace issues.
