A recent study by application security SaaS company Indusface found that nearly two in five professionals surveyed (38 percent) have shared confidential data with AI platforms without their employer’s permission.

This raises concerns about data security, as the storage and handling of such information by AI tools remain unclear.

AI platforms like ChatGPT are widely used in workplaces to assist with tasks such as analysing data, refining reports, and drafting presentations. Over 80 percent of professionals in Fortune 500 enterprises rely on these tools. However, Indusface’s findings show that 11 percent of the data entered into AI tools is strictly confidential, such as internal business strategies.

Personal details, work-related files, client information, financial data, passwords, and intellectual property are among the most frequently shared forms of information. Indusface calls for better cybersecurity training to upskill employees on the safe use of AI and prevent breaches that could compromise individuals and businesses.

Work-Related Files and Confidential Data

Work-related files and documents are among the types of data most commonly shared with AI tools. Professionals often upload internal business files, including confidential strategies, into generative AI platforms. Indusface’s research shows that many users are unaware of how these platforms process or store this data, which may be used to train future AI models.

The report recommends that employees remove any sensitive details when entering data into AI tools to minimise the risk of unintentional exposure. This is particularly important given the increasing reliance on AI in high-stakes environments like large enterprises.
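As an illustration of what stripping sensitive details could look like in practice, the sketch below redacts a few common patterns before text is pasted into an AI tool. The patterns and the redact helper are hypothetical examples for this article, not part of Indusface’s report, and a production filter would need far more robust detection:

```python
import re

# Illustrative patterns for common sensitive details; a real filter would
# use a much larger rule set or a trained PII-detection model.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this: contact Jane at jane.doe@acme.com or +44 7911 123456."
print(redact(prompt))
# Summarise this: contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Even a simple filter like this catches the most obvious leaks; anything it misses still benefits from an employee trained to review prompts before sending them.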

Personal and Client Information

Personal data, such as names, addresses, and contact details, is also frequently shared with AI platforms. The study revealed that 30 percent of professionals believe protecting their personal data is not worth the effort.

Client and employee information, which often falls under strict regulatory requirements, is also being entered into AI systems. Business leaders should exercise caution when using AI for tasks involving payroll, performance reviews, or sensitive client data. Breaches involving these types of information could lead to regulatory violations, legal action, or significant reputational harm.
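Where AI assistance is still wanted for such tasks, one common mitigation is to pseudonymise records before they leave the organisation, keeping the lookup table internal. The sketch below is a minimal illustration with made-up field names, not a compliance-grade control:

```python
import uuid

# Hypothetical employee record; the field names are illustrative only.
record = {"name": "Jane Doe", "employee_id": "E-1042", "salary": 58000}

mapping = {}  # token -> real value; stays inside the organisation

def pseudonymise(rec: dict, fields=("name", "employee_id")) -> dict:
    """Swap direct identifiers for random tokens, remembering the mapping."""
    out = dict(rec)
    for field in fields:
        token = f"SUBJ-{uuid.uuid4().hex[:8]}"
        mapping[token] = out[field]
        out[field] = token
    return out

safe = pseudonymise(record)
print(safe)
# e.g. {'name': 'SUBJ-3f9a1c2b', 'employee_id': 'SUBJ-7d0e55aa', 'salary': 58000}
```

The AI platform only ever sees tokens; the mapping needed to re-identify anyone never leaves the organisation’s own systems.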

Financial Data Vulnerabilities

Financial information is another area of concern. Many professionals rely on large language models (LLMs) for tasks such as generating financial analyses or handling customer data. These models are often trained using data scraped from the web, which can include personally identifiable information (PII) obtained without users’ consent.

Indusface advises organisations to ensure that devices interacting with AI systems are secure and equipped with up-to-date antivirus protection. Securing the endpoint reduces the chance that sensitive financial data is intercepted or exfiltrated before it is ever shared with an AI platform.

Sharing Passwords and Access Credentials

The study also highlights the dangers of sharing passwords and access credentials with AI platforms. Many professionals paste credentials into AI tools while seeking insights or assistance, without considering the risk to their accounts. Indusface emphasises the importance of using strong, unique passwords and enabling two-factor authentication to prevent unauthorised access.

As AI systems are not designed to securely store passwords, organisations must educate their employees about safe password practices to avoid compromising multiple accounts.
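A minimal sketch of the kind of guard rail such training might encourage, assuming a hypothetical check that runs before a prompt leaves the employee’s machine:

```python
import re

# Illustrative credential patterns; real secret scanners used in CI
# pipelines maintain far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)\b(api[_-]?key|token)\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt appears to include a credential."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

prompt = "Why does login fail? Config says password=Sup3rSecret!"
if contains_secret(prompt):
    raise ValueError("Prompt appears to contain a credential; remove it first.")
```

Blocking the prompt outright, rather than silently redacting it, forces the employee to notice the credential and rotate it if it has already been exposed.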

Intellectual Property and Codebase Security

Developers are increasingly turning to AI tools for coding assistance, but this practice poses significant risks to company intellectual property. If proprietary source code is entered into an AI platform, it could be stored or used to train future AI models. This raises concerns about the potential exposure of trade secrets and other sensitive business information.

Organisations are urged to establish clear guidelines for developers and employees when using AI platforms, ensuring that intellectual property is not inadvertently shared or stored externally.
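As a sketch of how such a guideline could be enforced, assuming AI requests are routed through an internal tool that checks file paths against a denylist (the patterns below are purely illustrative):

```python
import fnmatch

# Illustrative policy: path patterns that must never be uploaded to an
# external AI service. A real policy would live in configuration, not code.
BLOCKED_GLOBS = [
    "src/proprietary/*",
    "*.pem",
    "*secret*",
]

def is_blocked(path: str) -> bool:
    """Check a candidate upload path against the denylist."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in BLOCKED_GLOBS)

for candidate in ("src/proprietary/pricing/engine.py", "docs/readme.md"):
    print(candidate, "->", "BLOCKED" if is_blocked(candidate) else "allowed")
```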

As AI platforms become more integrated into workplace processes, the risks associated with their use are becoming more apparent. By implementing robust cybersecurity protocols and educating employees on safe practices, organisations can harness the benefits of AI tools while safeguarding sensitive information.