
Security concerns as professionals share confidential data with AI platforms


A recent study by application security SaaS company Indusface found that nearly two in five professionals surveyed (38 percent) have shared confidential data with AI platforms without their employer’s permission.

This raises concerns about data security, as the storage and handling of such information by AI tools remain unclear.

AI platforms like ChatGPT are widely used in workplaces to assist with tasks such as analysing data, refining reports, and drafting presentations. Over 80 percent of professionals in Fortune 500 enterprises rely on these tools. However, Indusface’s findings show that 11 percent of the data entered into AI tools is strictly confidential, such as internal business strategies.

Personal details, work-related files, client information, financial data, passwords, and intellectual property are among the most frequently shared forms of information. Indusface calls for better cybersecurity training to upskill employees on the safe use of AI and prevent breaches that could compromise individuals and businesses.

Work-Related Files and Confidential Data

Work-related files and documents are among the types of data most commonly shared with AI tools. Professionals often upload internal business files, including confidential strategies, into generative AI platforms. Indusface’s research shows that many users are unaware of how these platforms process or store this data, which may be used to train future AI models.

The report recommends that employees remove any sensitive details when entering data into AI tools to minimise the risk of unintentional exposure. This is particularly important given the increasing reliance on AI in high-stakes environments like large enterprises.
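To illustrate that recommendation, a pre-submission redaction pass might look like the sketch below. This is a minimal, hypothetical example: the patterns and the redact_sensitive helper are assumptions for demonstration, not part of Indusface’s report, and a production tool would rely on a vetted PII-detection library.

```python
import re

# Illustrative patterns only; real PII detection needs a vetted library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact_sensitive(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Ask Jane (jane.doe@example.com, +44 7700 900123) about the Q3 plan."
print(redact_sensitive(prompt))
# Ask Jane ([EMAIL REDACTED], [PHONE REDACTED]) about the Q3 plan.
```

Even a crude filter like this catches the obvious cases; the harder problem is free-text context (names, strategy details) that no regex will reliably spot, which is why the report leans on training rather than tooling alone.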

Personal and Client Information

Personal data, such as names, addresses, and contact details, is also frequently shared with AI platforms. The study revealed that 30 percent of professionals believe protecting their personal data is not worth the effort.

Client and employee information, which often falls under strict regulatory requirements, is also being entered into AI systems. Business leaders should exercise caution when using AI for tasks involving payroll, performance reviews, or sensitive client data. Breaches involving these types of information could lead to regulatory violations, legal action, or significant reputational harm.

Financial Data Vulnerabilities

Financial information is another area of concern. Many professionals rely on large language models (LLMs) for tasks such as generating financial analyses or handling customer data. These models are often trained using data scraped from the web, which can include personally identifiable information (PII) obtained without users’ consent.

Indusface advises organisations to ensure that devices interacting with AI systems are secure and equipped with up-to-date antivirus protection. This precaution can help safeguard sensitive financial data before it is shared with AI platforms.

Sharing Passwords and Access Credentials

The study also highlights the dangers of sharing passwords and access credentials with AI platforms. Many professionals paste credentials into AI tools when seeking insights or assistance, without considering the risks to their accounts. Indusface emphasises the importance of using strong, unique passwords and enabling two-factor authentication to prevent unauthorised access.

As AI systems are not designed to securely store passwords, organisations must educate their employees about safe password practices to avoid compromising multiple accounts.
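One practical complement to that education is a lightweight check that refuses to send a prompt when it appears to contain a credential. The sketch below is a hypothetical example rather than a tool named in the study; the patterns are illustrative, and real secret scanners maintain much larger, regularly updated rule sets.

```python
import re

# Illustrative credential signatures; real secret scanners use far
# larger, vetted rule sets.
CREDENTIAL_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
]

def contains_credentials(prompt: str) -> bool:
    """Return True if the prompt matches any credential-like pattern."""
    return any(p.search(prompt) for p in CREDENTIAL_PATTERNS)

prompt = "Why does login fail? config has password=hunter2"
if contains_credentials(prompt):
    print("Blocked: prompt appears to contain a credential.")
```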

Intellectual Property and Codebase Security

Developers are increasingly turning to AI tools for coding assistance, but this practice poses significant risks to company intellectual property. If proprietary source code is entered into an AI platform, it could be stored or used to train future AI models. This raises concerns about the potential exposure of trade secrets and other sensitive business information.

Organisations are urged to establish clear guidelines for developers and employees when using AI platforms, ensuring that intellectual property is not inadvertently shared or stored externally.
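Such guidelines can be partly enforced in tooling. As a hypothetical sketch, an internal wrapper around an AI assistant could allow only files from explicitly approved directories to be shared; the SHAREABLE_DIRS allowlist and may_share helper below are illustrative assumptions, not a documented control from the report.

```python
from pathlib import Path

# Hypothetical allowlist: only files under these directories may be
# sent to external AI tools; everything else is treated as proprietary.
SHAREABLE_DIRS = [Path("docs/public"), Path("examples")]

def may_share(path: str) -> bool:
    """True only if the file sits inside an approved directory."""
    p = Path(path).resolve()
    # Path.is_relative_to requires Python 3.9+
    return any(p.is_relative_to(d.resolve()) for d in SHAREABLE_DIRS)

for f in ["examples/demo.py", "src/core/pricing_engine.py"]:
    verdict = "OK to share" if may_share(f) else "blocked (proprietary)"
    print(f, "->", verdict)
```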

As AI platforms become more integrated into workplace processes, the risks associated with their use are becoming more apparent. By implementing robust cybersecurity protocols and educating employees on safe practices, organisations can harness the benefits of AI tools while safeguarding sensitive information.
