Shadow adoption of AI – where employees use AI tools like ChatGPT without disclosing it – can benefit employees professionally but creates trust and accountability problems for their firms, according to new research.
The research, led by Professor David Restrepo Amariles of HEC Paris Business School, examined the adoption of AI tools in consulting firms. It found that content produced with AI assistance was rated more favourably by managers. However, when employees disclosed their AI use, the effort behind their work was often undervalued.
Analysts who concealed their use of AI tended to receive more positive evaluations, raising concerns about fairness and oversight.
Managers also found it difficult to determine when AI tools had been used unless they were explicitly informed. Even when AI use was not disclosed, 44 percent of managers suspected that AI had been involved. This trust gap creates a misalignment in accountability, with employees benefiting from shadow adoption of AI while managers misjudge the effort behind the work.
The need for AI policies and oversight
The research suggests that firms should establish clear policies on AI use to address these challenges. It recommends mandatory disclosure of AI tools, a framework for risk-sharing between managers and employees, and mechanisms for monitoring AI usage. The findings indicate that structured policies are necessary to ensure fair evaluations and to maintain trust between employees and management.
Professor Restrepo commented, “Our research demonstrates that AI adoption in consulting firms depends not only on technological capabilities but also on managerial experience and structured policy frameworks. Successful integration of AI tools like ChatGPT requires not only transparency but also fair recognition of human effort and well-balanced incentives.”
The risks of AI data exposure in the workplace
The undisclosed use of AI tools in the absence of clear AI policies poses a security risk as well. Jared Siddle, VP of Risk & Compliance at risk management company Protecht, advises employees not to enter confidential business data into AI tools unless approved by their organisation’s risk management team.
“If you wouldn’t post it publicly, don’t put it into an AI tool. AI tools don’t have perfect memories, but they do process and retain data for training and moderation. If an AI platform is compromised or misused, that data could become an easy target for cybercriminals,” he said.
A study by TELUS Digital found that 57 percent of enterprise employees admit to entering high-risk information into publicly available generative AI assistants.
“AI security training isn’t optional, it’s essential. AI is becoming a daily tool for many employees, but without proper guidance, a quick query can turn into a costly data breach,” Siddle added.
The importance of AI governance in HR and risk management
With AI becoming increasingly embedded in workplace operations, HR and risk management teams must take a proactive role in ensuring responsible AI use. A lack of clear policies and training can lead to security breaches and unfair performance evaluations.
Siddle warns that human error is behind 74 percent of cybersecurity breaches, often due to a lack of awareness of the risks involved. He urges office workers to think carefully before using AI tools.
“Confidential data doesn’t belong in chatbots. Check the terms, stick to approved AI tools and don’t trust AI blindly. AI is a workplace tool, not a toy. Treat it like any other software that interacts with sensitive data,” he concluded.