Shadow adoption of AI ‘creates accountability challenges’ in consulting firms


The research, led by Professor David Restrepo Amariles of HEC Paris Business School, identified accountability challenges in how consulting firms adopt AI tools. It found that content produced with AI assistance was rated more favourably by managers. Yet when employees disclosed their AI use, the effort behind their work was often undervalued.

Analysts who concealed their use of AI tended to receive more positive evaluations, raising concerns about fairness and oversight.

Managers also found it difficult to determine when AI tools had been used unless they were explicitly informed. Even when AI use was not disclosed, 44 percent of managers suspected that AI had been involved. This trust gap creates a misalignment in accountability, with employees benefiting from shadow adoption of AI while managers misjudge the effort behind the work.


The need for AI policies and oversight

The research suggests that firms should establish clear policies on AI use to address these challenges. It recommends mandatory disclosure of AI tools, a framework for risk-sharing between managers and employees and mechanisms for monitoring AI usage. The findings indicate that structured policies are necessary for fair evaluations and to maintain trust between employees and management.

Professor Restrepo commented, “Our research demonstrates that AI adoption in consulting firms depends not only on technological capabilities but also on managerial experience and structured policy frameworks. Successful integration of AI tools like ChatGPT requires not only transparency but also fair recognition of human effort and well-balanced incentives.”

The risks of AI data exposure in the workplace

The undisclosed use of AI tools in the absence of clear AI policies poses a security risk as well. Jared Siddle, VP of Risk & Compliance at risk management company Protecht, advises employees not to enter confidential business data into AI tools unless approved by their organisation’s risk management team.

“If you wouldn’t post it publicly, don’t put it into an AI tool. AI tools don’t have perfect memories, but they do process and retain data for training and moderation. If an AI platform is compromised or misused, that data could become an easy target for cybercriminals,” he said.

A study by TELUS Digital found that 57 percent of enterprise employees admit to entering high-risk information into publicly available generative AI assistants.

“AI security training isn’t optional, it’s essential. AI is becoming a daily tool for many employees, but without proper guidance, a quick query can turn into a costly data breach,” Siddle added.

The importance of AI governance in HR and risk management

With AI becoming increasingly embedded in workplace operations, HR and risk management teams must take a proactive role in ensuring responsible AI use. A lack of clear policies and training can lead to security breaches and unfair performance evaluations.

Siddle warns that human error lies behind 74 percent of cybersecurity breaches, often stemming from a lack of awareness of the risks involved. He urges office workers to think carefully before using AI tools.

“Confidential data doesn’t belong in chatbots. Check the terms, stick to approved AI tools and don’t trust AI blindly. AI is a workplace tool, not a toy. Treat it like any other software that interacts with sensitive data,” he concluded.

Alessandra Pacelli is a journalist and author contributing to HRreview, where she covers topics including labour market trends, employment costs, and workplace issues.
