HRreview 20 Years

Virginia Holden: Why C-suite leaders are misusing AI – and how it’s putting businesses at risk

Picture a board about to approve a major deal. What the board does not fully acknowledge is that it relied on AI-generated analysis produced under time pressure. A regulatory interpretation is wrong. A market projection blends verified data with probabilistic inference. A cited precedent does not exist.

The board approves the deal.

Capital is allocated. Public disclosures are signed. Integration begins. Six months later, the regulatory assumption collapses. The error was not malicious or obvious. It entered during analysis, was accepted at approval, and compounded as it moved through the organisation.

When AI Errors Become Structural

What makes this moment different is not simply bias but compounding error. At board level, AI-informed decisions shape capital allocation, regulatory positioning, and investor communication. If flawed outputs are embedded in those decisions, the distortion becomes structural. Unlike operational mistakes, these choices are hard to unwind. In a high-velocity AI environment, one unchallenged assumption can cascade through strategy and incentives.

AI at the operational level creates contained risk. AI at the decision apex changes the architecture of the firm.

Under sustained market pressure (quarterly reporting, activist scrutiny, compressed timelines), the pull towards speed grows stronger. Executive surveys from IBM, PwC and McKinsey show rapid C-suite experimentation with generative AI, often ahead of governance maturity. The issue is not experimentation. It is whether boards recognise that a probabilistic system now influences their highest-impact decisions.

Governance Responsibility at Board Level

Current AI policies largely focus downward: staff misuse, data leakage, unauthorised tools. Yet accountability under the EU AI Act, GDPR and UK governance frameworks sits with leadership.

The EU AI Act requires risk management, logging, and human oversight for high-risk systems, with fines of up to €35 million or 7% of global turnover. GDPR Article 22 restricts automated decision-making and demands transparency. In the UK, boards remain accountable under UK GDPR, the Corporate Governance Code and, in financial services, the Senior Managers & Certification Regime.

Oversight is therefore a board responsibility. Yet governance structures still assume mistakes originate lower in the organisation. Monitoring flows downward. Executive judgement is presumed sound. That assumption is increasingly fragile.

Performance Pressure and Cognitive Shortcuts

Public companies operate under constant performance pressure. Compensation is often tied to short-term metrics. When survival is measured quarter by quarter, behaviours that optimise visible results are rewarded, even if they weaken long-term resilience.

Generative AI fits this environment perfectly. It produces rapid synthesis and confident language on demand. Under stress, cognitive bandwidth narrows and reliance on shortcuts increases. Fluent outputs feel credible. Coherence is mistaken for accuracy. AI becomes a cognitive shortcut.

The risk is not use, but opacity. Informal executive use in board papers, risk analysis or investor communications often leaves no audit trail. Prompts are not recorded. Assumptions are not flagged. Outputs are not independently checked.

If AI materially influences decisions without transparency, organisations may struggle to demonstrate compliance with oversight obligations. Ignorance will not protect them. There is also a cultural issue. Boards are rarely scrutinised in the way employees are. Challenging senior leaders carries risk. Traditional governance assumed authority equalled judgement. Probabilistic AI undermines that assumption by producing confidence without certainty.

From a competitive perspective, if AI is used to stabilise short-term narratives rather than strengthen long-term capability, fragility rises. An organisation becomes fragile when the shocks it creates exceed its capacity to absorb them. AI increases the speed and scale of those shocks. Without adaptation, competitiveness weakens.

What Boards Should Do Next

So what should boards do?

Treat executive AI use as a governance matter and a source of competitive advantage, not just a productivity tool. Disclose when AI has shaped strategic materials. Keep simple audit trails: record prompts, separate fact from inference, and verify critical claims. Align incentives with long-term resilience, not only quarterly optics. Train leaders in cognitive risk, not only regulation; bias matters.

Structure AI usage. Tools such as AnnIQ can reduce workload while keeping decisions transparent by showing sources, flagging assumptions and creating a clear record. Speed must not mean reduced accountability.

Finally, ensure challenge flows upward as well as downward. Independent review of AI-influenced decisions should be normal practice. Regulation assumes senior accountability. But accountability requires visibility, and visibility requires boards to include themselves within the control system, however much they feel they know better.

The real governance question is whether boards are prepared to govern their own use of AI: to deliver tools that provide competitive advantage and to deploy them well.

Chief Marketing Officer

Gini Holden leads brand, narrative and go-to-market strategy for Anni, the company’s proprietary AI platform, designed to reduce complexity and cognitive load by replacing fragmented marketing activity with a single, coherent system.

Gini’s work is shaped by a long-standing focus on how humans actually make decisions, and how culture, systems and incentives shape behaviour at scale. Her background spans academia, senior commercial leadership and large-scale systems design, with a consistent emphasis on reducing friction and redesigning decision environments to support better judgement rather than overwhelm.
