Virginia Holden: Why C-suite leaders are misusing AI – and how it’s putting businesses at risk


Picture a board meeting convened to approve a major deal. What the board does not fully acknowledge is that it is relying on AI-generated analysis produced under time pressure. A regulatory interpretation is wrong. A market projection blends verified data with probabilistic inference. A cited precedent does not exist.

The board approves the deal.

Capital is allocated. Public disclosures are signed. Integration begins. Six months later, the regulatory assumption collapses. The error was not malicious or obvious. It entered during analysis, was accepted at approval, and compounded as it moved through the organisation.


When AI Errors Become Structural

What makes this moment different is not simply bias but compounding error. At board level, AI-informed decisions shape capital allocation, regulatory positioning, and investor communication. If flawed outputs are embedded in those decisions, the distortion becomes structural. Unlike operational mistakes, these choices are hard to unwind. In a high-velocity AI environment, one unchallenged assumption can cascade through strategy and incentives.

AI at the operational level creates contained risk. AI at the decision apex changes the architecture of the firm.

Under sustained market pressure (quarterly reporting, activist scrutiny, compressed timelines), the pull towards speed grows stronger. Executive surveys from IBM, PwC and McKinsey show rapid C-suite experimentation with generative AI, often ahead of governance maturity. The issue is not experimentation. It is whether boards recognise that a probabilistic system now influences their highest-impact decisions.

Governance Responsibility at Board Level

Current AI policies largely focus downward: staff misuse, data leakage, unauthorised tools. Yet accountability under the EU AI Act, GDPR and UK governance frameworks sits with leadership.

The EU AI Act requires risk management, logging, and human oversight for high-risk systems, with fines of up to €35 million or 7% of global turnover. GDPR Article 22 restricts automated decision-making and demands transparency. In the UK, boards remain accountable under UK GDPR, the Corporate Governance Code and, in financial services, the Senior Managers & Certification Regime.

Oversight is therefore a board responsibility. Yet governance structures still assume mistakes originate lower in the organisation. Monitoring flows downward. Executive judgement is presumed sound. That assumption is increasingly fragile.

Performance Pressure and Cognitive Shortcuts

Public companies operate under constant performance pressure. Compensation is often tied to short-term metrics. When survival is measured quarter by quarter, behaviours that optimise visible results are rewarded, even if they weaken long-term resilience.

Generative AI fits this environment perfectly. It produces synthesis and confident language at speed. Under stress, cognitive bandwidth narrows and reliance on shortcuts increases. Fluent outputs feel credible. Coherence is mistaken for accuracy. AI becomes a cognitive shortcut.

The risk is not use, but opacity. Informal executive use in board papers, risk analysis or investor communications often leaves no audit trail. Prompts are not recorded. Assumptions are not flagged. Outputs are not independently checked.

If AI materially influences decisions without transparency, organisations may struggle to demonstrate compliance with oversight obligations. Ignorance will not protect them.

There is also a cultural issue. Boards are rarely scrutinised in the way employees are. Challenging senior leaders carries risk. Traditional governance assumed authority equalled judgement. Probabilistic AI undermines that assumption by producing confidence without certainty.

From a competitive perspective, if AI is used to stabilise short-term narratives rather than strengthen long-term capability, fragility rises. An organisation becomes fragile when the shocks it creates exceed its capacity to absorb them. AI increases the speed and scale of those shocks. Without adaptation, competitiveness weakens.

What Boards Should Do Next

So what should boards do?

Treat executive AI use as a governance matter and a source of competitive advantage, not just a productivity tool. Disclose when AI has shaped strategic materials. Keep simple audit trails. Record prompts. Separate fact from inference. Verify critical claims. Align incentives with long-term resilience, not only quarterly optics. Train leaders in cognitive risk, not only regulation; bias does not disappear because an output reads fluently.
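
To make "keep simple audit trails" concrete, here is a minimal sketch of what a single audit-trail entry could look like, assuming Python and a plain JSON-lines log. The AIUsageRecord fields and the log_ai_usage helper are illustrative assumptions, not features of any particular tool.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class AIUsageRecord:
    """One audit-trail entry for AI-assisted board material (illustrative fields)."""
    document: str        # e.g. which board paper and section the output fed into
    model: str           # which system produced the draft
    prompt: str          # the exact prompt used
    verified_facts: list[str] = field(default_factory=list)  # claims checked against sources
    inferences: list[str] = field(default_factory=list)      # probabilistic or unverified claims
    reviewer: str = ""   # who independently checked the output
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_ai_usage(record: AIUsageRecord, log_path: Path = Path("ai_usage_log.jsonl")) -> None:
    """Append the record to a JSON-lines log so AI influence on a decision stays visible."""
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: recording that a market projection mixed verified data with inference
log_ai_usage(AIUsageRecord(
    document="Board paper: proposed acquisition, market outlook",
    model="general-purpose LLM",
    prompt="Summarise the regulatory outlook for the target market over the next 24 months.",
    verified_facts=["Current licensing regime checked against the regulator's published guidance"],
    inferences=["Projected approval timeline is a model estimate, not a verified fact"],
    reviewer="Independent risk review, not the paper's author",
))

Even a record this simple separates verified facts from inference and names an independent reviewer, which is much of what a later compliance or oversight question will ask for.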

Structure AI usage. Tools such as AnnIQ can reduce workload while keeping decisions transparent by showing sources, flagging assumptions and creating a clear record. Speed must not mean reduced accountability.

Finally, ensure challenge flows upward as well as downward. Independent review of AI-influenced decisions should be normal practice. Regulation assumes senior accountability. But accountability requires visibility, and visibility requires boards to include themselves within the control system, however much they feel they know better.

The real governance question is whether boards are prepared to govern their own use of AI, deliver AI tools that provide competitive advantage, and deploy them well.

Chief Marketing Officer

Gini Holden leads brand, narrative and go-to-market strategy for Anni, the company’s proprietary AI platform, designed to reduce complexity and cognitive load by replacing fragmented marketing activity with a single, coherent system.

Gini’s work is shaped by a long-standing focus on how humans actually make decisions, and how culture, systems and incentives shape behaviour at scale. Her background spans academia, senior commercial leadership and large-scale systems design, with a consistent emphasis on reducing friction and redesigning decision environments to support better judgement rather than overwhelm.
