AI is changing how work is done across organisations, but rolling it out well is as much a people challenge as a technology one. Recent conversations about AI’s implications for jobs, skills and ways of working often reveal a consistent pattern: women engage in thoughtful, problem-solving discussions about AI and the ways it can enhance their work, yet AI adoption rates among women still lag.
This is not about capability, but about biased systems, workplace perceptions, and the additional scrutiny many women face in professional settings.
While women often face additional barriers, it’s important to recognise that AI confidence and trust gaps can affect under-represented groups more broadly, including by role, age, or function. For HR leaders, this context defines where action is needed. HR and people teams shape hiring, learning, governance and culture — all the levers that determine whether AI becomes a fairness accelerator or an inequality multiplier.
Why thoughtful adoption matters
Conversations with HR and technology professionals often surface two recurring themes: concern about bias in AI systems, for example hiring tools trained on old data that mirrors and perpetuates past inequalities, and worry that using AI may seem like a shortcut that undermines real expertise.
Studies indicate the proportion of women adopting AI tools can be 10-40% smaller than that of men, but female leaders are more likely to recognise AI’s transformative potential. That disparity suggests the issue isn’t about interest or skill, but about trust and risk.
When individuals, particularly those whose credibility is already under scrutiny, do not trust how tools were built or how they will be judged for using them, adoption slows. Slower adoption means fewer different voices shaping how AI is used, leaving the decisions to the same groups of people who already have the most influence in technology and strategy.
The HR opportunity
HR is not merely a user of AI. HR must act as a steward of how AI touches people and processes across the organisation. This stewardship means creating clear rules for managing AI: making sure systems are transparent and checked for bias, and holding vendors accountable. Equally important is learning tailored to each role, showing how AI can help across recruitment, people management and learning & development, rather than using the same training for everyone.
Culture and change management also sit squarely with HR. Organisations must create safe spaces for experimentation where mistakes are treated as learning opportunities and where clear support from senior leaders makes using AI normal and accepted.
Finally, measurement and accountability are essential. HR should monitor AI adoption trends and outcomes in aggregate, for example, by gender or job level, to identify and address potential inequities early.
Practical steps HR leaders can implement
To put ideas into action, start with small pilot projects that include employees from different roles and a mix of genders, with structured feedback that informs the wider rollout.
Learning programmes should be practical, with short sessions that show where AI adds value and where human judgement must have the final say. Mentorship should be formalised, pairing early AI users with more hesitant colleagues, and leaders should ensure that under-represented employees can take part in pilot projects.
Transparency in use is another critical step. Organisations should make it clear when AI affects decisions and measure the impact, not just how much AI is being used. Cultural change happens faster when senior leaders lead by example. Those in influential positions, who are often men, should actively support fairness checks and ensure diverse voices are part of AI decision-making groups.
Policies and checklists are necessary but not sufficient. Culture changes when decisions are transparent, mentorship is rewarded, and experimenting with AI is treated as a responsibility rather than a risk; that is how policy becomes everyday practice. Highlight stories of responsible use, recognise employees who surface problems, and include AI skills in leadership development programmes. These behaviours embed empathetic and ethical AI into an organisation.
Inclusion as a strategy
AI has the technical potential to reduce bias, from anonymised CV screening to analysing pay gaps or highlighting uneven participation in meetings. Yet tools alone cannot shift culture. HR and leadership must create the environment where these tools are applied responsibly.
By providing safe spaces for experimentation, improving oversight, supporting mentorship across genders and levels, and measuring results openly, HR can convert AI hesitation into leadership.
When HR leads the way on fair AI adoption, it doesn’t just future-proof the workforce, it models what responsible innovation looks like. The goal is not only better performance but fairer outcomes, ensuring AI benefits everyone in the organisation.
Yuan Deng is a Senior Product Leader and Speaker with over 10 years of experience scaling AI-driven SaaS products, delivering 200%+ ROI growth, and leading cross-functional teams across the US, UK, AU, and CA markets.
Throughout her career, she has founded and scaled a startup, achieving 10–60% YoY revenue growth across multiple international markets. She also drove transformative results at Capital One, including 200% ROI growth on a core acquisition product and scaling infrastructure for over 2M users.
Her background combines deep technical expertise (AWS, Azure Databricks, Agile, Power BI) with a strong commercial mindset, specialising in aligning product strategy with business objectives, driving go-to-market success, and scaling high-performing teams. Yuan is passionate about building data-driven products that unlock new market opportunities and delivering measurable business impact through product innovation and strategic leadership.
