We know that organisational bias remains a significant challenge within UK workplaces. Around 30% of employees in the UK have reported experiencing or witnessing bias during their careers, with gender bias leading the way, closely followed by ageism and racial bias.
Despite years of efforts to promote equality and diversity, these issues remain entrenched, particularly in professional services industries like finance and law.
In a bid to address inefficiencies and streamline recruitment processes, organisations are increasingly turning to Artificial Intelligence (AI). While AI holds the promise of speed and objectivity, as someone with over 25 years in recruitment who educates leaders on inclusive hiring practices, I am concerned that it also poses a serious risk of perpetuating or exacerbating the existing biases embedded in workplace culture.
Pitfalls of AI Models
AI systems used in recruitment are typically trained on historical data collected from an organisation’s previous hiring practices. However, if these systems are trained on biased data or built on poorly designed algorithms, they will perpetuate that bias in hiring decisions. This is particularly worrying in industries where we at Diverse Talent Networks spend a lot of our time, such as law and finance, where the lack of diversity in senior leadership reflects decades of unequal access to opportunities. One glaring example of AI bias occurred at a major tech e-commerce company.
The firm developed an AI-driven hiring tool that unintentionally discriminated against women. The system, trained on 10 years of resumes, penalised applications containing words associated with women, such as “women’s chess club,” simply because most of the company’s previous hires were men. Similarly, researchers at Carnegie Mellon University discovered that Google’s AI-driven ad platform displayed higher-paying job adverts more frequently to men than to women, reinforcing pre-existing income disparities.
While these organisations acted swiftly to rectify their errors, the incidents serve as cautionary tales of what can happen when human oversight is removed from the recruitment process.
The Human Factor
The examples above highlight the critical need for human involvement in recruitment, particularly at the outset. If you rely solely on AI in your recruitment strategy, you will be at a distinct disadvantage to competitors who also use other ways of finding talent, such as networking. AI may excel at sifting through vast amounts of data quickly and identifying candidates who match certain qualifications.
However, it struggles to account for the nuance and context that humans bring to decision-making. For instance, an AI system might favour candidates from a specific university because previous hires from that institution performed well, but this practice could exclude talented individuals from less traditional backgrounds. AI is also inherently ill-equipped to evaluate the potential of candidates who don’t fit a conventional mould.
For example, someone who took a career break to raise a family or transitioned into a new field might be overlooked by an algorithm trained to favour continuous, linear career progression.
Minimising Bias
The risks of AI bias mean companies must build comprehensive, diverse recruitment strategies, taking deliberate steps to ensure all candidates have equal opportunities to succeed. This is about more than filling a quota: it is about recognising the value that different perspectives and experiences bring to an organisation.
Auditing existing recruitment practices is equally important: reviewing historical hiring data and identifying patterns of bias should inform both the design of fairer AI systems and changes to manual recruitment processes. As part of this, companies must train AI on diverse data sets, exposing systems to a wide range of candidate profiles to minimise the risk of biased decision-making.
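To make the auditing step concrete, here is a minimal sketch (using entirely hypothetical data) of one common check on historical hiring outcomes: comparing selection rates across applicant groups against the widely used "four-fifths rule", under which a group's selection rate below 80% of the highest group's rate is flagged as possible adverse impact. The group labels, records, and threshold are illustrative assumptions, not a description of any specific organisation's data.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (applicant_group, was_hired).
# In a real audit these would come from an organisation's ATS/HR system.
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Compute the hire rate per applicant group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the best rate."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

rates = selection_rates(records)    # group A: 0.75, group B: 0.25
flags = adverse_impact_flags(rates) # group B is flagged for review
```

A flag like this is a prompt for human investigation, not a verdict; the same kind of disparity check can be run on an AI screening tool's outputs before it is ever trusted with live candidates.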
On the human side, diverse panels must be involved in hiring decisions to ensure fairer outcomes and challenge unconscious bias. We should also be prioritising human oversight: AI should complement, not replace, human judgment. In my view, recruiters should review AI-generated recommendations and make the final hiring decisions based on a holistic evaluation of each candidate.
Using AI Responsibly
AI undoubtedly has the potential to offer recruiters efficiency and, perhaps, insight. It is here to stay, but we must use it responsibly. For organisations, this means recognising the limitations of AI and investing in systems and strategies that prioritise fairness and inclusivity. When we talk about strategies, we must not forget human-led approaches: the new wave of networking we at DTN are leading breaks down organisational and unconscious bias and puts the right people in front of business leaders and decision-makers. It is not traditional networking as we know it!
Ultimately, the goal should not be to remove humans from the recruitment process but to use AI as a tool to enhance human decision-making. By combining the speed and precision of AI with the empathy and insight of human recruiters, organisations can work towards creating truly equitable workplaces—where bias has no place, and every individual has the chance to thrive. In a world where talent is the most valuable asset, companies cannot afford to let bias hold them back.
Lee Higgins set DTN up after 23 years in recruitment and executive search which saw him overseeing over 900 projects globally in M&A, private equity, asset & wealth management, energy and consulting. Lee set up DTN to help organisations engage talent from all backgrounds through the power of networking. Lee endeavours to create a future where talent wins.