As concerns around generative AI in recruitment grow, the EU is introducing legislation governing how companies implement machine-learning tools, says Malcolm Burenstam Linder.

But there is no need to fear – AI can help recruiters significantly reduce bias in hiring, and the EU AI Act is bringing much-needed regulation to this rapidly expanding sector, says Malcolm Burenstam Linder, CEO and co-founder of the digital hiring platform Alva Labs.

Everywhere you look now – every news outlet, every LinkedIn post, even every conversation – one thing keeps cropping up: AI. It is the hottest topic of the year, the CEO’s buzzword, the social media manager’s guarantee of going viral. Wherever you turn, you cannot get away from it – it is stealing our jobs, it is ruining our brains – in fact, in one recent survey, 42 percent of CEOs agreed that AI could destroy humanity.

And yet, in the midst of all this fearmongering and ‘Apocalypse Now’-style rhetoric, there is a genuinely useful piece of technology. Terrifying though it may sound, artificial intelligence can help us modify, amplify and enhance our work practices, particularly when it comes to HR. HR professionals are going to have to confront their fears surrounding AI – and they are going to have to do it now. Because the reality isn’t as scary as it seems.

The role of AI in hiring

For years now, I have worked to bring data-driven and objective hiring to the HR world, using machine-learning software as a tool for solving existing problems in the hiring process. And there are huge problems in the way companies structure their hiring processes, which present a constant challenge: how do you ensure that you are attracting top talent? How can you guarantee that you are actually hiring the one person best suited to the job on offer? And then there is the most glaring problem within recruitment – the fact that humans are fallible and biased, and we make wrong decisions all the time.

There are, of course, questions and problems that crop up surrounding the role of AI in hiring – from how recruiters can spot AI-generated CVs, to how algorithms misunderstand and misinterpret candidates’ speech in video interviews, to the more philosophical question around how humans use AI. Why, when we hear so much about how damaging and dangerous AI can be, should we use it in something as crucial as the hiring process?

The EU AI Act

With the recent implementation of the EU AI Act, I’ve been thinking a lot about how people have interpreted the legislation. Some have seen it as a sign that AI is dangerous; so dangerous, in fact, that the EU has to regulate it. Others, including OpenAI CEO Sam Altman, whose company created ChatGPT, have declared that the EU is so afraid of AI that it has introduced legislation that is too stringent and will limit the development and potential of such exciting technology. As is so often the case in today’s society, this isn’t a binary issue, and it shouldn’t be portrayed as such. There is a middle ground – and I’m occupying it with pride.

It certainly sounds as though AI should be intimidating, particularly if there are laws ‘against’ it. What is crucial to bear in mind as we think about the EU AI Act is that it is regulation – rules in place to control how AI is used, and where. It is not trying to stop the development of AI by any means, but it is ensuring that when AI is used, it is held to high ethical standards and audited frequently. At this stage, we all have to realise that AI is far from a fad; it is here to stay. We must, therefore, adapt the ways we work to accommodate and incorporate AI, and we have to learn to see the many benefits that artificial intelligence can offer the world of HR.

AI is a timesaver

Firstly, AI is a huge timesaver, for both recruiter and candidate. AI- and machine-learning-powered systems streamline the hiring process: fewer steps are required, there is less admin for the hiring manager, less of the candidate’s time is taken up, and the overall cost is lower. Secondly, AI assessment platforms can be trained to identify the ‘skills that help build skills’ – for example, the job description might require someone with proven problem-solving skills, and assessments powered by artificial intelligence can identify a candidate who displays those skills even without having them listed on their CV.

But most importantly, we can train AI to be less biased than humans, and that is why it is such a useful tool with incredible potential for hiring managers. Unconscious bias is so ingrained in our ways of being that we are often reluctant or resistant to admitting it. Outsourcing pre-interview steps to AI assessments grounded in scientific research, as I do, allows your interview screening to be carried out with scientifically proven, significantly reduced levels of bias.

What can we take away from this?

Obviously, there are legitimate concerns about bias that can appear in AI as well. You do not want an AI system trained to rule out candidates with gaps in their CVs, for example, as you will end up alienating parents (and since, in most cases, those with the longer gaps are women, the system will be biased towards hiring men). Similarly, specific education requirements can exclude candidates from under-privileged or unconventional backgrounds. The way to use AI in recruitment effectively is to bring in sets of regulations and auditing systems that can measure how the artificial intelligence is working: setting out guidelines for how AI should be trained and what standards AI in hiring should meet. Only by doing this – which we at Alva are committed to doing – can we ensure that we are using artificial intelligence to help us achieve more, and to use it for good, not ill.
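To make the idea of an auditing system concrete: one long-established check in hiring compliance is the ‘four-fifths rule’ from the US Uniform Guidelines on Employee Selection Procedures, which compares selection rates between groups. The sketch below is purely illustrative – it is not Alva’s method, and the candidate data is hypothetical – but it shows the kind of measurement an audit of an AI screening tool might run.

```python
# Illustrative sketch of the 'four-fifths rule' (adverse impact ratio).
# All group names and outcomes below are hypothetical examples.

def selection_rate(outcomes):
    """Fraction of candidates in a group who were selected (True)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are conventionally flagged for review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes (True = advanced to interview).
men = [True, True, True, False, True]      # 4/5 selected -> 0.8
women = [True, False, True, False, False]  # 2/5 selected -> 0.4

ratio = adverse_impact_ratio(men, women)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Flag: selection rates differ enough to warrant a bias review")
```

A real audit would of course look at far more than a single ratio – data the model was trained on, which features drive its scores, and how it behaves over time – but even a simple metric like this makes bias measurable rather than anecdotal.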

__

Malcolm Burenstam Linder is the CEO and Co-Founder of Alva Labs.

Amelia Brand is the Editor for HRreview and host of the HR in Review podcast series. With a Master’s degree in Legal and Political Theory, her particular interests within HR include employment law, DE&I, and wellbeing in the workplace. Prior to working with HRreview, Amelia was sub-editor of a magazine and editor of the Environmental Justice Project at University College London, writing and overseeing articles for UCL’s weekly newsletter. Her previous academic work has focused on philosophy, politics and law, with particular attention to how artificial intelligence will feature in the future.