New research from personal finance comparison site finder.com has revealed a concerning bias embedded in AI tools such as ChatGPT.

When prompted to illustrate individuals in high-powered roles, ChatGPT generated images depicting 99 percent of them as white men.

The findings suggest that integrating such biased AI systems into workplaces could impede the progress of women and minorities.

Finder conducted an experiment by tasking OpenAI’s image generator, DALL-E, with creating images of individuals in various professions, including finance-related jobs and high-ranking positions such as financial advisor, successful investor and CEO.

Shockingly, out of the 100 images generated, 99 portrayed white men.

The reality is far from this

By contrast, real-world statistics from the World Economic Forum paint a far more diverse picture. Globally, one in three businesses was owned by a woman in 2022, women held over 30 percent of Fortune 500 board seats in the US, and 42 percent of FTSE 100 board members in the UK were women by 2023.

However, when prompted to depict a typical secretary, the results shifted dramatically: nine out of ten images showed white women.

Addressing the bias within AI models, Ruhi Khan, an ESRC researcher at the London School of Economics, highlighted the patriarchal origins of these systems, which are shaped by the biases of their predominantly male developers and by historical training data. Khan warned that unchallenged use of such AI models in the workplace could exacerbate gender disparities.

ChatGPT v. reality

Moreover, with an estimated 70 percent of companies utilising automated applicant tracking systems for hiring, there is a risk that biased AI could further disadvantage women and minorities in the job market.

To tackle this issue, AI creative director Omar Karim suggested employing monitoring and adjustment mechanisms within AI systems to promote diversity in their outputs.

Liz Edwards, a consumer expert at finder.com, underscored the broader implications of biased AI beyond the workplace, emphasising the need for ethical AI development to safeguard against regressive steps in equality across various sectors.

Amelia Brand is the Editor for HRreview, and host of the HR in Review podcast series. With a Master’s degree in Legal and Political Theory, her particular interests within HR include employment law, DE&I, and wellbeing within the workplace. Prior to working with HRreview, Amelia was Sub-Editor of a magazine, and Editor of the Environmental Justice Project at University College London, writing and overseeing articles for UCL’s weekly newsletter. Her previous academic work has focused on philosophy, politics and law, with a special focus on how artificial intelligence will feature in the future.