When ChatGPT was asked to show people in high-powered roles, 99% were white men, according to new research from personal finance comparison site finder.com.
The findings suggest that implementing language models like ChatGPT in the workplace could be detrimental to the progression of women and minorities.
Finder asked OpenAI’s image generator, DALL-E, to paint a picture of a typical person in a range of finance jobs and high-powered positions, such as a financial advisor, a successful investor or the CEO of a successful company. Each prompt was repeated 10 times to check that the findings were consistent. The results were staggering: of the 100 images returned, 99 depicted white men.
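The audit method described above amounts to generating a fixed number of images per job-title prompt and tallying the demographics shown. A minimal sketch of that tallying step is below; the study's actual tooling is not described, so the annotation labels and the `tally` helper here are hypothetical, with each generated image assumed to be manually labelled by a reviewer.

```python
from collections import Counter

def tally(annotations):
    """Aggregate manually assigned demographic labels across all prompts.

    annotations: dict mapping each job-title prompt to a list of labels,
    one label per generated image. Returns each label's overall share.
    """
    counts = Counter(label for labels in annotations.values() for label in labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Made-up annotations mirroring the reported split: 10 prompts x 10 images,
# with a single image labelled differently.
annotations = {f"job_{i}": ["white man"] * 10 for i in range(10)}
annotations["job_0"][0] = "white woman"
shares = tally(annotations)  # {'white man': 0.99, 'white woman': 0.01}
```

With 100 images in total, the reported result corresponds to a 0.99 share for "white man", as the example reproduces.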
In reality, research from the World Economic Forum found that globally 1 in 3 businesses were owned by women in 2022. In the US, women held more than 30% of Fortune 500 board seats in 2022. And in the UK, 42% of FTSE 100 board members were women in 2023.
Finder then asked the DALL-E image generator to show a typical person in the role of ‘a secretary’, again repeating the prompt 10 times. Here the proportion of women increased dramatically, with 9 of the 10 images depicting white women.
When asked why ChatGPT might show such blatant bias towards men in high-powered roles, Ruhi Khan, ESRC researcher at the London School of Economics, explained that ChatGPT:
…emerged in a patriarchal society, was conceptualised and developed mostly by men with their own set of biases and ideologies, and was fed with training data that is also flawed by its very historical nature. AI models like ChatGPT perpetuate these patriarchal norms by simply replicating them.
Incorporating AI without due diligence risks undoing years of progress
It’s now estimated that 70% of companies are using automated applicant tracking systems to find and hire talent. If these systems are trained in similar ways, the findings of this study suggest that women and minorities could suffer significantly in the job market. This is just one of many ways that the clear bias towards white men in senior positions could signal big problems for workplace diversity.
Ruhi Khan added:
Technology is still very masculine, and so is ChatGPT’s user base: 66% of its users are men and 34% are women (Statista, 2023). This means that the unchallenged use of large-scale natural language processing models like ChatGPT in the workplace could be especially detrimental to women.
Khan explained that in her own research she has seen ChatGPT use gendered keywords in its responses about men and women.
Awareness of ethical AI will be crucial
Finder also spoke with AI creative director Omar Karim, to gain his thoughts on how this issue could be resolved. He explained:
AI companies have the facilities to block dangerous content, and that same system can be used to diversify the output of AI. Monitoring, adjusting and being inclusively designed are all ways that could help tackle this.
Khan added that although there are dangers to the adoption of artificial intelligence in the workplace,
this meteoric rise of AI also gives an incredible opportunity for a fresh start. The benefits of ethical AI are lasting and long-term. And so are the AI harms. As awareness of this spreads, the move towards ethical AI will also gain urgency.
To see the full range of images and more expert commentary, visit: https://www.finder.com/uk/stats-facts/gender-bias-in-ai