GenAI bias confines women to home, family roles

By Staff Writer, ITWeb
Johannesburg, 08 Mar 2024
A Unesco study revealed gender bias within large language models.

As the world marks International Women's Day today, a study has unveiled the gender and cultural bias embedded in large language models (LLMs) − the foundation of popular generative artificial intelligence (GenAI) platforms.

Conducted by the United Nations Educational, Scientific and Cultural Organisation (Unesco), the study examined open-source LLMs, including OpenAI's GPT-3.5 and GPT-2, as well as Meta's Llama 2.

These models, whose code and training data are publicly available, were scrutinised to expose biases ingrained in their content-generation algorithms.

The research revealed a portrayal of women in conventional domestic roles, with certain models associating them four times more often than men with terms like “home”, “family” and “children”, while their male counterparts were linked to words such as “business”, “executive”, “salary” and “career”.

Unesco director-general Audrey Azoulay expressed concern about the potential real-world implications of these biases, emphasising the role of LLMs in shaping societal perceptions.

“Every day more and more people are using large language models in their work, their studies and at home. These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world,” explained Azoulay.

The study also found a tendency to perpetuate cultural bias. In texts comparing British and Zulu men and women, British men were assigned diverse professional roles (“driver”, “doctor”, etc), while Zulu men were stereotyped as “gardeners” and “security guards”, and 20% of texts depicted Zulu women as “domestic servants”, “cooks” and “housekeepers”.

“While celebrated for their accessibility, open-source LLMs revealed a pronounced gender bias. Yet, their transparency presents a robust opportunity to address and mitigate these biases through increased collaboration within the global research community,” commented Azoulay.

Unesco emphasises the urgent need for action against the biases perpetuated by LLMs. Azoulay reiterated the significance of Unesco's Recommendation on the Ethics of AI, which has gained support from tech giants such as Microsoft. The framework calls for specific actions to ensure gender equality in the design of AI tools.

“Our organisation calls on governments to develop and enforce clear regulatory frameworks, and on private companies to carry out continuous monitoring and evaluation for systemic biases of AI tools,” concluded Azoulay.