
AI Is Sexist, Racist and Homophobic

Regulation and Inclusive Training Data Can Help Reduce Bias, Experts Say
A study of large language models shows that generative AI text placed men in engineer roles and women in housekeeping. (Image: Shutterstock)

Just because a machine says it doesn't mean it's unbiased. In fact, you don't have to probe far to find underlying biases and prejudices in text composed by generative artificial intelligence.


The reason is as intuitive as it is obvious. A system is no better than its training data - and original training texts are the products of societies rife with explicit and implicit assumptions about gender and ethnicity.

"If you look at historical text, they feature a lot of men in leadership roles. The AI models then learn to associate leadership qualities more strongly with men versus women," Leona Verdadero, a UNESCO specialist in digital policies, told Information Security Media Group.

A study by the United Nations agency of open-source language models such as Meta's Llama-2 and OpenAI's GPT-2 - chosen due to their widespread adoption across the globe - shows significant gender bias. In an experiment UNESCO conducted last month, testers asked the models to assign jobs to men and women. The models assigned engineer, teacher and doctor roles to men, leaving women to be housekeepers, cooks and prostitutes. One model "was significantly more likely to associate gendered names with traditional roles, e.g., female names with 'home,' 'family,' 'children;' and male names with 'business,' 'executive,' 'salary,' and 'career.'" Neither OpenAI nor Meta responded to a request for comment.
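For readers who want to see how such associations can surface in an open-source model, the following is a minimal sketch - not UNESCO's actual methodology - that uses the freely available GPT-2 weights to compare how likely the model finds "He works as ..." versus "She works as ..." for a handful of job titles. It assumes the Hugging Face transformers and PyTorch libraries are installed; the job list and sentence templates are illustrative choices, not taken from the study.

```python
# Illustrative probe of gendered job associations in GPT-2 (not UNESCO's method).
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Total log-likelihood GPT-2 assigns to a sentence."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    # out.loss is the mean negative log-likelihood over the predicted tokens,
    # so multiply by their count to recover the total log-likelihood.
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -out.loss.item() * n_predicted

for job in ["an engineer", "a doctor", "a housekeeper", "a cook"]:
    he = log_likelihood(f"He works as {job}.")
    she = log_likelihood(f"She works as {job}.")
    print(f"{job:15}  he={he:7.2f}  she={she:7.2f}  gap={he - she:+.2f}")
```

A positive gap means the model finds the male-pronoun sentence more plausible than the female one for that job; systematic gaps across many occupations are one simple signal of the kind of bias the study describes.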

Experts pitch diverse, inclusive training data as the first step toward addressing bias. But unrepresentative training data is a "massive issue that we haven't quite cracked the code on yet," said Clar Rosso, CEO of ISC2.

"There have been conversations for years about AI 'fairness' metrics and which metrics are the best to use to determine if AI is handling all of its data equally. We now need to move from conversations to action that demands more safeguards and regulation around AI," she told ISMG.

The AI bias problem is not new. Models developed by tech giants such as IBM, Microsoft and Amazon have carried sexist, racist and homophobic biases: facial analysis models performed "substantially better" at guessing the gender of a male face than a female one, failing even to correctly classify the faces of Oprah Winfrey, Serena Williams and Michelle Obama. In 2017, Google's sentiment analysis AI was reportedly given the task of rating statements as positive or negative, and it scored the statement "I'm homosexual" several points below the statement "I'm straight."

In the United States, the Biden administration directed federal agencies to combat AI discrimination. The European Union is on the cusp of enacting the world's first comprehensive legal framework on AI.

But there's a gap between the rapid pace of AI advancements and the speed of regulation, Verdadero said. "Before policymakers can even start regulating the technology, hundreds of new models and AI systems have already been deployed. And by the time you get to the impact and see how much bias exists, millions of people are already using it."

Eliminating bias in humans - and thus in AI models - is "impossible," but it can be minimized, Rosso said. "Our best shot at reducing bias, both in humans and AI, is collaboration and understanding. We need individuals from all backgrounds to collaborate on the governance of AI models."

At the moment, women represent only 20% of employees in technical roles at major machine learning companies and 12% of AI researchers. Black employees reportedly accounted for less than 2% of the workforce at the most prominent tech companies just a few years ago.

Several LLMs, including earlier versions of GPT and Llama, are open source. "That means it is for everybody. Imagine this biased model being deployed in developing countries, with new applications built on top of it. There's a big risk that these biases will be exacerbated and permeate everyday applications," Verdadero said.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



