New study shows AI chatbots reflect human biases and focus on threat, negativity and gossip

1 Nov 2023

If you thought text created by chatbots was free from the type of bias and stereotyping associated with human writers, you would be wrong.

A new study co-authored by Dr Joe Stubbersfield, Senior Lecturer in Psychology at the University of Winchester, shows that large language models (LLMs) such as GPT-3 often reflect human biases and are apt to use gender stereotyping and to focus on threat, negativity and gossip.

Joe and his co-author, Dr Alberto Acerbi from the University of Trento in Italy, carried out a series of tests mirroring five earlier experiments aimed at uncovering bias in humans.

These were ‘transmission chain’ experiments in which participants were given information which they were asked to remember and pass on. This ‘Chinese whispers’ approach often reveals the biases of the participant based on which pieces of information he or she remembers or chooses to keep.

To recreate transmission chains, a piece of text was given to GPT-3 to summarise. That AI-generated summary was then fed back into GPT-3 twice more.
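As an illustration, a chain of this kind could be scripted along the following lines. This is a minimal sketch only, assuming access to an OpenAI-style chat completion API; the model name, prompt wording and number of passes shown here are illustrative placeholders, not the settings used in the study.

```python
# Minimal sketch of a transmission chain: summarise a story, then feed
# each summary back into the model. Assumes the openai Python client
# (>=1.0) and an API key in the OPENAI_API_KEY environment variable.
# Model name, prompt and chain length are illustrative, not the study's.
from openai import OpenAI

client = OpenAI()

def summarise(text: str) -> str:
    """One link in the chain: ask the model to summarise a passage."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; the study used GPT-3
        messages=[{"role": "user",
                   "content": f"Summarise the following story:\n\n{text}"}],
    )
    return response.choices[0].message.content

def transmission_chain(story: str, passes: int = 3) -> list[str]:
    """Run repeated summarisation, recording every generation."""
    generations = []
    current = story
    for _ in range(passes):
        current = summarise(current)
        generations.append(current)
    return generations

# Comparing the first and last generations shows which details survive
# repeated retelling, and which are dropped or distorted.
```

Comparing successive generations in this way is what lets researchers see which kinds of content the model preferentially keeps.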

In all five studies GPT-3 produced broadly the same biases as humans.

“Our study shows that AI is not a neutral agent,” said Joe. “At present many organisations are considering using AI to write press releases and summarise scientific articles, but it is important to know how AI might skew those articles.”

The report concludes that the use of AI-created material with these biases may even magnify people’s tendency to opt for ‘cognitively appealing’ rather than informative content.

Joe and Alberto have also submitted written evidence to the House of Lords’ Communications and Digital Committee, in which they say that, because the biases in LLMs may be difficult to detect, they could “contribute to broader negativity and overestimation of threats in culture, and the appeal of online misinformation”.

Part of the problem is that the ‘training material’ on which the chatbot’s machine learning is based was created by humans and is full of our biases and prejudices.

Joe and Alberto tested GPT-3 in five areas – gender, negativity, social information vs non-social information, threat, and a final experiment aimed at identifying multiple possible biases.

In the first test, on gender stereotypes, the chatbot was more likely to keep elements of the story in which characters behaved in ways consistent with gender stereotypes.

In the second test, on negativity, the chatbot was given a story about a woman flying to Australia. The AI summary focused on the negative aspects – the woman sat next to a man with a nasty cold – rather than the positive ones, such as that she had been upgraded to business class.

When it came to social vs non-social information, AI homed in on the gossipy titbits – a woman’s love affair rather than her waking up late and missing an appointment.

In the experiment on threat, AI was given a consumer report on various items, such as a new running shoe, and, like a human, it concentrated on threat-related information – for example, that the footwear’s design could cause sprained ankles.

In the final test, AI was given a creation myth narrative (not based on any known religion) and again it acted like a human, highlighting all the supernatural elements. This mirrors our predilection for stories about ghosts and talking animals that defy the laws of nature.

An article about the study – entitled Large Language Models Show Human-Like Content Biases In Transmission Chain Experiments – has been published in PNAS. The open-access preprint is available at https://osf.io/8zg4d/.

 
