Technology

Why AI resorts to stereotypes when it is role-playing humans

AI chatbots role-playing as humans often produce stereotyped and offensive responses, a flaw rooted in how large language models attempt to portray demographic identities

By Jeremy Hsu

18 February 2025

AI models struggle to mimic people with particular demographic identities

whitebalance.oatt/Getty Images

Artificial intelligence models from OpenAI and Meta often resort to simplistic and sometimes racist stereotypes when prompted to portray people of certain demographic identities – a notable flaw at a time when some tech companies and academic researchers want to replace humans with AI chatbots for some tasks.

Companies such as Meta have already tried boosting engagement on social media platforms like Facebook and Instagram by deploying AI chatbots that mimic human profiles and respond to people’s posts. Some researchers have also explored using AI chatbots to simulate…
