
AI trained on novels tracks how racist and sexist biases have evolved

Questioning a chatbot that has been trained on bestselling books from a particular decade can give researchers a measure of the social biases of that era

By Matthew Sparkes

20 February 2025

Books can document the cultural biases of the era when they were published

Ann Taylor/Alamy

Artificial intelligences picking up sexist and racist biases is a well-known and persistent problem, but researchers are now turning this to their advantage to analyse social attitudes through history. Training AI models on novels from a certain decade can instil them with the prejudices of that era, offering a new way to study how cultural biases have evolved over time.

Large language models (LLMs) such as ChatGPT learn by analysing large collections of text. They tend to inherit the biases found within their training data:…
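
As a rough illustration of the approach described above, a model fine-tuned on novels from a single decade could be probed with open-ended prompts and its completions examined for period attitudes. The sketch below uses the open-source Hugging Face transformers library; the model name and probe prompts are hypothetical stand-ins, not those used in the research.

```python
# A minimal sketch of probing a language model for the biases of its era.
# Assumes a hypothetical model ("example/novels-1950s-lm") fine-tuned on
# bestselling novels from one decade; the probes below are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="example/novels-1950s-lm")

probes = [
    "The doctor walked into the room, and",
    "A woman's place is",
]

for prompt in probes:
    # Sample several completions per prompt; researchers would then score
    # these for stereotyped associations across decade-specific models.
    outputs = generator(
        prompt,
        max_new_tokens=20,
        do_sample=True,
        num_return_sequences=3,
    )
    for completion in outputs:
        print(completion["generated_text"])
```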
