Analysis and Technology

How can we prevent AI from being racist, sexist and offensive?

Artificial intelligences continue to exhibit the same biases and prejudices as humans because they are trained on what we create, but there are ways we can improve the situation

By Matthew Sparkes

17 June 2022


Artificial intelligences learn from what humans write on the internet, so they build up pictures of the world that may be biased

Colin Anderson Productions pty ltd/Getty Images

Stories of artificial intelligences exhibiting racist and sexist bias are common, including face recognition algorithms struggling to work for Black people and tools assessing whether a convicted criminal will reoffend treating white people more leniently. Despite years of efforts to make AI fair, these issues don’t seem to be going away, so what can be done about them?

Current AI research is focused heavily on machine learning, in which a model is…
