
Letter: Machine learners that just can't explain

Published 16 November 2016

From Alan Bundy

Edinburgh, UK

Brian Horton suggests, possibly in jest, that we find out how people distinguish heroes from villains by letting an artificial intelligence watch thousands of films (Letters, 22 October). But statistical machine learning programs are notoriously unable to explain what they have learned.

If their training is successful, they will work out how to distinguish heroes from villains in previously unseen films, but will not be able to explain how they do it. This is a serious failing for some potential applications. For instance, doctors take professional responsibility for their diagnoses and treatments. They cannot accept the output of a black box without some kind of explanation.
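The point can be made concrete with a short Python sketch (everything here is illustrative: the random "film features" and the scikit-learn random forest are assumptions standing in for a real training set and model, not anything described in the letter). The trained classifier labels a new film readily enough, but the nearest thing it offers to an explanation is a vector of feature importances, not a reason a doctor, or a film critic, could act on.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-in for "thousands of films": each row is one film,
    # summarised by ten made-up numeric features (hypothetical).
    rng = np.random.default_rng(0)
    X = rng.random((1000, 10))                # 1000 films, 10 features
    y = (X[:, 0] + X[:, 3] > 1).astype(int)   # hidden rule: 1 = villain

    # A statistical learner: fits the pattern, stores no rationale.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)

    new_film = rng.random((1, 10))
    print(model.predict(new_film))            # a bare label, e.g. [0]

    # The closest thing to "why": relative weights over anonymous
    # features, which is not an explanation a human can audit.
    print(model.feature_importances_)

Under this sketch's assumptions the model will even recover the hidden rule almost perfectly, yet its only account of itself is that list of numbers, which is precisely the gap the letter describes.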

Issue no. 3100 published 19 November 2016
