Letter: Why can't neural networks explain?

Published 5 August 2015

From Tamara Quinn

Your article on neural networks stated that “no one knows how the neural networks come up with their answers” and even that it is “impossible” to know how they do it (11 July, p 20). Could this be because research has, so far, focused on constructing neural networks that produce correct answers, rather than networks that also tell us how they work?

Surely it is possible to add self-monitoring and recording functions, so that the networks can reveal the paths taken to reach the observed outcome. Or is there some fundamental feature of neural networks that means this wouldn’t reveal their secrets?
London, UK

Issue no. 3033 published 8 August 2015
