Letter: Artificial intelligence and natural stupidity

Published 19 April 2017

From Thom Shaw, Perth, Western Australia

Progress in artificial intelligence has depended on computational speed and increasing algorithmic complexity. Some classes of problem in which an AI's inhuman speed can prosper have been acknowledged (for example, by Daniel Dennett, 11 February, p 42).

But the human consequences of real-world AI haven't been addressed. Modelling “subjective experience” will mean going from the real world to analogue measurements to digital models to logical outputs. But as any undergraduate studying computing discovers, problems arise from converting analogue measurements to digital values: there is always a “quantisation” error. And chaos theorists demonstrate how small changes have substantial effects on the output of iterative computations. Multiple conversions would lead to multiple stages of quantisation and data compression. And as system complexity increases, system understanding decreases.
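The point about quantisation error feeding a sensitive iterative computation can be sketched in a few lines of Python. The logistic map below is an illustrative stand-in for a chaotic model, and the 10-bit converter and starting value are my own assumptions, not anything from the letter:

```python
# A minimal sketch (illustrative, not from the letter): a 10-bit
# "ADC" reading of an analogue value, fed into an iterative chaotic
# computation (the logistic map), showing how a tiny quantisation
# error grows into a large output error.

def quantise(x, bits=10):
    """Round x in [0, 1] to the nearest level of a `bits`-bit converter."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

def max_divergence(a, b, r=3.9, steps=50):
    """Iterate the logistic map from two starting points; return the worst gap seen."""
    worst = abs(a - b)
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        worst = max(worst, abs(a - b))
    return worst

true_x0 = 0.123456789            # the "analogue" measurement
digital_x0 = quantise(true_x0)   # its quantised digital model

error_in = abs(true_x0 - digital_x0)   # well under one part in a thousand
worst = max_divergence(true_x0, digital_x0)

print(f"quantisation error: {error_in:.1e}")
print(f"worst divergence over 50 steps: {worst:.2f}")
```

An input error of a few parts in ten thousand, invisible at the measurement stage, grows under iteration until the two trajectories bear no resemblance to one another.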

When personal loyalty or self-esteem bonds some people to the AI, what will become of those who distrust it? The real problems will arise in how humans react.

Issue no. 3122 published 22 April 2017
