From Ben Haller
Ah, more hype about Cyc, the artificial brain that will supposedly usher in the dawn of true AI (23 April, p 32). Forgive me if I’m sceptical, but “hoovering up new facts” is not the same as learning, as any college student could testify after spending all night cramming only to fail their exam. After it hoovers up all the various web pages about global warming, for example, and assimilates all the contradictory information about that topic that rains down upon its head, will it have an opinion on the subject that it will be able to lucidly defend? Somehow I doubt it. When we can’t make an AI that can converse for more than a few seconds without making a fool of itself, and we can’t make an AI that can drive itself a few kilometres across the desert, Cycorp founder Doug Lenat’s assertion that we are “less than 10 years” from a singularity is laughable. Cyc may well reach the point where it can regurgitate uncontroversial facts that are logically organised in simple ways. But it will still fail its exam resoundingly.
An interesting perspective on this is provided by the fact that Cyc currently contains 3 million assertions, and that this apparently comes from an attempt to minimise the amount of data it’s “born” with. Contrast this with a newborn baby: it’s hard to say just how many “assertions” a baby is born with, but it seems certain that, unlike Cyc, it is not born with “the entire Linnaean system of taxonomy of plants and animals”, as Cyc apparently will be. To me, this illustrates that AI is not only barking up the wrong tree, but is in the wrong forest. The first successful AI will be like a newborn baby: essentially completely naive, but designed to be so flexible as to be capable of learning any language, assimilating any fact and adapting to any environment. We need to think much harder about the simplicity of the human brain, and about how complexity emerges from that simplicity.
Menlo Park, California, US
