December 26, 2013
The agent of its happening, says Barrat, will be the artificial intelligences (AI) we are beginning to create. We are, he says, a good part of the way to Artificial General Intelligence (AGI): machines competitive with ourselves in intellectual abilities, including self-awareness, intentionality, and guile: as in, the kind of guile needed to deceive us as to its abilities. Acting dumb could be an excellent AGI survival strategy.
And:
Since the design of machines is one of [humanity’s] intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.
Like the characters in a Vernor Vinge novel, we’d be sharing spacetime with ASI, Artificial Superintelligences: strange, incomprehensible “powers” who are no more interested in us than we (with a few exceptions) are in ants.
This is one of those books that anticipates and answers all the objections that come to mind when you read a synopsis of it. Since AGI will be our creation, why don’t we just design it to be friendly? Aren’t self-awareness and intentionality uniquely human? What about the Chinese Room? And Moravec’s Paradox? (In AI, the hard things are easy, the easy ones hard. A computer that plays grandmaster-level chess? Easy. One that knows a dog from a cat on sight? Hard.) This book touches all bases.
Barrat, who makes science-themed documentary films for a living, comes in for some scorn in the Amazon reviews for reporting on a field in which he has no credentials. Pshaw: Science journalists do this all the time and are often more enlightening about the specialties they report on than are the specialists.
I thought Barrat did a good job. In addition to telling us what he thinks, he takes care to tell us what the experts think. At a conference of people active in AI research:
The breakdown was this: 42 percent anticipated AGI would be achieved by 2030; 25 percent by 2050; 20 percent by 2100; 10 percent by 2200; and 2 percent never.
He adds: “I got grief for not including dates before 2030.”
I can claim some slight acquaintance with this field, having attended a discussion group on AI spawned by a course in Mathematical Logic I took during my last year at university most of a lifetime ago. One of the first things I ever published touched on the problems we might have sharing our planet with AGI. (That was in the journal of the college’s Humanist Society, which I see still exists. Possibly my undergraduate lucubrations are in their archives.)
As I said, there’s an issue of temperament here. Some will scoff at the prospect of robo-wars; some will tremble. Who’s right, the scoffers or the tremblers? We’ll soon find out.