September 01, 2008

Past a certain age you start to feel like a character in one of those Left Behind books. Remember the plot? True Christians are “raptured” up into heaven at the End Time, their prostheses, glasses, dentures, and IUDs clattering to the floor as their bodies disappear. The less pure in heart (that’d be me, I’m pretty sure) are left behind on Earth to face the Tribulation.

We earliest-boomers can relate. “Left behind” is how we feel when we see kids lost (enraptured!) in whatever it is (don’t ask me) that comes out of an iPod, or “texting” (sorry, no idea) on their cellphones, or enthusing about EEG gaming headsets (I refuse to believe it). They have gone off to some place we can’t get to. We are left behind to face a world increasingly hostile to things we hold dear: newspapers, letters, board games, and live music. A world, in other words, in which the power of the Antichrist waxes stronger by the hour.

There’s some small consolation in the prospect, if it is a prospect, that the kids, and not likely us, will face something far more traumatic: the Singularity! This is the postulated point in the not very distant future when we shall be sharing our living space with entities, either electronic or biological, much smarter than ourselves, entities whose cognitive abilities are in relation to ours as ours are to a chimp’s, or a dog’s, or perhaps (see below) a fruit fly’s.

There are three ways we might encounter superminds. We might:

• Make them.
• Become them.
• Hear them.

To make them, we need to realize the dream of true artificial intelligence, presumably by reverse-engineering the brain, working out a few improvements, and instantiating the result in some kind of electronic device. To become them, we shall have to take control of evolution and crank it up, compressing a million years or so of brain evolution into a couple of generations. To hear them, we need some results from SETI, the Search for Extraterrestrial Intelligence.

Discussions of the Singularity usually set aside that third possibility. For one thing, we have been listening to the stars for half a century now, with no results at all. What has been called the Great Silence is one of the scientific mysteries of our age. Possibly there is nobody out there; or perhaps we are just laboring under some crude misconception about how interstellar civilizations communicate, as if men of the Paleolithic were to stand on the seashore listening for the sound of drums across the ocean. Even if we were to pick up signals, we’d only be listening. As sensational as the encounter would be, it would probably have no direct effect on us. (Though possibilities have been imagined: see here and here.)

Singularitarians mainly concern themselves with the first two possibilities: developments in either computer science or neurobiology leading to artificial minds that are superior to ours. The central notion here is that once such superminds exist on Earth, nothing can be said about the subsequent state of affairs. We can’t even guess at what such cognitive powers would do. Even if we saw it being done, we wouldn’t be able to grasp it, any more than a dog can grasp the plot of a TV sitcom, much less Hamlet. The Popular Mechanics image of the future (personal helicopters, vacations on Titan, mile-high buildings, robo-butlers, the re-growing of amputated limbs) can’t be applied to a future with a Singularity in it. Those are just forward extrapolations of things we currently understand. On the other side of a Singularity are things we don’t understand and can’t. The Singularity is an opaque barrier, beyond which we cannot see.

The general notion of a singularity is not well explained anywhere that I can find, so I’ve attempted my own explanation here.

There are a number of current, well-argued opinions about the Singularity. One rather commonly held is that the whole idea is hogwash. In regard to artificial intelligence, the scoffers point out correctly that hopes for AI have been around as long as hopes for fusion power (or for that matter, SETI), with similar results. They further note, also correctly, that there are deep systemic dissimilarities between our best information-processing devices and brains. Doug Hofstadter, who has been thinking about thinking for forty-odd years (and brought out a new book on the subject last year), declares himself not interested in computers because “They don’t have concepts.”

Option 2 supposes that by some trick of biology, the four or five million years that constitutes the chimp-human gap (all right: to be exact, the gap between humans and the human-chimp common ancestor) can be squinched down to at most a few decades. We wouldn’t have to reverse-engineer the brain to do this, but we would need an understanding of brain evolution several orders of magnitude more detailed than we currently have. And then, in free consensual societies at any rate, there are ethical issues. Even if we could figure out what to do, we might, by common agreement, decide not to do it.

Thus the scoffers. Talk of the Singularity is just wishful thinking, they say: “the Rapture for nerds.” They could of course be right, though it’s worth noting that in a fairly data-free environment like this, the wishful-thinking charge could equally well be leveled against the scoffers. It’s not hard to see why certain personality types would want there to be a Singularity in our near future. It’s also not hard to see why certain other personality types, or persons with certain metaphysical commitments, would, with equal passion, want there not to be.

What do the experts think? (People, that is, who actually know lots of stuff about neuroscience, information theory, and evolutionary biology.)

They’ve been telling us. The Institute of Electrical and Electronics Engineers (IEEE) has been running a forum on the Singularity, and it makes fascinating reading. The whole thing is here. It includes good thumbnail sketches of the main arguments pro and con in poster form here.
There’s a lot of cold water being splashed around in the IEEE forum. Some of the coldest comes from John Horgan, author of the 1999 pop-neuroscience classic The Undiscovered Mind. Says Horgan:

The singularity is a religious rather than a scientific vision … Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world’s problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the singularity.

Neuroscientist Christof Koch throws a wet blanket over that business of reverse-engineering the human brain:

Consider this sobering lesson: the roundworm Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. Back in 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions. Now scale that up to a human brain with its 100 billion or so neurons and a couple hundred trillion synapses…
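To make the scale of that jump concrete, here is a back-of-envelope calculation using the figures Koch quotes (the ratios, and the Python, are mine, not his):

```python
# Rough scale-up from the fully mapped C. elegans nervous system
# to the human brain, using the figures quoted above.
worm_neurons = 302
worm_synapses = 6_000
human_neurons = 100_000_000_000        # ~100 billion
human_synapses = 200_000_000_000_000   # ~a couple hundred trillion

print(f"neuron ratio:  {human_neurons / worm_neurons:.1e}")   # ~3.3e+08
print(f"synapse ratio: {human_synapses / worm_synapses:.1e}") # ~3.3e+10
```

In other words, the human brain is some eight to ten orders of magnitude beyond the one nervous system we have mapped completely and still cannot model.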

It’s not all negativity, though. Christof Koch himself goes on to propose some more promising lines of attack, and to aver that artificial minds will eventually be created. Along the way he gives a very nice run-down on the surprising number of things that are not essential in order for consciousness to be present:

“€¢ sensory input and motor output
“€¢ emotion
“€¢ attention
“€¢ memory
“€¢ self-reflection
“€¢ language

Also on the upbeat side, and apparently undeterred by our failure with roundworms, some folk at the Howard Hughes Medical Institute are tackling the fruit fly brain. This may actually be more promising, as fruit flies are the most studied of all creatures, especially by geneticists. Still, the scale of the project is intimidating. A fruit fly brain is barely visible, about one-eightieth of an inch from side to side. Yet the full mapping of one such speck will need, the researchers are estimating, about a million gigabytes of data storage, and “To get any good data, you’d have to compare hundreds of fruit-fly brains.” Says the project director: “In a hundred years I’d like to know how human consciousness works. The 10- or 20-year goal is to understand the fruit-fly brain.”
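To see what those storage figures amount to, a quick bit of arithmetic (mine, not the researchers’; the brain count of 300 is an illustrative stand-in for “hundreds”):

```python
# "About a million gigabytes" per fruit-fly brain is one petabyte;
# "hundreds of brains" pushes the total into hundreds of petabytes.
GIGABYTE = 10**9                     # bytes
per_brain = 1_000_000 * GIGABYTE     # = 10**15 bytes = 1 petabyte
brains = 300                         # illustrative, not the project's figure

total_pb = per_brain * brains / 10**15
print(f"per brain: 1 PB; {brains} brains: {total_pb:.0f} PB")  # 300 PB
```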

On balance the IEEE forum comes up with more skepticism than enthusiasm. Their poll of ten leading mind-science thinkers and information entrepreneurs turned up:

Singularity will occur? Four “never,” one “distant future,” three “within 70 years,” and one “sort of.”

Machine intelligence will occur? Three “yes,” two “no,” and five “don’t know” or no response.

With all that real expertise on offer, there doesn’t seem much point in a lay person offering his opinion, but of course I will anyway.

In support of the scoffers, it was instructive to compare the eighth biennial “Science of Consciousness” conference (the one I blogged back in April) with the first of the series fourteen years earlier, written up in John Horgan’s aforementioned book. The comparison is not kind to the idea that we are hurtling up an exponential curve in our understanding of mind science. Far from our having any answers, it is still not clear to me that we are asking the right questions. Without knowing what minds are, how do we even know that better ones are possible? Perhaps present human intelligence is some sort of pinnacle, as good as it gets.

On the other hand, neuroscience is hot. Terrifically smart young people like this one are flocking into the field. David Brooks is writing a book about it. How hot is that!? The problems to be tackled are stupendous; think of that 20-year project on the fruit fly brain. The fact that the researchers have set themselves such a long schedule, though, shows that nobody is underestimating the difficulties. The breezy optimism of the early AI researchers I encountered as a student in the 1960s is long gone.

And the Popular Mechanics analogies cut both ways. True, we never did get personal helicopters or mile-high buildings. We did, though, get the internet and instantaneous search engines, which nobody foresaw. Sudden startling breakthroughs can crack the future wide open in unexpected ways.

Will human beings now alive indeed share the world with entities much, much smarter? I am skeptical, mostly about whether we can solve these problems. Perhaps the mind is itself a singularity, like the Big Bang, beyond which knowledge cannot pass. Having seen some of the energy, enthusiasm, and dogged determination that is now flowing into the mind sciences, though, I will offer this admittedly feeble prediction: if superminds can be made, they shall, and given the possibility, seventy years looks about right for the attainment.

So my kids might well live to see the Singularity. Not me, though. Thank goodness! I can’t even keep up with the darn kids.

John Derbyshire is a contributing editor of National Review and the author of, most recently, Unknown Quantity: A Real and Imaginary History of Algebra.
