This is the last installment of a five-part WorldPost series on the world beyond 2050. The series is adapted from the Nierenberg Prize Lecture by Lord Martin Rees in La Jolla, Calif.
The stupendous timespans of the evolutionary past are now part of common culture -- outside fundamentalist circles, at any rate. But most people still tend to regard humans as the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our sun formed some 4.5 billion years ago, but it's got around 5 billion more before the fuel runs out. And the expanding universe will continue -- perhaps forever. To paraphrase Woody Allen, eternity is very long, especially towards the end.
The timescale for developing human-level artificial intelligence may be decades or it may be centuries. Be that as it may, it's but an instant compared to the cosmic future stretching ahead, and indeed far shorter than the timescales of the Darwinian selection that led to humanity's emergence.
There must be chemical and metabolic limits to the size and processing power of "wet" organic brains. Maybe we're close to these already. But fewer limits constrain electronic computers -- still less, perhaps, quantum computers. For these, the potential for further development over the next billion years could be as dramatic as the evolution from Precambrian organisms to humans. So, by any definition of "thinking," the amount and intensity that's done by organic human-type brains will be utterly swamped by the future cogitations of AI.
Moreover, the Earth's environment may suit us organics, but it isn't optimal for advanced AI -- interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological "brains" may develop powers -- and a level of scientific achievement -- that humans can't even imagine.
This scenario suggests to me, incidentally, that if the Search for Extraterrestrial Intelligence Institute were ever to detect some signal that was manifestly artificial -- and none of us is holding our breath for this, of course -- it would most likely come from some free-floating inorganic "brain" rather than from a civilization on an Earth-like planet.
So, even in this "concertinaed" timeline -- extending billions of years into the future, as well as into the past -- this century may be a defining era. The century when humans jump-start the transition to electronic -- and potentially immortal -- entities that eventually spread their influence far beyond the Earth and far transcend human limitations. Or, to take a darker view, the century where our follies could foreclose this immense future potential.
It's probably a good thing that I've no time to speculate further beyond the flaky fringe. So let's focus back closer to the here and now.
One lesson I'd draw from the issues I've raised in this series is this. We fret unduly about small risks -- air crashes, carcinogens in food, low radiation doses, etc. But we're in denial about some newly emergent threats, which may seem improbable but whose consequences could be globally devastating. Some of these are environmental, others are the potential downsides of novel technologies.
We mustn't forget an important maxim: the unfamiliar is not the same as the improbable.
These near-existential threats surely deserve expert analysis -- to assess what can be dismissed firmly as science fiction and what could conceivably become real; to consider how to enhance resilience against the more credible threats; and to warn against technological developments that could run out of control.
To this end, we've founded a group in Cambridge with just such aims, and there are a few similar initiatives elsewhere. The stakes are so high that even if these groups can reduce the probability of catastrophe by one part in 1,000, they'll have earned their keep.
Obviously, dialogue with politicians can help. But scientists who've served as government advisors have often had frustratingly little influence.
Politicians are, however, influenced by their inbox and by the press. Experts can sometimes achieve more as scientific citizens and activists via widely read books, campaigning groups or blogging and journalism. They have an obligation to engage -- to inform and enrich public debate. But they should always be mindful that on the economic, social and ethical aspects of any policy, they speak as citizens and not as experts.
If scientists' voices are echoed and amplified by a wide public, and by the media, long-term global causes will rise on the political agenda.
Those based in universities have the special privilege of influencing successive generations of students from many nationalities.
Opinion polls show, unsurprisingly, that younger people, who expect to survive most of the century, are more engaged and anxious about long-term and global issues. What should be our message to them?