This is the last installment of a five-part WorldPost series on the world beyond 2050. The series is adapted from the Nierenberg Prize Lecture by Lord Martin Rees in La Jolla, Calif.
The stupendous timespans of the evolutionary past are now part of common culture -- outside fundamentalist circles, at any rate. But most people still tend to regard humans as the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our sun formed some 4.5 billion years ago, but it's got around 5 billion more before the fuel runs out. And the expanding universe will continue -- perhaps forever. To paraphrase Woody Allen, eternity is very long, especially towards the end.
The timescale for developing human-level artificial intelligence may be decades or it may be centuries. Be that as it may, it's but an instant compared to the cosmic future stretching ahead, and indeed far shorter than the timescales of the Darwinian selection that led to humanity's emergence.
There must be chemical and metabolic limits to the size and processing power of "wet" organic brains. Maybe we're close to these already. But fewer limits constrain electronic computers -- still less, perhaps, quantum computers. For these, the potential for further development over the next billion years could be as dramatic as the evolution from Precambrian organisms to humans. So, by any definition of "thinking," the amount and intensity that's done by organic human-type brains will be utterly swamped by the future cogitations of AI.
Moreover, the Earth's environment may suit us organics, but it isn't optimal for advanced AI -- interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological "brains" may develop powers -- and a level of scientific achievement -- that humans can't even imagine.
This scenario suggests to me, incidentally, that if the Search for Extraterrestrial Intelligence Institute were ever to detect some signal that was manifestly artificial -- and none of us is holding our breath for this, of course -- it would most likely come from some free-floating inorganic "brain" rather than from a civilization on an Earth-like planet.
So, even in this "concertinaed" timeline -- extending billions of years into the future, as well as into the past -- this century may be a defining era. The century when humans jump-start the transition to electronic -- and potentially immortal -- entities that eventually spread their influence far beyond the Earth and far transcend human limitations. Or, to take a darker view, the century where our follies could foreclose this immense future potential.
It's probably a good thing that I've no time to speculate further beyond the flaky fringe. So let's return to the here and now.
One lesson I'd draw from the issues I've raised in this series is this. We fret unduly about small risks -- air crashes, carcinogens in food, low radiation doses, etc. But we're in denial about some newly emergent threats, which may seem improbable but whose consequences could be globally devastating. Some of these are environmental, others are the potential downsides of novel technologies.
We mustn't forget an important maxim: the unfamiliar is not the same as the improbable.
These near-existential threats surely deserve expert analysis -- to assess which can be firmly dismissed as science fiction and which could conceivably become real; to consider how to enhance resilience against the more credible ones; and to warn against technological developments that could run out of control.
To this end, we've founded in Cambridge a group with just such aims, and there are a few similar initiatives elsewhere. The stakes are so high that even if these groups can reduce the probability of catastrophe by one part in 1,000, they'll have earned their keep.
Obviously, dialogue with politicians can help. But scientists who've served as government advisors have often had frustratingly little influence.
Politicians are, however, influenced by their inbox and by the press. Experts can sometimes achieve more as scientific citizens and activists via widely read books, campaigning groups or blogging and journalism. They have an obligation to engage -- to inform and enrich public debate. But they should always be mindful that on the economic, social and ethical aspects of any policy, they speak as citizens and not as experts.
If scientists' voices are echoed and amplified by a wide public, and by the media, long-term global causes will rise on the political agenda.
Those based in universities have the special privilege of influencing successive generations of students from many nationalities.
Opinion polls show, unsurprisingly, that younger people, who expect to survive most of the century, are more engaged with and anxious about long-term and global issues. What should be our message to them?
It's surely that there's no scientific impediment to achieving a sustainable world, where everyone enjoys a lifestyle better than people in the West do today. We live under the shadow of new risks, but these can be minimized by a culture of responsible innovation -- especially in fields like biotech, advanced AI and geoengineering -- and by reprioritizing the thrust of the world's technological effort.
So we can be technological optimists. But intractable politics and sociology engender pessimism. The scenarios I've described in this series -- environmental degradation, unchecked climate change and unintended consequences of advanced technology -- could trigger serious, even catastrophic, setbacks to our society. They have to be tackled internationally. And there's an institutional failure to plan long-term and to plan globally.
"Spaceship Earth" is hurtling through the void. Its passengers are anxious and fractious. Their life-support system is vulnerable to disruption and breakdowns. But there is too little planning, too little horizon-scanning, too little awareness of long-term risk.
It would surely be shameful if we persisted in unsustainable policies that bequeathed to future generations a depleted and hazardous world. Wise choices will require the effective advocacy of natural scientists, environmentalists, social scientists and humanists -- all guided by the knowledge of what 21st century science can offer and inspired by values that science alone can't provide.
I conclude with my favorite quote from the great immunologist Peter Medawar: "The bells that toll for mankind are ... like the bells of Alpine cattle. They are attached to our own necks, and it must be our fault if they do not make a tuneful and melodious sound."