Mar 27, 2012
 

The cover story in the March 3 issue of New Scientist is entitled "The Deep Future: A Guide to Humanity's Next 100,000 Years".

I found this cover story both shallow and pretentious. As if we could predict even the next one hundred years, never mind a hundred thousand.

They begin with an assurance that humans will still be around 100,000 years from now. They base this on the observation that well-established species tend to hang around far longer than that. True, but… what we don't have in the Earth's prehistory is a species with the technological capability to destroy the planet. This is something new.

So new, in fact, that we cannot draw any far-reaching conclusions. Consider, for instance: nuclear weapons have been around for 67 years. In these 67 years, we managed not to start an all-out nuclear war. Assuming, for the sake of simplicity, that all years are created equal, the only thing we can conclude from this, if my math is right, is that the probability of nuclear war in any given year is 4.37% or less, "19 times out of 20" as statisticians sometimes say. Fair enough… but that does not tell us much about the "deep future". Projected to 100,000 years, all we can tell on the basis of this 67-year sample is that the probability of all-out nuclear war is less than 99.99……99854…%, where the number of 9s between the decimal point and the digit 8 is 1941. Not very reassuring.
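For the curious, the arithmetic behind these numbers is easy to reproduce. Here is a small Python sketch of my back-of-the-envelope calculation (my own reconstruction, of course, not anything published by New Scientist):

import math

years_observed = 67      # years with nuclear weapons but no all-out war
confidence = 0.95        # "19 times out of 20"
horizon = 100_000        # the "deep future" horizon, in years

# Largest annual probability of war consistent with 67 war-free years
# at the 95% confidence level: solve (1 - p)**67 = 0.05 for p.
p_max = 1 - (1 - confidence) ** (1 / years_observed)
print(f"annual probability bound: {p_max:.4%}")        # about 4.37%

# Over 100,000 years, the chance of avoiding war at that bound is
# (1 - p_max)**horizon, which underflows to zero, so work in log space.
log10_no_war = horizon * math.log10(1 - p_max)
print(f"P(no war in {horizon:,} years) is about 10^{log10_no_war:.0f}")

The second number comes out on the order of 10^-1942, which is where that absurdly long string of nines comes from.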

The authors of the New Scientist piece would probably tell us that even if nuclear war did break out, it would not wipe out humanity in its entirety, and they probably have a point; but that misses my point, namely the futility of making a 100,000-year prediction on the basis of, at most, a few thousand years of known history.

And while nuclear war may be a very scary prospect, it is far from the scariest. There are what some call technological singularities: developments in science and technology so profound that they would change the very basis of our existence. Artificial intelligence, for starters… reading about Google's self-driving car or intelligent predictive search algorithms, about IBM's Watson, or even Apple's somewhat mundane Siri, I cannot help but wonder: is the era of true AI finally just around the corner? And when true AI arrives, how far behind is the nightmare of Skynet from the Terminator films?

Or how about genetically altered superhumans? The authors mention this, but only in passing: "unless, of course, engineered humans were so superior that they obliterated the competition." Why is this scenario considered unlikely? Sometimes I wonder whether we are perhaps just one major war away from it: a warring party, in a precarious position in a prolonged conflict, breeding genetically modified warriors. Who, incidentally, need not even look human.

I could go on of course, about “gray goo”, bioterrorism, and other doomsday scenarios, but these just underline my point: it is impossible to predict the course of history even over the next 100 years, never mind the next 100,000. This is true even from a mathematical perspective: exceedingly complex systems with multiple nonlinear feedback mechanisms can undergo catastrophic phase transitions that are almost impossible to predict or prevent. Witness the recent turmoil in financial markets.
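This is not hand-waving; even the simplest nonlinear systems behave this way. As a toy illustration (mine, not the article's), take the logistic map, the standard textbook example of chaos: start two copies a mere trillionth apart, and within a few dozen steps they no longer agree on anything.

# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.3, 0.3 + 1e-12      # two "histories" differing by one part in 10^12
for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if abs(x - y) > 0.1:
        print(f"the two trajectories part company after {step} steps")
        break

If a one-line equation defeats long-range forecasting this badly, a global economy, let alone 100,000 years of human history, hardly stands a chance.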

Surprisingly, this overly optimistic New Scientist feature is very pessimistic on one front: space exploration. They first quote a figure of 115,000 years that would be required to reach Alpha Centauri at 25,000 miles an hour; this, of course, is a typical velocity for a chemically fueled rocket. The possibility of a better technology is touched on only briefly: "Even if we figure out how to travel at the speeds required […], the energy required to get there is far beyond our means". Is that so? They go on to explain that, "[f]or the next few centuries, then, if not thousands of years hence, humanity will be largely confined to the solar system". Centuries, if not thousands of years? That is far, far, far short of the 100,000 years that they are supposed to be discussing.
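Incidentally, that 115,000-year figure is easy enough to verify with grade-school arithmetic (my numbers, assuming a distance of roughly 4.3 light-years to Alpha Centauri):

miles_per_light_year = 5.88e12
distance_miles = 4.3 * miles_per_light_year   # roughly the distance to Alpha Centauri
speed_mph = 25_000                            # typical chemical-rocket speed
hours = distance_miles / speed_mph
years = hours / (24 * 365.25)
print(f"travel time: about {years:,.0f} years")   # about 115,000 years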

I called this cover feature shallow and pretentious, but perhaps I should have called it myopic. In that sense, it is no different from predictions made a little over a century ago, in 1900, about the coming “century of reason”. At least our predecessors back then had the good sense to confine their fortunetelling to the next 100 years.

 Posted at 10:11 am