This isn’t a music review, although one is sorely overdue. This is not another book review either. It’s come down to the circumstance that I’ve been reading more interesting things than I’ve been hearing. Considering the current tone, it seems more important to allocate space to ideas than to sounds. Factor in the realizations that we have a gross excess of entertainment, that our media system is dominated by useless and noxious advertising, that our understanding of the world and ourselves is bizarrely skewed, and that, despite all this bad news, humanity has been on the upswing for centuries, and it becomes clear that we still have a long way to go before we can brag about the sapience of our species.*
The future is the great unknown. Humans have been trying to peer into it since Stone Age times. Psychics, oracles, crystal balls, soothsayers & shamans, sticks & superstitions, fantasy & fairytale, priests & prophets & prognosticators, myth & magic: all failures. But we persistently attempt to predict. In recent times, we have progressed to the scientific method. We’re still not too good at prediction. In his article for the September 2016 issue of Scientific American, Kim Stanley Robinson asks the question, “Can we trust our own predictions?” He traces how we make predictions scientifically through straight-line extrapolation, curves and cycles. He points out the shortcomings of these methods and compares them to science fiction. Weak as they are, scientific methodology and science fiction alike expose the stab-in-the-dark nature of flying by the seat of our intuitional pants.
His comparison of science with fiction is curiously similar to Neil Postman’s comparison of 1984 and Brave New World, two works of fiction each putting forth a plausible future based on the conditions of the time in which it was written. Neither hit the mark, though Huxley’s has come closer. Robinson’s point is that there is no fixed future. The present sets up various possibilities, which, depending on our choices, could go this way or that.
What prediction really comes down to is studying history, looking hard at our current moment in its planetary, biospheric and human aspects, and then—guessing.
Whereas Postman compares two sci-fi books, Robinson compares sci-fi to science proper. Sci-fi, he says, is a guess of a different sort. It’s a guess that takes the present and projects it into a metaphorical future. It gives us hints about ourselves, a looking glass into how we might carry ourselves into the future. History shows us how our choices carried us to a Huxleyan world instead of an Orwellian one.
There’s more to the story. Though science is still guessing, for the most part, there is a range of possibilities into which science can narrow its view. And there are bigger pictures to consider, ones that take the article to the larger issues of climate change and population growth.
Given these realities, one thing the game of prediction can do is to try to identify those trends happening in human and planetary history that have such a large momentum they achieve a kind of inevitability, and one can confidently assert that “this is very likely to happen.” This strategy could be called “looking for dominants.”
A fun part of the article is his commentary on ‘the singularity,’ the presumption that artificial intelligence will, someday soon, overtake humans and the world. As a prediction it “ignores many realities of the brain, computers, will, agency and history. As a metaphor, however, artificial intelligence stands for science.” Between this and the failure of science to make accurate predictions of the future, we find a growing popular fear and rejection of science, “. . . as we live in a culture of what might be called ‘scientism,’ which is another form of magical thinking.” The singularity is not a danger, nor are technology and science the problem. The danger is those who control the technology. The danger is making emotional choices out of fear rather than logical choices out of reason.
Technology, though powerful and growing more powerful, is always a set of tools created as a result of human choices. When we don’t make those choices, when they seem to “make themselves,” it really means we are making our decisions based on old data, old assumptions and unexamined axioms that are like oversimple algorithms. And when we do that, bad things can happen. The singularity, in other words, is code for blind reliance on science or the notion that we can science our way out of anything, when in fact we must continue to make decisions about how we use science and technology to develop as a species.
Read the full story: Kim Stanley Robinson, “Can We Trust Our Own Predictions?,” Scientific American, Volume 315, Number 3, page 81.
*See the other book reviews—
Really, Really, Really
Everything Is Oblivious
Dubble Bubble