It is no longer a question of whether machines will surpass humankind’s general intelligence, but when. And more importantly, as Bostrom points out, whoever builds the first superintelligence gets to make the first move, so that move must be well considered. Citing mathematician I. J. Good, Bostrom’s Superintelligence reminds us that “the first ultraintelligent machine is the last invention that man need ever make.” I particularly like Bostrom’s own simile for humankind’s inevitable dilemma:
> …As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.
Bostrom’s Superintelligence is a fascinating read. In it, he offers easily digestible introductions to many topics in the AI discussion, including:
- neuromorphic versus synthetic AI design,
- recursive self-improvement and goal systems,
- whole brain emulation,
- iterated embryo selection,
- the kinetics of an intelligence explosion,
- decisive strategic advantage,
- the wise-singleton sustainability threshold,
- perverse instantiation,
- capability control methods,
- and coherent extrapolated volition.
And that’s just to name a few. From beginning to end, the book is filled with fantastic ideas and unique perspectives.
About halfway through, Bostrom begins to tackle the big question: will an existential catastrophe be the default outcome of an intelligence explosion? Will humankind go the way of the horse, whose population collapsed after automobiles and tractors displaced it?
In the words of Bostrom, “what matters is not only whether a technology is developed, but also when it is developed, by whom, and in what context.” Among the considerations he puts forth, Bostrom notes that the two objectives we should focus on to reduce the risks of the machine intelligence revolution are “strategic analysis” and “capacity-building.”
We are not yet ready for an intelligence explosion, and Nick Bostrom’s Superintelligence makes that perfectly clear. But not all hope is lost: Superintelligence outlines the work before us. Even for those who dismiss an “AI takeover” as science fiction, unlikely or irrelevant, the book remains a worthwhile read, engrossing and enchanting, and may change the mind of even the most devout skeptic.