Superintelligence: Paths, Dangers, Strategies. By Nick Bostrom. Oxford University Press, Oxford, 2014, pp. xvi+328. Hardcover: $29.95/£18.99. ISBN: 9780199678112
Reviews
Nick Bostrom's Superintelligence could be about a God-Machine or Frankenstein-Machine that takes control of humanity for its own perverse purposes. Or the book could be about the dawn of a new age, the historical inevitability of machines smarter than humans in which the human species becomes extinct. Or the book could be a more realistic, less euphoric, and more concise version of Ray Kurzweil's The Singularity is Near: When Humans Transcend Biology (2005). Or Superintelligence could be an updated and improved version of Nietzsche's thesis of the Superman, who transvalues all values and goes beyond good and evil.
I think the important message of Superintelligence is none of the above, and is straightforward: because philosophers of morality have been unable to decide which values are ultimate, unable to explain how values are acquired and whether values are real, and, most crucially, unable to agree on criteria for choosing which values are ultimate, we have no way of teaching very smart systems, systems smarter than humans, goals that would be congenial to humanity. Consequently, we could produce a form of superintelligence in which very smart machines learn to control the universe and decide that humans are no longer needed, or simply transform the universe into a place where the human species goes extinct as collateral damage of the physical and biological changes wrought upon the cosmos by very powerful and knowledgeable superintelligent systems. In other words, because moral philosophy has failed to produce a consensus, even among philosophers, about the nature of ultimate values and goals, we are unable to provide a moral self-guidance component for superintelligent artificial systems. In spite of this failure on the part of philosophy, Bostrom tentatively and cautiously offers techniques (such as coherent extrapolated volition, 211 ff.) that we could implant in superintelligent systems to compensate for the shortcomings of moral philosophy: a values-learning and morality-improvement component that might lessen, if not entirely remove, the existential threat to humanity.
One might wonder...