25 September 2014


Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014

The subject of artificial intelligence is of great interest these days, as a result of developments in semi-autonomous machines such as cars capable of driving safely and efficiently to their destinations without any help from human drivers.
The book begins with a brief summary of the development of technology, leading eventually to the invention of computers in the 1940s. It was thought then that the development of computers would quickly lead to the production of intelligent machines. Of course, the technical difficulties proved much greater than the computer pioneers had imagined.

Bostrom sees the further development of computers as eventually leading to superintelligence, which he tentatively defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". This means that superintelligence is a system having superior general intelligence rather than being highly specialised, like a chess-playing program.

Three different possible forms of superintelligence are identified: speed superintelligence, which can do all that human intellect can do but much faster; collective superintelligence, a system composed of a large number of smaller intellects, so that overall performance is much better than that of any other cognitive system; and quality superintelligence, a system at least as fast as the human mind and vastly smarter.

In chapter 4 we are introduced to the concept of an "intelligence explosion". The first phase begins when the system reaches the human baseline for individual intelligence. The second phase begins when it becomes capable of gaining more power from its own resources. Bostrom admits, though, that at present it is unclear how difficult this would be.

He develops his ideas to the point where he considers it possible for a "digital superintelligent agent" to take control of the world. It could become so powerful as to become a singleton, defined as "a superintelligent agent that faces no significant rivals or opposition". The outcome of the transition to a singleton could be very good or very bad. This is related to the control problem: finding procedures to prevent superintelligent agents from getting out of control and thus possibly endangering the continued existence of humanity.

One suggested method of control is to define a set of rules or values which will cause a superintelligent AI (artificial intelligence) to act safely and beneficially. The classic example is Isaac Asimov's "three laws of robotics", intended to prevent robots from harming humans; but, as Bostrom points out, the laws fail in interesting ways, providing fertile plot complications for Asimov's stories.

The discussions in this book are philosophical rather than technical, so we are given no clear idea of how the control of the proliferation and increasing power of AI could be achieved in practice. There is some rather fragmentary discussion of the idea that AIs could become conscious, and thus be said to have moral status. Bostrom remarks that a society of intricate and intelligent AIs, none of them conscious, "would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children".

This work is no doubt an important contribution to serious thought on the subject of the increase in automation and ever more powerful computers and ingenious programming techniques. It is not, however, a book for the casual reader. -- John Harney.

1 comment:

Ross said...

Bostrom is a leading thinker in the so-called "transhumanism" movement.