Whatever the status of Singularity as a media event, premonition radiates from it in a cascade. Hollywood’s recent Johnny Depp vehicle, Transcendence, has already stimulated a wave of response, including commentary by Stephen Hawking (who knows a thing or two about the popularization of scientific topics). An article by Hawking in a major newspaper has brought the downstream chatter to a new level of animation. (My Twitter feed can’t have been the only one clogged to bursting point by it.)
Hawking’s argument, pitched lucidly to a general audience, is that AI is plausible, already demonstrated to some considerable extent, susceptible in theory to radical cybernetic amplification (‘intelligence explosion’), quite possibly calamitous for the human species, and yet to be socially engaged with appropriate seriousness. As he concedes, “it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.”
Explosive dynamics are already evident in the AI development trajectory, which is undergoing acceleration, driven by “an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.”
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a “singularity” and Johnny Depp’s movie character calls “transcendence”.
Hawking employs his media platform to make the case that something should be done:
Success in creating AI would be the biggest event in human history. […] Unfortunately, it might also be the last, unless we learn how to avoid the risks. […] Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.
As its prospect condenses, Technological Singularity is already operative as a cultural influence, and thus a causal factor in the social process. At this stage, however, as Hawking notes, it is still a comparatively limited one. What would be the implications of it coming to matter far more?
Socio-historical cybernetics is compelled to ask: would an incandescent Singularity problem function as an inhibitor, or would it further excite the developments under consideration? It’s certainly hard to imagine a sophisticated pre-emptive response to the emergence of Artificial Intelligence that wouldn’t channel additional resources towards elite technicians working in the area of advanced synthetic cognition, even before the near-inevitable capture of regulatory institutions by the industries they target.
Institutional responses to computer hacking have been characterized by strategically ambiguous ‘poacher turned gamekeeper’ recruitment exercises, and some close analog of such poaching games would be an unavoidable part of any attempt to control the development of machine cognition. Playing extremely complicated betrayal games against virtual super-intelligence could be a lot of fun, for a while…
ADDED: The FHI’s Daniel Dewey is pulled in.