Make it Stop II

Autonomous Weapons: an Open Letter from AI & Robotics Researchers (with huge list of signatories):

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

This is an important document, one that is bound to be influential. If the orchestrated collective action of the human species could in fact stop a militaristic AI arms race, however, it could stop anything. There’s not much sign of that. Global coordination in the direction of explicit political objectives is inaccessible. The process is already “beyond meaningful human control”.

Arms races — due to their powerful positive feedback — are the way threshold events happen. Almost certainly, the terrestrial installation of advanced machine intelligence will be another instance of this general rule. Granted, it’s not an easy topic to be realistic about.

(‘Make it Stop’ I was devoted to the same futile hope.)

ADDED: At The Verge (with video).

19 thoughts on “Make it Stop II”

  1. Oh come on. Robots still cannot drive cars. Driverless cars are the wave of the future and always will be. Still less can we have robot warriors.

    Yes, we can, and should, have remote-controlled assassin drones that can handle the occasional interruption in communications with their controller without crashing and burning. But autonomous? We are as far from that as we always have been.

    • “We are as far from that as we always have been.” — That has to be hyperbole, unless you think there’s a mystical reason for the impossibility of synthetic intelligence.

      • I don’t think artificial consciousness is impossible in principle. It is just that we have absolutely no idea how to do it, and we are not getting any closer to having any idea how to do it. Self-driving cars rely on an extremely detailed map of the world that is pre-annotated by human beings. When stuff happens on the roads that is not covered by their map, they cannot recognize what they are seeing, and become confused.

        • With the right (auto-catalytic) genetic algorithm, we no more need to know how to do it than nature knew how to do us.

          In other words — as Moravec has always said — it’s a hardware problem. Once enough cycles can be diverted into groping about in the dark, it becomes inevitable.
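A minimal sketch of the kind of blind evolutionary search being invoked here: a toy genetic algorithm that evolves a bit-string using nothing but a fitness score, with no knowledge of the solution built in. All parameter values are illustrative, not drawn from Moravec or the thread:

```python
import random

def evolve(fitness, genome_len=32, pop_size=50, generations=300,
           mutation_rate=0.02, seed=0):
    """Toy genetic algorithm: blind variation plus selection.
    Knows nothing about the solution except its fitness score."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = [g ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" toy fitness: count of 1-bits. The loop gropes its way toward
# the all-ones genome without ever being told what it is looking for.
best = evolve(fitness=sum)
print(sum(best))  # at or near the maximum of 32
```

The dispute in the thread is about cost, not mechanism: this loop solves a 32-bit toy in milliseconds, while Bostrom’s orders-of-magnitude argument concerns scaling the same blind procedure to something brain-sized.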

          • “Genetic algorithm” is just an admission that one has sinned, abandoned hope and entered the gates of computational hell.

            Bostrom, on “argument from evolutionary algorithms”:

            “The computing resources to match historical numbers of neurons in straightforward simulation of biological evolution on Earth are severely out of reach, even if Moore’s law continues for a century. The argument from evolutionary algorithms depends crucially on the magnitude of efficiency gains from clever search, with perhaps as many as thirty orders of magnitude required.”

            Note the generous heuristic discounts that are factored in for selecting a familiar form of intelligence. Groping about in spaces of alien intelligence would be even more hopeless.


          • Bostrom is talking about a fairly fine-grained simulation of the entire biosphere, which is a perversely roundabout route to take. Far more economical to populate the search space with algorithmic primitives, with the intensity of evolutionary dynamics (variation-selection cycles) massively dialed up.

          • Not really. Bostrom is talking about evolutionary simulation of the nervous system, not the whole biosphere (i.e. a brain in a vat). Which is not too bad a simplification: all estimates of the computational complexity of evolving intelligence are groping in the dark anyway. That doesn’t mean groping in the dark will find anything resembling a light switch.

            “Algorithmic primitives” would suggest some kind of evolutionary inductive logic programming. Sure, if one chooses the primitives very carefully and monkeys around with the evolutionary parameters (a lot), interesting things may evolve:


            But every attempt at meta-generalizing this (evolutionary computation optimizing evolutionary computation) is just self-immolation by combinatorial explosion.

            The depressing truth of AI is that there’s always a man behind the curtain, choosing the right building blocks and twirling all sorts of knobs until the house seemingly builds itself.

    • Is intelligence even a requirement for autonomy? Dumb computers beat smart humans at chess (and Go) without any hand-holding. Artificial neural networks aren’t even an approximation of biological ones, but surpass humans at image recognition once trained. We may be nearing the point where an autonomous war robot becomes an engineering problem.

      The building blocks are here: classical AI to outgame humans, neuromorphics to outpattern them. Add some new materials science for robotic muscles and other effectors, and humans stand no chance: too fragile, expensive and slow.

      One has to look at the whole assemblage. Even relatively small quantitative advances in computing power, or one smart idea, can make humans obsolete fast (deep learning on GPUs as an example of the former, Monte Carlo tree search of the latter).
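On the “one smart idea” example: the heart of Monte Carlo tree search is the one-line UCB1 rule, which trades a move’s observed payoff against an exploration bonus for under-sampled moves. A sketch on a toy three-armed bandit standing in for MCTS move selection (the win probabilities are invented for illustration):

```python
import math
import random

def ucb1(wins, visits, total_visits, c=math.sqrt(2)):
    """UCB1 score: average payoff plus an exploration bonus that
    shrinks as a move gets sampled more often."""
    if visits == 0:
        return float("inf")            # untried moves are tried first
    return wins / visits + c * math.sqrt(math.log(total_visits) / visits)

# Toy bandit: three "moves" with hidden win probabilities.
rng = random.Random(1)
hidden_win_prob = [0.2, 0.5, 0.8]      # unknown to the selector
wins = [0.0, 0.0, 0.0]
visits = [0, 0, 0]
for _ in range(2000):
    total = sum(visits) + 1
    arm = max(range(3), key=lambda i: ucb1(wins[i], visits[i], total))
    wins[arm] += rng.random() < hidden_win_prob[arm]
    visits[arm] += 1
print(visits)  # the best move should end up with the most visits
```

The point is how little machinery the “smart idea” needed: no model of the game, just bookkeeping plus one formula.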

      Another example: marry the latest computer vision with old AI planning. The offspring may put lots of people out of work: an assembly robot which learns how to put things together from videos. Or cook …

  2. Don’t think such machines need to be “smart” to be dangerous, just powerful (look at machine trading on Wall Street). As for researchers not wanting to be part of such projects: all that’s absurd. If you build it, they will use it. Grow up, people…

  3. It will be hard to cauterize the open head wound caused by the stupidity of this open letter, but this short story by Peter Watts might do:

    “An ethically-infallible machine ought not to be the goal. Our goal should be to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behaviour or war crimes.” – Lin et al, 2008: Autonomous Military Robotics: Risk, Ethics, and Design

    “[Collateral] damage is not unlawful so long as it is not excessive in light of the overall military advantage anticipated from the attack.” – US Department of Defence, 2009

    SPOILER: It would not recognize itself in a mirror.

  4. Why even bother with consciousness? Why not asignifying machines that bypass the linguistic structures which support the whole division of representation to begin with? Hell, code is itself asignifying, procedural or object-based. Why would AI need to be anything like human modes of thinking, which aren’t that productive to begin with compared to even the basic statistical systems used in stock markets at the moment? This whole idea that AI would need to be conscious is absurd. AI will take another path entirely. Self-reflexive mirroring is not that path.

    I would think the study of insects would be more productive in building AI. Biomechanics and the study of insects will more than likely be the better path toward an actual intelligence of use. The whole point is to first produce a sensorium that allows environmental detection and puzzle-solving abilities. One does not need perception to do this. With advanced machinic processes new senses could be established, a new heretical empiricism (Pasolini), non-signifying and anti-representational. Think of just one thing: sonar – bats and porpoises. So many other natural processes bypass consciousness as we know it or use it.

    We seem to be bound to this old tomb of consciousness as if it were a fetish doll. It’s just excess baggage, a minor bit player left over from our days chasing gazelle. Environmental hijinks. Then our accountants developed language and all the trouble began.

    • Well, killer robots (10 years down the road) certainly won’t need consciousness to kill humans and break things much more efficiently than humans and human-controlled machinery. Then again, a nuke does it even better.

      But one needs conscious humans to design them and there is no definitive evidence that consciousness is excessive, just conjecture (and Zeitgeist).

      There’s a lot of noise about statistical AI and its effectiveness. Deep neural networks will ‘learn’ to classify textual entities if you feed them millions of examples *. A child will only need a few examples to understand ** a new concept. Doesn’t this show that even a baby intelligence is vastly more effective (and far beyond what current AI can do, or even realistically imagine)?

      And I don’t think ‘embodied cognition’ explains intelligence. Does it explain anything at all?

      ** Note how slanted the language of the above article is. Entity classification is not ‘understanding’. It seems that the old AI bullshit of assigning ‘knowledge’ to symbols and mathematical models is alive and well. Seems like a persistent blind spot in human metacognition.

      • As I said: why does it need any form of human modes of intelligence? Language is a social intervention into the human; many child psychologists over the past 60 years have shown how babies start as a-signifying systems that work through non-representational forms of exploratory negotiation of self and environment. AI does not need human linguistic systems: this is an imposition onto machines – an erroneous intervention. Again, why produce consciousness when you don’t need it? Even Nietzsche suggested that long ago. AI doesn’t need to be self-reflexive – that’s been an erroneous pursuit for years. Look at most other predatory species: they do just fine without all these symbolic forms. I rest my case. Build swarms of insects: give them central control through distributed intelligence and learning algorithms, memory and operative revisioning (working on memory and environment: compare, collate, adapt) rather than self-reflexivity.
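A minimal illustration of the insect-style coordination being proposed: ant-like agents choosing between two paths by depositing and following “pheromone”, with no individual agent modelling anything. All numbers are invented for the sketch:

```python
import random

rng = random.Random(0)
length = {"short": 1.0, "long": 2.0}      # the short path is twice as fast
pheromone = {"short": 1.0, "long": 1.0}

for _ in range(2000):
    # Each "ant" picks a path in proportion to pheromone strength...
    total = pheromone["short"] + pheromone["long"]
    path = "short" if rng.random() < pheromone["short"] / total else "long"
    # ...and reinforces it: shorter trips mean more frequent deposits.
    pheromone[path] += 1.0 / length[path]
    for p in pheromone:                    # evaporation forgets stale trails
        pheromone[p] *= 0.99

print(pheromone)  # does the colony settle on the short path?
```

The colony-level “decision” lives in the environment (the trail), not in any agent: memory and “operative revisioning” without self-reflexivity.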

        • Doesn’t some form of self-reflexive intelligence come before language, with body mapping? For example, land animals with rigid joints have centralised body mapping, while the octopus seems to use something distributed to avoid entangling itself. But in both cases, it results in reflexive and adaptable behaviour. I really don’t think self-reflexivity is an optional feature.

          Sure, the meaning of ‘reflexivity’ or ‘language’ depends on how ‘intelligence’ is embodied, but robotic subsumption architectures are too behaviouristic. They put all the important stuff in the black box, then ignore it.

          On second thought, that’s exactly the core problem of AI: our cognition (from the very base level to the symbolic one) is a black box to us (it could be argued, even a Chinese Room).

          • It’s not self-reflexive, which implies a mirroring of the first-person singular against a representational construct, image-copy, etc. All animals have a certain amount of environmental reflexivity: interactive, relational, systemic modes, but these are not the same as self-reflexivity in the sense of knowing that one knows. Or hearing that one hears, as in Shakespeare’s Edmund, who overhears his own arguments and thereby overcomes his nihilism by actually self-reflecting on his own voice, realizing thereby that the other does indeed exist separate from my self, therefore “I, Edmund, love.” etc. It’s really not at all difficult to understand this. One doesn’t need a hell of a bunch of jargon-based scientific or philosophical research to prove it. That’s all bunko crapology for self-important lackeys of academia.

            Animals do not have this at all. They have no sense of beliefs, intentions or otherwise… pure reactive systems based on physical needs that drive them to act on those necessities. AI could operate the same as animals who learn. Animals do display memory. It is memory that is the key, not consciousness per se. Animals reflect on memory, not on some internal mirror of self-reflection. Quite different systems. Nothing magic. Think of elephants that will travel thousands of miles to return to their burial grounds. Whales to their birthing grounds.

            Intelligence and cognition are not limited to the human species. We are animals as well. We just happened to develop linguistic structures that also helped us create fantasy worlds we term Culture, to defend us against our own animal or inhuman core. We are the idiots who have forgotten the secret of our own natural being. It was just a mistake, a really bad mistake.

        • Let’s simplify. Are you saying that animals don’t have metacognition? That metacognition is a mistake, a liability on the battlefield?

          Some ants pass the mirror test (they clean themselves when they observe a speck of dirt on their image in the mirror). How do you know they have no beliefs, intentions or sense of self?

          TBH, your writing reminds me of Descartes’ conception of animals as automata, without language, intention or self-knowledge. Except that (as opposed to Descartes) you seem to see this as a good thing …

          FWIW, “There is an emerging scientific consensus that animals share functional parallels with humans’ conscious metacognition”. Empirical studies are not hard to find.

          The success of dumb swarms depends on economies of scale. Lem’s scenario from The Invincible is not at all Inevitable. For example, why wouldn’t metacognitive, intelligent machines use the cloaking trick from the end of the novel? And the solar city machines: wouldn’t they be better off than swarms, with less energy needed and more computational and manufacturing capability and flexibility (because metacognition)?

          • ‘I give up’ is certainly a sensible response in a fight between the reinforcement-learning reductionist camp and a metacognitive one. Reductionists claim that metacognitivists have no model, and that the experimental data can be explained with their own models. Metacognitivists shoot back that the models don’t explain or match the data, and accuse reductionists of reifying their models.

            Just a random example of recent salvo:

            Animal Metacognition: A Tale of Two Comparative Psychologies


            My answer, if I had one, would be: it’s much too complicated for a definitive answer. I certainly don’t see any semantic apocalypse on the horizon.
