The crack through which artificial intelligence creeps in is an unpatchable coordination problem, one that gets ever easier to see:
Stephen Hawking deftly framed the issue when he wrote that, in the short term, A.I.’s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. … One obvious example is autonomous killing machines. More than 50 nations are developing battlefield robots. The most sought-after will be robots that make the “kill decision” — the decision to target and kill someone — without a human in the loop. Research into autonomous battlefield robots and drones is richly funded today in many nations, including the United States, the United Kingdom, Germany, China, India, Russia and Israel. These weapons aren’t prohibited by international law, but even if they were, it’s doubtful they’d conform to international humanitarian law or even the laws governing armed conflict. How would they tell friend from foe? Combatant from civilian? Who would be held accountable? That these questions go unanswered while the development of autonomous killing machines turns into an unacknowledged arms race shows how ethically fraught the situation is.