In science fiction, autonomous robots on a killing spree are almost 100 years old, dating back to R.U.R., a 1920 stage play by the Czech writer Karel Čapek. Indeed, the word "robot" is derived from the Czech robota, meaning forced labour or drudgery. As a real-life horror, however, self-directed robots willing and able to kill human beings still lie in the future.
But not, perhaps, too far in the future. The numerous versions and variants of these menacing machines, from 1950s pulp magazines to the Terminator and Transformers films, have fuelled deep concern in some quarters. Now the British scholar Noel Sharkey and the advocacy group Human Rights Watch are campaigning against "killer robots". Their 49-page report, Losing Humanity, calls for strict controls on robotic weapons. Frankly, there are more pressing problems to worry about.
To be sure, how to control artificial intelligence and machines with some semblance of judgement is not a new concern. As far back as the 1940s, the science-fiction writer Isaac Asimov's first "law of robotics" began: "A robot may not injure a human being."
Today missile-armed drone aircraft, although operated remotely by human controllers, do evoke the notion of machines raining down death. But even the report's authors concede that autonomous killer robots are "20 to 30 years" away. Humans are still our own worst enemies.