Thousands of experts, including Professor Stephen Hawking, have signed a letter urging a ban on killer robots that could one day be used by dictators to carry out racist ethnic cleansing campaigns.
The scientists warned that the world is heading for a robot arms race in which artificial killers will be so cheap to build that they become the “Kalashnikovs of tomorrow,” with the technology abused by terrorists and warlords to “perpetrate ethnic cleansing” by carrying out tasks such as “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”
The development of killer robots will reduce the need for humans to physically go to war, but it will also lower the threshold for starting one, since troops won’t need to be put in harm’s way.
The scientists also warned that if the technology is abused, it will make public acceptance of the positive aspects of artificial intelligence more difficult.
“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits,” states the letter.
The experts reiterated their call for a complete ban on killer robots, writing, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
While the prospect of terror groups and rogue dictators getting their hands on the technology is real, we have to remember that our own governments pose the biggest threat in terms of potential mass mechanized slaughter and oppression.
Indeed, a 2008 Pentagon request for contractors called for the development of robots “to search for and detect a non-cooperative human subject” during “pursuit/evasion scenarios,” directing that they have the ability to “intelligently and autonomously search.”
The New Scientist’s Paul Marks responded to the story by asking, “How long before we see packs of droids hunting down pesky demonstrators with paralysing weapons? Or could the packs even be lethally armed?”
That concept moved a step closer earlier this year when Boston Dynamics, now owned by Google, unveiled a new robot named Spot that is far more agile than its larger predecessor, Big Dog, and can run around at high speed both outside and indoors.
Steve Wright, an expert on police and military technologies from Leeds Metropolitan University, also warned that such robots would eventually be armed.
“The giveaway here is the phrase ‘a non-cooperative human subject’,” said Wright.
“What we have here are the beginnings of something designed to enable robots to hunt down humans like a pack of dogs. Once the software is perfected we can reasonably anticipate that they will become autonomous and become armed.”
Robotics Professor Noel Sharkey has also repeatedly asserted that the robots being developed by Boston Dynamics will eventually be used for crowd control and to hunt down and kill people.
“If you have an autonomous robot then it’s going to make decisions who to kill, when to kill and where to kill them,” Sharkey told the Alex Jones Show. “The scary thing is that the reason this has to happen is because of mission complexity and also so that when there’s a problem with communications you can send a robot in with no communication and it will decide who to kill, and that is really worrying to me.”
The ethical considerations attached to killer robots also represent a huge risk. A robot is less likely to weigh collateral damage when making the decision to kill, while responsibility for the deaths of innocents will be diffused, since no human would be directly to blame for the bloodshed.