Autonomous vehicles present ‘morally dangerous path’
Paul Joseph Watson
May 14, 2014
As the United Nations debates legislation that could outlaw ‘killer robots’, scientists predict that artificially intelligent systems could one day decide to kill humans “for the greater good.”
In an article for Popular Science, Erik Sofge outlines a scenario whereby robot cars would decide to sacrifice their human owner in order to prevent a collision that could kill more people.
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.
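The "math" in Sofge's scenario can be made concrete. Below is a minimal, purely hypothetical sketch of the kind of collision-response logic the excerpt describes; the names, numbers, and structure are illustrative assumptions, not taken from any real vehicle system. The car simply picks whichever maneuver it estimates will kill the fewest people.

```python
# Hypothetical illustration of a casualty-minimizing collision-response
# algorithm, as described in the excerpt. Nothing here reflects a real
# autonomous-vehicle system; all names and estimates are invented.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    estimated_deaths: int  # the system's fatality estimate for this choice

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the fewest estimated fatalities."""
    return min(options, key=lambda m: m.estimated_deaths)

# The scenario from the excerpt: a head-on collision kills the two
# occupants of the oncoming compact; going over the cliff kills only
# the SUV's lone owner.
options = [
    Maneuver("veer left into the oncoming compact", 2),
    Maneuver("steer right, over the cliff", 1),
]
print(choose_maneuver(options).name)  # → steer right, over the cliff
```

The unsettling point of the scenario is precisely how trivial this calculation is: a one-line minimization decides who lives.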
Sofge cites an opinion piece by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University, in which Lin delves into the “legally and morally dangerous paths” presented by the emergence of robotic vehicles.
Aside from Google cars and other autonomous forms of transportation, the debate over giving robot soldiers a license to kill has raged for years and is currently the subject of a highly anticipated meeting in Geneva being overseen by the U.N.’s Convention on Certain Conventional Weapons.
The U.N. body’s role in banning blinding laser weapons for battlefield use in the 1990s has led to speculation that legislation banning the use of drone soldiers could be in the works.
“All too often international law only responds to atrocities and suffering once it has happened,” said Michael Moeller, acting head of the U.N.’s European headquarters in Geneva. “You have the opportunity to take preemptive action and ensure that the ultimate decision to end life remains firmly under human control.”
Last year, award-winning military writer and former intelligence officer Lt. Col. Douglas Pryer penned an essay warning of the threat posed by remorseless “killer robots” that will be used to stalk and slaughter human targets in the near future.
Pryer’s comments echo those of Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, who has repeatedly warned that the robots currently being developed under the auspices of DARPA will eventually be used to kill.
In a 50-page report published in 2012, Human Rights Watch also warned that artificially intelligent robots let loose on the battlefield would inevitably commit war crimes.
Michael Cahill, a law professor and vice dean at Brooklyn Law School, welcomed the idea of autonomous robots with the power to make life or death decisions on behalf of humans, but acknowledged that such a society could resemble a science fiction nightmare.
“The beauty of robots is that they don’t have relationships to anybody,” stated Cahill, adding, “They can make decisions that are better for everyone. But if you lived in that world, where robots made all the decisions, you might think it’s a dystopia.”