By its nature, the Open Roboethics Initiative is easy to dismiss — until you read anything they’ve published. As we head toward a self-driving future in which virtually all of us will spend some portion of the day with our lives in the hands of a piece of autonomous software, it ought to be clear that robot morality is anything but academic. Should your car kill the child on the street, or the one in your passenger seat? Even if we can master such calculus and make it morally simple, we will do so only in time to watch a flood of household robots enter the market and create a host of much more vexing problems. There’s nothing frivolous about it — robot ethics is the most important philosophical issue of our time.

Many readers are probably familiar with the following moral quandary, which is not specifically associated with robotics: a train is headed for, and will definitely kill, five helpless people, and you have access to a lever that will switch its track, directing it away from the five and over another, lone victim instead. A grislier version asks you to decide whether to push a single very large person in front of the train to bring it to a wet, disgusting halt; here it becomes impossible to deny culpability for the single death, and that culpability is the crux of the moral problem. Obviously, five dead people is worse than one dead person, but you didn’t set the train moving in the first place, and pulling the lever (or pushing that poor dude) inserts you as a directly responsible actor in whatever outcome arises. This raises the question: if you could pull the lever in time, would your inaction also make you directly responsible for the five deaths that result?
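To see why that calculus is so seductive, and so incomplete, once it gets baked into software, here is a deliberately naive sketch of how a purely utilitarian rule might look in code. Everything in it is invented for illustration: no real autonomous vehicle is known to decide this way, and the point is precisely what the rule leaves out.

```python
# Purely hypothetical sketch of a naive utilitarian decision rule.
# All names and structures here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_deaths: int
    requires_intervention: bool  # does the system have to act to bring this about?

def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    # Naive utilitarian rule: minimize expected deaths, full stop.
    # Note what it ignores: the moral weight of acting versus refraining,
    # which is exactly the distinction the trolley problem turns on.
    return min(outcomes, key=lambda o: o.expected_deaths)

if __name__ == "__main__":
    stay_on_track = Outcome("do nothing; train continues", expected_deaths=5,
                            requires_intervention=False)
    pull_lever = Outcome("divert train onto side track", expected_deaths=1,
                         requires_intervention=True)
    print(choose_outcome([stay_on_track, pull_lever]).description)
    # prints: divert train onto side track
```

The function happily returns the lever-pull, because the only thing it can see is the body count; the distinction between acting and refraining, the one that makes the push-the-large-person variant so uncomfortable, never enters the computation.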

In normal human life this stumper can be set aside quite well with the following argument: “Whatever.” That really isn’t as insensitive as it might seem, since a) the situation will almost certainly never actually arise, and b) we are not inherently responsible for anyone else’s actions. This means that the question of whether to kill the five or the one is ultimately academic, since any single person who actually makes the “wrong” decision in a real-life crisis will do so with zero moral implications for the rest of us. So unless we get really unlucky and happen to be the one who stumbles into a train-lever situation, it’s ultimately someone else’s problem. The impossibility of perfecting human behavior means we have no moral imperative to try.
