In the absence of a universal standard for built-in, pre-collision ethics, superhuman cars could start to resemble supervillains, aiming for the elderly driver rather than the younger investment banker—the latter's family could potentially sue for considerably more in lost wages. Or, less ghoulishly, the vehicle's designers could pick targets based solely on the make and model of car. “Don’t steer towards the Lexus,” says Cahill. “If you have to hit something, you could program it to hit a cheaper car, since the driver is more likely to have less money.”
The greater-good scenario is looking better and better. In fact, I’d argue that from a legal, moral, and ethical standpoint, it’s the only viable option. It’s terrifying to think that your robot chauffeur might not have your back, and that it would, without a moment’s hesitation, choose to launch you off that cliff. Or, weirder still, that it might concoct a plan with its fellow networked bots, swerving your car into the path of a speeding truck to deflect it away from a school bus. But if the robots develop that degree of power over life and death, shouldn’t they have to wield it responsibly?
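To make the greater-good logic concrete, here is a toy sketch of what a utilitarian crash chooser might look like in principle. Everything in it is hypothetical (the `CrashOption` type, the `expected_casualties` estimates, the `choose_least_harm` function are all my invention for illustration); real autonomous-vehicle planners are nothing like this, and the hard part—assigning those casualty estimates—is exactly what the article says no standard exists for.

```python
# Illustrative sketch only: a toy "greater good" collision chooser.
# All names and numbers here are hypothetical, invented for this example.
from dataclasses import dataclass

@dataclass
class CrashOption:
    description: str
    expected_casualties: float  # estimated lives lost, occupants included

def choose_least_harm(options: list[CrashOption]) -> CrashOption:
    """Pick the option minimizing expected casualties, counting the
    vehicle's own occupants the same as everyone else."""
    return min(options, key=lambda o: o.expected_casualties)

options = [
    CrashOption("swerve off the cliff (occupant only)", 1.0),
    CrashOption("continue straight into the pedestrians", 2.0),
]
# A strict utilitarian chooser sacrifices its own occupant here.
print(choose_least_harm(options).description)
```

The unsettling point the article makes falls straight out of the code: because the occupant is weighted the same as everyone else, the car will select the cliff whenever that minimizes the total.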
“That’s one way to look at it, that the beauty of robots is that they don’t have relationships to anybody. They can make decisions that are better for everyone,” says Cahill. “But if you lived in that world, where robots made all the decisions, you might think it’s a dystopia.”
Thursday, May 15, 2014
Ethics for Self Guided Autos
The Mathematics of Murder: Should a Robot Sacrifice Your Life to Save Two? | Popular Science