Thursday, 03 April 2014

The Problem With Self-Driving Cars: They Don't Cry

Sure, we can make a self-driving car, but can we make a self-driving car with feelings?

Noah Goodall, a University of Virginia scientist, asks that question in a new study of autonomous driving. Goodall (no doubt a big fan of the “Terminator” movies) isn’t so much worried about the driving as about the crashing: can robot cars be taught to make empathetic, moral decisions when an accident is imminent and unavoidable?

It’s a heady but valid question. Consider a bus swerving into oncoming traffic. A human driver may react differently than a sentient car if, for example, she notices the bus is full of school kids. Another person may swerve differently than a robot driver would in order to prioritize the safety of a spouse in the passenger seat.

This stuff is far more complicated than calibrating safe following distances or even braking for a loose soccer ball. Goodall writes: “there is no obvious way to effectively encode complex human morals in software.”

According to Goodall, the best options for car-builders are “deontology,” an ethical approach in which the car is programmed to adhere to a fixed set of rules, and “consequentialism,” in which it is set to maximize some benefit–say, driver safety over vehicle damage. But both approaches are problematic. A car operating within those frameworks might choose a collision path based on how much the vehicles around it are worth or how high their safety ratings are–which hardly seems fair. And should cars be programmed to save their own passengers at the expense of greater harm to those in other vehicles?
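To make the distinction concrete, here’s a minimal sketch of how the two frameworks might rank the same set of emergency maneuvers. It’s a purely hypothetical illustration, not code from Goodall’s study; the maneuver names, the rule, and the harm figures are all invented.

```python
# Hypothetical sketch: a deontological vs. a consequentialist picker.
# Nothing here comes from Goodall's paper; names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    breaks_rule: bool     # violates a fixed rule, e.g. crossing the center line
    expected_harm: float  # estimated injury/damage cost, arbitrary units

def deontological_choice(options):
    """Follow the rules: refuse any maneuver that breaks one."""
    legal = [m for m in options if not m.breaks_rule]
    return legal[0] if legal else None

def consequentialist_choice(options):
    """Maximize the benefit: pick whatever minimizes expected harm."""
    return min(options, key=lambda m: m.expected_harm)

options = [
    Maneuver("brake hard in lane", breaks_rule=False, expected_harm=8.0),
    Maneuver("swerve across the center line", breaks_rule=True, expected_harm=2.0),
]

print(deontological_choice(options).name)    # -> brake hard in lane
print(consequentialist_choice(options).name) # -> swerve across the center line
```

The two pickers disagree about the very same crash, which is the crux of the problem: the “right” answer is a value judgment baked into the software long before the emergency.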

In a crash situation, a human driver processes a staggering amount of information in fractions of a second. The computer does the same thing, but much faster, and its decisions are effectively already made–set months or years earlier, when the vehicle was programmed. It just has to process; it doesn’t have to think.

The apparent middle ground is a kind of hybrid model in which the car does the driving and a human can intervene and override the autonomy in a sticky situation. Goodall points out, though, that drivers on autopilot may not be as vigilant as they should be–particularly coming generations who may learn to drive in sentient cars.

Goodall’s main point is that engineers had better start thinking about this stuff, because crashes will be unavoidable even with perfectly functioning robot chauffeurs. In addition to fine-tuning radar systems and steering, the self-driving wizards at places like Google (GOOG) should be working on “ethical crashing algorithms” and artificial-intelligence software through which self-driving cars learn from human feedback.
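Goodall doesn’t spell out how such feedback-driven software would work, but one toy way to picture it (entirely my sketch, with invented features, weights, and numbers) is a cost model whose weights get nudged whenever a human override disagrees with the car’s ranking of two maneuvers:

```python
# Toy sketch of "learning from human feedback" for crash decisions.
# My illustration, not Goodall's algorithm; all values are invented.
import numpy as np

# Per-maneuver features: [occupant risk, other-party risk, property damage]
weights = np.array([1.0, 0.5, 0.1])  # how much each factor "counts"

def cost(features):
    return weights @ features

def learn_from_override(chosen, rejected, lr=0.05):
    """The human preferred `chosen` over `rejected`; if the cost model
    disagrees, nudge the weights so `chosen` scores lower next time
    (a simple perceptron-style preference update)."""
    global weights
    if cost(chosen) > cost(rejected):
        weights = weights - lr * (chosen - rejected)

# Example: the human swerved to protect others, accepting more risk
# to the car's own occupants; the weights shift accordingly.
swerve = np.array([0.6, 0.1, 0.5])
brake  = np.array([0.2, 0.8, 0.2])
learn_from_override(chosen=swerve, rejected=brake)
print(weights)  # occupant-risk weight falls, other-party weight rises
```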

He also recommends that engineers and lawyers put their heads together to come up with some kind of standard. The current policies from the National Highway Traffic Safety Administration don’t drift into ethics at all.

As for automakers, it’s easy to envision Goodall’s ideas informing a whole new set of programmable driving modes: “D+” for protecting the driver at all costs, “P” for saving pregnant passengers, and “S” for selfless decision-making.
