Why Self-Driving Cars Must Be Programmed to Kill | MIT Technology Review

I feel like a few of the top comments here fail to grapple with just how bad the consequences can be when we try to avoid answering ethical questions in robotics.

If an autonomous car acts predictably (say, it never swerves off the road, ever), that might make it easier for others to respond when a crash unfolds in slow motion. But in a real emergency, pedestrians and other motorists will have only a split second to react -- often not enough time to ensure their own safety anyway, unless you're capable of making millions of calculations per second, like a robot would be.

Plus, the ethical consequences of holding to that position can be staggering. If there is a group of young children on the road and a single adult on the sidewalk, I think most people would agree that protecting the children at the cost of one adult could be an acceptable (if still awful) outcome, yet a few commenters here are pretending to know nothing about the basic moral sense of protecting children.

It's basically just the Trolley Problem; something people have been thinking about for decades. But for whatever reason, people who should know better are pretending there are no right or wrong answers to these questions as a matter of principle (rather than treating them as questions that might just be functionally intractable in practice).

Whatever answer you think is correct, surely we can all agree that a position of moral relativism is unlikely to yield any interesting or informative results -- especially when the implications of such a position can mean building cars that indifferently mow down groups of children in favour of 'consistent behaviour'.
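
To make the point concrete, here's a toy sketch of what "answering the question explicitly" could look like in code, as opposed to a fixed rule like "never swerve". Everything here is my own invention -- the names, the harm weights, and the scenario are purely illustrative, not a real planner or anyone's actual policy -- it just shows a car scoring candidate maneuvers by expected harm and picking the minimum.

```python
# Purely hypothetical illustration: score candidate maneuvers by expected harm.
# All weights and the scenario are made up for the sake of the argument.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    harm_to_pedestrians: float  # expected harm per pedestrian if this maneuver is chosen
    harm_to_occupants: float    # expected harm per vehicle occupant


def expected_harm(m: Maneuver, pedestrians: int, occupants: int) -> float:
    """Total expected harm under this toy model: per-person harm times headcount."""
    return m.harm_to_pedestrians * pedestrians + m.harm_to_occupants * occupants


def choose(maneuvers: list[Maneuver], pedestrians: int, occupants: int) -> Maneuver:
    """Pick whichever maneuver minimizes total expected harm."""
    return min(maneuvers, key=lambda m: expected_harm(m, pedestrians, occupants))


if __name__ == "__main__":
    options = [
        Maneuver("brake and stay in lane", harm_to_pedestrians=0.9, harm_to_occupants=0.1),
        Maneuver("swerve off the road",    harm_to_pedestrians=0.0, harm_to_occupants=0.4),
    ]
    # Three children in the road, one occupant in the car -- the article's kind of scenario.
    best = choose(options, pedestrians=3, occupants=1)
    print(best.name)  # -> "swerve off the road" under these made-up weights
```

The point isn't that these particular weights are right; it's that refusing to write any such function at all just means the trade-off gets made implicitly, by whatever fixed behaviour the engineers happened to pick.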

/r/scifi Thread Link - technologyreview.com