Decide who lives and who dies. The Moral Machine Test is deeply flawed because it's not a realistic representation of how robot cars would actually work. Bad assumptions include:

  1. Assuming death will occur with certainty. A computer could never know for certain that, say, the occupants would die in a crash.

  2. It's unlikely that robot cars will be able to distinguish age, gender, etc. before they become widespread.

  3. A robot car with this level of sophistication would never put itself in a situation where the only choices are driving through a crosswalk full of pedestrians or crashing.

  4. Robot cars probably wouldn't even categorize obstacles as living or not living.

Etc.

A real robot car wouldn't have to make these moral choices, so the programming would be much, much simpler.

Strictly follow the rules of the road, thus preventing 99.99% of driver-error crashes. Otherwise, brake if an unexpected obstacle of any type suddenly crosses the path, and hope for the best.
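That policy is simple enough to sketch in a few lines. This is just a toy illustration of the logic described above, not any real autonomous-vehicle stack; the inputs `rules_permit_motion` and `obstacle_in_path` are hypothetical sensor flags I've invented for the example:

```python
def choose_action(rules_permit_motion: bool, obstacle_in_path: bool) -> str:
    """Toy decision policy for the argument above: obey traffic rules,
    and brake on any unexpected obstacle without trying to classify it."""
    if not rules_permit_motion:
        return "stop"  # strictly follow the rules of the road
    if obstacle_in_path:
        return "emergency_brake"  # obstacle of any type: just brake
    return "proceed"
```

Note there's no moral-choice branch anywhere: no age, gender, or living-vs.-object classification, which is the whole point.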

Ta-da! A driver that prevents vastly more deaths than human drivers, and is thus far more moral than relying on humans.

/r/Futurology Thread Link - moralmachine.mit.edu