Robots should be fitted with an “ethical black box” to keep track of their decisions and enable them to explain their actions when accidents happen, researchers say.

Sure, but there are lots of issues with that, the biggest probably being: how do you "debug" something like that? How do you tell whether the AI was missing inputs, whether there was a bug in the algorithm or some sort of unexpected runtime error, or whether it actually made the correct decision (as counterintuitive as it might appear to humans), even if it didn't end up working out?
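
Rough sketch of what I mean (Python, and every name here is made up, including `DecisionRecord` and the `model.decide` call): just to tell those cases apart after the fact, the black box would have to record which inputs were actually present, whether an exception was thrown, and what the model chose and with what confidence. This is only an illustration of the logging problem, not how any real system does it.

```python
# Hypothetical per-decision record for an "ethical black box": enough to
# distinguish "missing inputs" from "crashed" from "the model really chose this".
import json
import time
import traceback
from dataclasses import dataclass, asdict
from typing import Any, Optional


@dataclass
class DecisionRecord:
    timestamp: float
    inputs_seen: dict[str, Any]    # what data the system actually had
    inputs_missing: list[str]      # expected sensor fields that were absent
    action: Optional[str]          # what the robot decided to do (None if it crashed)
    confidence: Optional[float]    # the model's own score for that action
    error: Optional[str] = None    # stack trace if an exception was raised


def log_decision(model, raw_inputs: dict, expected_fields: list[str], logfile: str) -> DecisionRecord:
    missing = [f for f in expected_fields if f not in raw_inputs]
    record = DecisionRecord(
        timestamp=time.time(),
        inputs_seen=raw_inputs,
        inputs_missing=missing,
        action=None,
        confidence=None,
    )
    try:
        # `model.decide` is an assumed interface returning (action, confidence).
        action, confidence = model.decide(raw_inputs)
        record.action, record.confidence = action, confidence
    except Exception:
        record.error = traceback.format_exc()  # the "bug / runtime error" case
    with open(logfile, "a") as fh:
        # append-only log, black-box style
        fh.write(json.dumps(asdict(record), default=str) + "\n")
    return record
```

Even with a record like that, the hardest case on the list, "the model really did choose this", still shows up as nothing more than an action and a confidence number.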

Humans do use a lot of intuition to make decisions, but they are also often required to justify or explain their thought process to other humans, especially in institutional settings (doctors, engineers, CEOs, etc.) and especially when something goes wrong (malpractice, fraud, etc.). But getting anything close to human-readable output is hard even for a regular program's stack trace or logging, much less for a machine learning algorithm.
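
To make that concrete, here's a toy example (the feature names, weights, and threshold are invented for the sake of illustration): for even a simple linear model, the closest thing to a built-in "explanation" is a ranked list of per-feature contributions, which is still just numbers and nowhere near the kind of justification a doctor or engineer gives.

```python
# Toy linear "decision" model with made-up weights. The only native
# explanation it can offer is how much each feature pushed the score.
weights = {"obstacle_distance": -2.1, "pedestrian_detected": 4.0, "speed": 1.3, "bias": -0.5}


def decide_and_explain(features: dict) -> tuple[str, list]:
    contributions = {name: w * features.get(name, 0.0)
                     for name, w in weights.items() if name != "bias"}
    score = sum(contributions.values()) + weights["bias"]
    action = "brake" if score > 0 else "continue"
    # "Explanation": features sorted by how strongly they pushed the score.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return action, ranked


action, why = decide_and_explain({"obstacle_distance": 0.8, "pedestrian_detected": 1.0, "speed": 0.4})
print(action, why)
# brake [('pedestrian_detected', 4.0), ('obstacle_distance', -1.68), ('speed', 0.52)]
```

And that's the easy case: for a deep network the "reasoning" is millions of weights, so turning it into something a court or a regulator could read is a research problem in itself.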

And that's before we even begin assigning responsibility: is it the developer's fault? The programmer's? The user's?

If the goal of AI is to make human-like intelligence, then yes, there will be times when it might have to resort to "it just felt like the right thing to do", but in general it should be able to explain its reasoning the same way humans normally can.

/r/technology Thread Parent Link - theguardian.com