Algorithms: AI’s creepy control must be open to inspection - The accountability of artificial intelligence systems, from Facebook to healthcare, is shaping up to be a hot topic in 2017

My apologies, but I still don't see your core concern.

Programmers expose interfaces rather than implementations so that future programmers using their work can easily grasp the code's purpose without reading its internals.

Choosing what to expose and what to hide is a big part of what lets code communicate its purpose, as you put it. It also lets the implementation be maintained without changing the way the code is accessed. That way, for example, library updates can happen without changing the related API, so a library's author can repair or improve their code without requiring every programmer who uses it to rewrite the work that depends on it.
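Here's a rough sketch of what I mean (the names are made up, purely to illustrate): the caller depends only on the public function, so the private implementation can be swapped out in a later release without breaking anyone.

```python
import re

# Hypothetical library: callers depend only on this one public signature.
def word_count(text: str) -> int:
    """Count the words in a string."""
    return _count(text)

# The implementation is private (leading underscore) and free to change.
# A later release can swap in a faster or more correct version without
# any caller having to rewrite code that depends on word_count().
def _count(text: str) -> int:
    return len(re.findall(r"\S+", text))

print(word_count("the interface stays stable"))  # 4
```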

What you seem to call bad programming is required to satisfy your conditions for good programming. Can you see why I'm confused?

We may have an ambiguity problem here. Speaking from a programmer's perspective, I say "user" to mean another programmer using an interface to someone else's work. Let's say "end user" to mean a person using a completed application.

The end user simply does not need to know anything about the implementation. After all, the people who scrutinize code for issues are not end users; all the end user needs to be concerned with is application behavior.

In a neural network, it wouldn't make sense for a user to be aware of every input, because distributing signals between neurons is automated. If it weren't automated, the network would lose one of the key features that makes it useful, much like a cross-tip screwdriver worn down into an icepick: it's no longer useful for its apparent purpose.
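To make that concrete, here's a toy example (the sizes and weights are arbitrary): the caller supplies an input and gets an output, while the signals flowing between neurons are handled automatically and never surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network; the weights are internal state.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def predict(x):
    # The intermediate signals between neurons are computed
    # automatically; the caller never has to see or route them.
    hidden = np.tanh(x @ W1)
    return hidden @ W2

print(predict(np.ones(4)))  # input in, output out; everything between is automated
```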

While developing and testing a neural network, the user can see inputs whenever testing requires it; they need only add instructions to output that data. Even then, it's only worth doing when automated unit tests won't do the job. Generally, the more the machine can do for you, the more efficient your work becomes.
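Continuing the toy example above (again, just a sketch, not anyone's real workflow), opt-in debugging output plus an automated check might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def predict(x, debug=False):
    hidden = np.tanh(x @ W1)
    if debug:  # opt-in visibility while developing
        print("input:", x)
        print("hidden activations:", hidden)
    return hidden @ W2

# Most of the time an automated check does the job with no printing at all:
def test_output_shape():
    assert predict(np.ones(4)).shape == (1,)

predict(np.ones(4), debug=True)  # echo the data only when you actually need it
test_output_shape()
```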

For the end user, input is already exposed. When I strike a key, I usually know which one was pressed, or I soon find out when I see the typo. More to the point, if I select an image for DeepDream to process, I know which image I provided as input. Why would an end user need their input echoed back to them? That's already a given in any context where it matters, and it's not specific to neural networks.

So, from the perspective of the original developer, the user, and the end user, I just can't see why you'd be concerned about exposing input at all. As a user, the API gives you all the exposure you need; otherwise, you should use another API (or read the docs/manpages!).
