A tougher Turing Test shows that computers still have virtually no common sense

Looking at the top comment, one has to consider what demonstrators and councilmen want. (The ambiguity here comes from the classic Winograd schema sentence: "The city councilmen refused the demonstrators a permit because they feared violence" -- who does "they" refer to?) In thinking about computability, that is, whether a computer can perform a task at all, the first question to ask is: is the problem finite? In other words, the things that demonstrators and councilmen might want--can you write them all down on a list? If not, and the list is endless, then a computer is not going to be able to look through all the possibilities and pick the one that makes sense.

In a conversation, if you tell a robot it's wrong, is the list of possible correct answers finite? Even if it is, it's probably a big list--shockingly big. Big enough that even a modern processor would take a long time to search it, far longer than is acceptable in the normal flow of a conversation. So if you want to teach a robot this way, it's not going to be efficient or natural.
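To give a rough sense of why the list gets shockingly big, here is a back-of-envelope sketch (the numbers are made up for illustration): if each ambiguous word in a sentence admits a handful of plausible senses, exhaustive search has to consider every combination of senses.

```python
# Hypothetical back-of-envelope calculation: suppose a sentence has
# 10 ambiguous words and each word has 5 plausible senses. An
# exhaustive search over readings must consider every combination.
senses_per_word = 5
ambiguous_words = 10

combinations = senses_per_word ** ambiguous_words  # 5^10
print(combinations)  # 9765625 -- nearly ten million candidate readings
```

And that is for a single sentence; a multi-turn conversation multiplies these combinations again at every turn, which is why brute-force enumeration falls apart so quickly.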

The best you can do is a statistical approach that considers, for instance, which characteristics of demonstrators and councilmen are most often referred to in specific contexts. As the conversation goes along, it becomes clearer which context is in play, and the list of possible meanings narrows. It's not fair to take sentences out of context, feed them into a computer, and expect correct interpretations. The computer must be exposed to the same context we are--let it listen to the news for a week leading up to the test. Then ask it what councilmen and demonstrators want.
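The statistical idea above can be sketched in a few lines. This is only a toy illustration--the referents, context words, and counts are all invented--but it shows the mechanism: count how often each candidate referent co-occurs with context words, then pick the referent whose counts best match the words actually seen.

```python
from collections import Counter

# Toy co-occurrence counts (all numbers invented for illustration):
# how often each candidate referent has appeared alongside each
# context word in some hypothetical corpus.
cooccurrence = {
    "councilmen": Counter({"permit": 8, "feared": 12, "violence": 5}),
    "demonstrators": Counter({"permit": 3, "advocated": 9, "violence": 11}),
}

def score(referent, context_words):
    # Higher score = this referent is seen more often with these words.
    # Counter returns 0 for words never seen with the referent.
    return sum(cooccurrence[referent][w] for w in context_words)

# As more context words arrive, one referent pulls ahead of the other.
context = ["feared", "violence"]
best = max(cooccurrence, key=lambda r: score(r, context))
print(best)  # with these toy counts: councilmen (score 17 vs 11)
```

Real systems use far richer statistics than raw co-occurrence counts, but the narrowing effect is the same: each new context word shifts probability mass toward one interpretation.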

But then there is the question--what happens when new meanings appear? Are new ideas always combinations of old ideas? If not, then that opens up some interesting questions about the nature of computation and reality.

/r/technology Thread Parent Link - technologyreview.com