Elon Musk's OpenAI will teach AI to talk using Reddit

computers cannot tell the difference between reliable data and horrible trolls

I'm not so sure. This assumes there is no logic to determining whether something is a troll post or not.

Put differently, some people may not understand a joke while others do. The difference between the two people is ultimately data. Maybe it's an in-joke where you need to know some backstory, or one that depends on current events. Maybe it's a play on words that requires knowing a language well enough to recognize the similarities between words.

Troll posts, or to put it another way, 'unwanted' posts, likely have some logic to them as well. That logic is often even written out as Reddit and subreddit rules. These rules, at least the well-thought-out ones, did not spring up out of nothing; they came about from users taking actions that did not facilitate the main goals of that section of the site. With enough of these actions (data), mods were able to create guidelines (rules) for filtering out posts. Sometimes mods fail at upholding these rules because the rules are interpreted poorly, or are poorly constructed, and so they get re-engineered and re-taught.
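Just to sketch what I mean (the rule names and keyword lists below are made up for illustration, not real subreddit rules), those written-out rules are basically checks you could run over any post:

    # Toy sketch: subreddit-style rules encoded as simple checks.
    # Rule names and keywords are hypothetical examples, not real rules.

    RULES = {
        "no_all_caps_titles": lambda post: post["title"].isupper(),
        "no_slurs": lambda post: any(w in post["body"].lower() for w in {"slur1", "slur2"}),
        "no_link_only_posts": lambda post: post["body"].strip().startswith("http")
                                           and len(post["body"].split()) == 1,
    }

    def violations(post):
        """Return the names of every rule this post breaks."""
        return [name for name, check in RULES.items() if check(post)]

    post = {"title": "READ THIS NOW", "body": "http://example.com"}
    print(violations(post))  # ['no_all_caps_titles', 'no_link_only_posts']

The point isn't that these particular checks are any good, it's that the rules are expressible at all, which means they came from somewhere (data).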

In society we generally do the same until we can establish/flag what is inappropriate to communicate in a given situation. Oftentimes our model of what is inappropriate is flawed and underdeveloped (like Microsoft's Twitter bot), and we make an ass of ourselves.

However, if we don't speak over a microphone to random people, but instead passively observe and only test our model with people who won't take offense, we can ultimately establish a model (or, realistically, many different models for different situations) that works. A potentially horribly dull environment for a human, but perfectly fine for an AI.

Computers are dumb: we put things in, and they give things back.

AI written by humans is a bit smarter: we tell it some rules to use, and it will use those rules. Oftentimes these rules are poor, unoptimized, or simply flawed, and with enough data we can slowly change them to make a better AI. This pace will likely never be quick or flexible enough to satisfy humans in the more debated aspects of life (censorship, companionship, the arts), but for more mundane tasks it seems to be doing an okay job (ranking web pages for search engines, a semi-predictable AI in a game).
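That "slowly change the rules with data" step can be dead simple. A toy example (the dataset and the "too many links means spam" rule are invented here, just to show the shape of it): keep the hand-written rule, but let labeled examples pick its threshold.

    # Toy sketch: tuning one hand-written rule against a small labeled dataset.
    # The data and the rule itself are hypothetical.

    labeled = [
        ({"links": 0}, False), ({"links": 1}, False), ({"links": 2}, False),
        ({"links": 4}, True),  ({"links": 7}, True),  ({"links": 3}, True),
    ]

    def accuracy(threshold):
        return sum((post["links"] >= threshold) == is_spam
                   for post, is_spam in labeled) / len(labeled)

    # Pick the threshold that best matches what moderators actually flagged.
    best = max(range(1, 10), key=accuracy)
    print(best, accuracy(best))  # 3 1.0

The rule is still human-written and still crude; the data just stops us from guessing the number.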

AI written by AI, or 'deep learning' AI, can create branched simulations to test how rules might play out, or (being a machine) find examples of a rule failing in a specific way in its vast library of data. It can create rules that fail at such a quick pace that eventually it should establish rules that do not. Clearly we aren't there yet, otherwise it would be in the news, but there is no trouble in quietly failing until it is.
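A crude sketch of that "generate rules fast, keep whatever fails least" loop, with everything (the two features, the labels, the candidate rules) invented purely for illustration:

    import random

    # Posts reduced to two made-up features; labels say whether mods removed them.
    dataset = [
        ({"caps_ratio": 0.9, "links": 5}, True),
        ({"caps_ratio": 0.1, "links": 0}, False),
        ({"caps_ratio": 0.8, "links": 0}, True),
        ({"caps_ratio": 0.2, "links": 6}, True),
        ({"caps_ratio": 0.3, "links": 1}, False),
    ]

    def random_rule():
        """Propose a random rule: flag a post if either feature passes its threshold."""
        caps_t = random.uniform(0, 1)
        links_t = random.randint(0, 10)
        return lambda post: post["caps_ratio"] >= caps_t or post["links"] >= links_t

    def failures(rule):
        return sum(rule(post) != removed for post, removed in dataset)

    random.seed(0)
    best = min((random_rule() for _ in range(10000)), key=failures)
    print(failures(best))  # a good run finds a rule that fails on none of the 5 posts

Real deep learning is obviously far more sophisticated than random search over two thresholds, but the failing-cheaply-and-quickly part is the same idea.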

/r/technology Thread Parent Link - engt.co