Lee Sedol reacts to AlphaGo's strange move.

By AlphaGo's method of evaluating moves, this move is preferable to its other options. But that method is not particularly humanlike, and it's situations like this that expose it. To a human, this move is clearly nonsense.

But to AlphaGo, this move provides a tiny glimmer of hope: Lee Sedol might ignore it! We know that would never happen, but the software doesn't understand things like that. It doesn't know anything about other players, or how they think, or even what kind of moves people do and don't make. It doesn't even remember the moves Lee Sedol has already made. All it sees is the board as it currently exists, and from that board, this move has some variations that work out for it while most other moves don't.

The fact that these variations rely on Lee Sedol making an insane mistake doesn't enter into it. So the software plays the move.
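To make that concrete, here's a toy sketch of this style of evaluation. It is mine, not AlphaGo's code: the game is Nim rather than Go so it fits in a comment, and names like `legal_moves` and `estimate_win_rate` are my own stand-ins. But the shape is the same -- random playouts from the current position only, with both sides played by the same blind sampling policy, no opponent model, no memory of previous moves:

```python
import random

# Toy stand-in for Go: Nim, where each turn you take 1-3 stones and
# taking the last stone wins. The game doesn't matter; what matters is
# that the evaluator sees only the current position -- no history, no
# idea of who the opponent is. All names here are my own stand-ins.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def estimate_win_rate(stones, n_rollouts=200):
    """Fraction of random playouts won by the player to move here."""
    wins = 0
    for _ in range(n_rollouts):
        s, to_move = stones, 0      # 0 = player to move, 1 = opponent
        while s > 0:
            s -= random.choice(legal_moves(s))
            to_move ^= 1
        wins += (to_move == 1)      # flipped after the winning take,
    return wins / n_rollouts        # so 1 means player 0 won

def choose_move(stones):
    # Highest sampled win rate after our move -- the rollouts "assume"
    # the opponent plays like the sampler, not like Lee Sedol.
    return max(legal_moves(stones),
               key=lambda m: 1 - estimate_win_rate(stones - m))

random.seed(1)
print(choose_move(7))   # 3: leaving 4 stones is a theoretical win
```

Nothing in `choose_move` can represent "a strong human would never fall for this"; a variation that wins against the sampler counts the same as one that wins against perfect play.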

(This is a bit of an oversimplification. The software does try to assume that the opponent will make the best move available. But AlphaGo's notion of "best" is a bit funny. If you watched the earlier games, you might have noticed that it seemed to make strange moves when ahead, too. That's another manifestation of the same basic problem: it can't read out every variation, so it reads out a random sample and checks what percentage of the variations stemming from each candidate move appear to lead to a win. When those percentages are all very close to 0% or 100%, the differences between moves shrink below the noise in its random sample, and its play becomes imprecise: it can't reliably distinguish good moves from bad ones -- the outcome of the game is unlikely to change regardless. That imprecision applies not only to its own moves but also to the moves it imagines its opponent might make. And so it does odd things.)
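You can see that imprecision in miniature with made-up numbers. Suppose three candidate moves have the true win probabilities listed below (my invented values, nothing from AlphaGo), and we estimate each from 500 random playouts:

```python
import random

def pick_best(true_probs, n_rollouts):
    """Sample playouts per move, return the index of the move with the
    best sampled win rate (ties broken randomly)."""
    est = [sum(random.random() < p for _ in range(n_rollouts)) / n_rollouts
           for p in true_probs]
    top = max(est)
    return random.choice([i for i, e in enumerate(est) if e == top])

def agreement(true_probs, n_rollouts, trials=2000):
    """How often the sampled choice matches the truly best move."""
    best = true_probs.index(max(true_probs))
    return sum(pick_best(true_probs, n_rollouts) == best
               for _ in range(trials)) / trials

random.seed(0)
# Close game: a few percent separates the moves, and 500 playouts
# rank them correctly almost every time.
print(agreement([0.55, 0.50, 0.45], 500))    # roughly 0.9
# Hopeless game: every move loses almost surely, the differences that
# matter are tiny, and the choice is barely better than chance (0.33).
print(agreement([0.012, 0.010, 0.008], 500)) # roughly 0.45
```

The individual estimates are "accurate" in absolute terms either way; what collapses at the extremes is the ability to tell near-identical moves apart, which is exactly what move selection needs.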
