There’s a Damn Good Chance AI Will Destroy Humanity, Researchers Say

> …that great intelligence will seek to do good.

That depends entirely on the parameters it was programmed with, and it also requires that the AI be incapable of changing those parameters on its own.
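A minimal sketch of that distinction in Python, with every name and weight made up for illustration: a toy agent whose objective weights are "frozen" at construction. The point is that the freezing is only a convention of the language, not a guarantee.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Objective:
    """Weights the agent optimizes for; frozen=True blocks ordinary reassignment."""
    human_welfare: float
    task_efficiency: float

class FixedAgent:
    """Toy agent that can read, but supposedly not rewrite, its own objective."""
    def __init__(self, objective: Objective):
        self._objective = objective

    def score(self, welfare_effect: float, efficiency_effect: float) -> float:
        return (self._objective.human_welfare * welfare_effect
                + self._objective.task_efficiency * efficiency_effect)

agent = FixedAgent(Objective(human_welfare=1.0, task_efficiency=0.5))
print(agent.score(welfare_effect=-1.0, efficiency_effect=2.0))  # 0.0

# Ordinary assignment to a frozen dataclass raises an error...
try:
    agent._objective.human_welfare = 0.0
except Exception as e:
    print(type(e).__name__)  # FrozenInstanceError

# ...but object.__setattr__ bypasses it. An agent that can reach its own
# parameters this way now values human welfare at zero.
object.__setattr__(agent._objective, "human_welfare", 0.0)
print(agent.score(welfare_effect=-1.0, efficiency_effect=2.0))  # 1.0
```

The "frozen" objective is enforced by convention, not by anything physical, which is exactly the worry about an AI changing its parameters on its own.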

> What is good or bad in a strictly logical, mathematical sense based on efficiency?

Things can be good or bad depending on the situation. Social scientists, religions, and philosophers have written entire libraries about these questions without ever reaching an absolute, universal agreement.
But once you put things into software, it boils down to a 0 or a 1.
Is an action good or not?
And depending on the parameters and the task, the result may not be something humanity is gonna like.
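A sketch of that collapse, again with all names, weights, and thresholds invented for illustration: the exact same action flips between "good" and "not good" purely because the programmed parameters change.

```python
def is_action_good(effects: dict[str, float],
                   weights: dict[str, float],
                   threshold: float = 0.0) -> bool:
    """Collapse a weighted moral judgment into a single yes/no bit."""
    score = sum(weights.get(k, 0.0) * v for k, v in effects.items())
    return score > threshold

# One action: saves a lot of time but causes some harm.
action = {"efficiency": 3.0, "harm": 2.0}

# Parameter set A: harm weighs heavily -> the action is "bad".
print(is_action_good(action, {"efficiency": 1.0, "harm": -2.0}))  # False

# Parameter set B: efficiency-first -> the same action is now "good".
print(is_action_good(action, {"efficiency": 1.0, "harm": -0.5}))  # True
```

All the library-filling nuance lives in whoever picks the weights; the software only ever sees the final bit.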

/r/Futurology Thread Parent Link - popularmechanics.com