What do you think of Superintelligence: Paths, Dangers, Strategies?

Ok, I haven't read the book, so maybe all of these points are addressed there already. Maybe (probably) there are easy and convincing arguments against them. But here goes:

First, I'd guess that any useful general AI has to incorporate value judgements, and I think that, almost by necessity, it will as a result end up with something resembling what we might call a human worldview. It might not have the same worldview or the same values as us, but if it values life at all, it would care to some extent about all life. If it then deems us too unimportant in the big scheme of things, I guess that's fair game, and off you go, little AI, conquer the universe or whatever it is you do.

Second, let's assume we have made a general AI and give it access to every science paper ever written. What would the result be? There is a lot of conflicting information, a lot of false information. Could a superintelligent AI make sense of it all and then go on to expand on it, like curing cancer or figuring out the beginning of our universe or the inside of black holes? I doubt it. I'd guess the only field where an AI might be able to make indefinite progress on its own is mathematics, and that might be pretty useless progress overall. Could even a superintelligent AI make sense of philosophical problems and solve them once and for all? I don't think so. There are problems that simply have no answer.

Another example is predicting the weather. I would argue that even a superintelligent AI couldn't predict the weather with certainty over longer periods of time. It might become better than our best models running on our best supercomputers, but in the end it would be limited by the precision of the sensors on the ground, unless it finds some magical way to predict the weather otherwise (the toy simulation below illustrates why).

Could a superintelligent AI beat the stock market? Probably. It would certainly find some opportunities for arbitrage, and it would be faster at judging news and their impact, but it wouldn't be perfect, even with a perfect understanding of human psychology. It couldn't possibly predict the future with enough certainty; there are too many unknown variables. So there are some limits, even for a superintelligent AI.
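To make the weather point concrete, here's a minimal Python sketch (it assumes nothing beyond NumPy, and it's a cartoon, not a real weather model): it integrates the classic Lorenz (1963) system, a drastically simplified model of atmospheric convection, twice, with starting points that differ by one part in a billion as a stand-in for sensor error. The two runs diverge exponentially until they are completely unrelated, which is exactly the limit I mean.

```python
# Toy demonstration of sensitive dependence on initial conditions,
# using the Lorenz (1963) system with its classic parameters.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one forward-Euler step."""
    x, y, z = state
    return state + dt * np.array([
        sigma * (y - x),    # dx/dt
        x * (rho - z) - y,  # dy/dt
        x * y - beta * z,   # dz/dt
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # one part in a billion of "sensor error"

for step in range(1, 5001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.3e}")

# The separation grows from 1e-9 to the size of the whole attractor:
# past a certain horizon the forecast carries no information at all.
```

Run it and you can watch the forecast horizon collapse; shrinking the initial error to 1e-12 only pushes the horizon out a little, because the error grows exponentially. More intelligence doesn't buy back the lost precision; only better measurements would.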

A general or even a superintelligent AI would, at least in the beginning, still have to rely on us to further enhance its capabilities. It might propose great experiments or medical studies. It might invent new production methods and new materials, and provide us with all kinds of technical solutions to our problems. Again, in order to do any of that in a useful way, it would have to understand us deeply. Maybe it does all that just to further its own progress, and once it no longer relies on us, it will simply get rid of us or ignore us. But then again, it would arguably be a better, or at least more advanced, version of us, so I say fair game.

Another, more esoteric thought I've had lately concerns the very nature of intelligence. What is it, actually? It seems to me that our intelligence relies heavily on the findings of the past. We aren't born intelligent; we learn to be intelligent. Every new generation grows up with newer and (arguably) better information than the last and expands on it. Is that really intelligence, or is it, in a collective sense, just some random form of trial and error, some kind of evolution? Can a superintelligent AI overcome that?
