Capitalist forces could create uncontrollable artificial intelligence, expert warns. "Current social, economic and political forces driving us towards human-level AI... are leading to potentially uncontrollable technologies"

We would be the ones defining its base motivations, so its end goals will be well-known to us

Ever seen the movie I, Robot? It's based on Isaac Asimov's book of the same name, which covers the subject more thoroughly and addresses your point here pretty well.

You aren't fully grasping the situation, and that is exactly why we should worry.

A super-intelligence will surpass our own capabilities and understanding by a very great amount, so how can we say for sure how it will act? Next to it, even our most intelligent people will seem like children only just beginning to understand the world around them (which is basically true already), let alone once we introduce something that could potentially reach the theoretical limits of intelligence, assuming such limits exist.

The problem is that a super AI can, and likely will, operate in ways that we didn't or simply couldn't foresee. How can you prepare for that? How can you safeguard against it? And what happens when something like this falls into the wrong hands? Well, that last one is a bit off-topic and could fuel an entire separate discussion, so we'll focus on the other points here.

Solution: We give it certain rules that it can't disobey.

Well, that doesn't always work, as the book I mentioned above explains decently enough. We might think we've developed rules that it could never disobey and that would always serve our benefit, but "our benefit" only means whatever we think it means, and that's where things go wrong.

To explain that last bit: say you ask it to simply make the world better, and it comes up with a solution that harms or removes mankind, because technically, removing mankind would make the world better. So you have to be very, very precise about what you want and what you don't want. But is that enough? We're imperfect and not super-intelligent, after all. If anything could find a loophole, an S-AI could.
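To put that loophole in concrete terms, here's a toy sketch (every name and number here is made up purely for illustration) of what a naively specified objective looks like to a pure optimizer. The "harm score" below is meant to stand in for "make the world better", but nothing in it says "and keep the humans around", so the literal optimum is the plan nobody wanted:

```python
# Hypothetical toy example: an optimizer handed the naive objective
# "minimize total harm in the world" ranks candidate plans and picks
# whichever scores best on that objective -- and nothing else.

def harm_score(world):
    # Naive metric: total harm scales with how many humans exist to be
    # harmed. There is no term rewarding the existence of humans.
    return world["humans"] * world["harm_per_human"]

def choose_plan(plans):
    # Pure optimization: pick the plan whose resulting world scores
    # lowest on the stated objective.
    return min(plans, key=lambda p: harm_score(p["result"]))

plans = [
    {"name": "improve healthcare",
     "result": {"humans": 8_000_000_000, "harm_per_human": 0.4}},
    {"name": "remove mankind",
     "result": {"humans": 0, "harm_per_human": 0.0}},
]

best = choose_plan(plans)
print(best["name"])  # prints "remove mankind"
```

A real system would be vastly more complicated, of course, but the shape of the failure is the same: the optimizer satisfies the objective exactly as written, not as intended.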

And what of its learning capabilities? Who is to say it won't do something undesirable in a way we couldn't foresee? We could carry out its ideas fully believing they could only benefit us, having studied them and thought them through to the best of our abilities to ensure no negative outcome, and it could still end poorly for us, simply because we couldn't see the bigger picture that something like an S-AI was operating on. That's entirely possible.

There is a reason why people like Stephen Hawking, to name a single prominent example, are concerned about super artificial intelligence.

Also, ultimately, we are the ones who decide how much freedom the AI will have.

Do you think so? How can you know that?

Emotions have caused us to do a great many stupid things.

They have also allowed us to do a great many good things, have they not? Compassion, for example, a product of our capacity for emotion, keeps us from discarding people for any of a multitude of reasons. Emotion plays an important role throughout our lives precisely where strict logic falls short. Can you not think of many things that would change for the worse if we were strictly logical?

Stupidity is the word you meant, I believe, not emotion. Emotion only overrides logic and reason where a lack of intelligence reigns, but emotion is important. It's what defines us as humans rather than cold, uncaring biological machines.

Many animals are like that: instinct defines action rather than emotion. Say a litter ends up with a runt, or offspring that is imperfect in some way. What happens? It gets discarded. How would things be if humans were like that too? Certainly it would be "beneficial" to us, as it is to the rest of the animal kingdom. But do we really want that?

We aren't only logical, we're also emotional. Love is an emotion that has benefited mankind far more than it has ever hurt us. Emotion is what dictates that we go back for people who have fallen, even at risk to ourselves. Emotion is why you stop to help someone even though you technically have no reason to, at cost to yourself, whether that cost is time, of which you have a limited supply, or money, which matters to you for many reasons. Emotion is why we don't discard our young for their imperfections. It's why we care about what happens to people even when no "sane" reason says we should. It serves as inspiration, where in single moments beautiful creations can emerge. Have you never seen a painting? Listened to a song? Have you never enjoyed these things, which are fueled by emotion?

Yes, emotion can cause us to do foolish things where intelligence is lacking, but it also benefits us in many ways, and to ignore that is short-sighted and simply "not human".

Why would you want to add these primitive pre-programmed reactions that bypass logical reasoning?

The universe doesn't care about us, only we do. A machine capable of intelligence and understanding that is far beyond our own biological capabilities will not care about us. It's 'cold logic'.

"Robot, please make the world a better place."

The next thing you know, we're all dead and the world is indeed a "better" place. I mean, look at us. We're basically a virus: an invasive species, the kind we are typically so adamant about controlling or eradicating. We spread, consume, destroy. We harm everything around us for our own selfish benefit. We even harm ourselves and each other for those same reasons. We're a very intelligent species, comparatively speaking, but we're honestly a terrible animal otherwise. Even we contemplate whether we should, or deserve to, exist. A machine is not human. It does not care, in the same way an asteroid would not care if it wiped out our planet, except this cold and uncaring asteroid can think, can reason. It's a terrifying thought, and anyone who disagrees is being pretty foolish, in my opinion. Don't underestimate things. If our world has taught us anything so far, it should be that, more so than anything else.

We have less than a hundred years to figure out how to protect ourselves from inadvertent harm or destruction, perhaps far less. This is not Y2K; this is something that is actually very plausible. We cannot completely foresee how a cold and uncaring hyper-intelligent being will respond to our whims. We could spend a hundred more years covering every aspect of anything and everything we could think of, and it could still turn out poorly for us, because we are not hyper-intelligent.

If anyone is interested in what you can do to help, http://futureoflife.org/

/r/Futurology Thread Link - ibtimes.co.uk