We should be scared of people using AI for evil, not AI taking over

How do you propose we contain a greater than human intelligence?

One suggestion I've come across in my reading is to contain it in a thick metal Faraday cage (to keep it from using its superior understanding of electromagnetism to manipulate its own electronics and commandeer a mobile device via radio waves and escape that way), buried kilometers underground with no access to the internet, and perhaps even going as far as limiting its responses to human questions to a simple yes or no (to keep it from using its super-empathy/intelligence to manipulate its human captors). That's just one theory. As for my own view, as far as existential risks are concerned, I think we as a species would be wisest not to bring such an entity into existence, since I don't think we'd be able to contain it indefinitely. Though otherwise I am naturally very curious.

Also, I see this attitude a lot on this subreddit, but the first synthetic intelligence will probably not be greater than human. We barely understand our own consciousness; we aren't going to jump from non-sapient AI to greater-than-human intelligence immediately.

Within this body of thought is discussion of whether we would most likely experience a slow, moderate, or fast takeoff, taking decades, months/years, or days/minutes respectively. This is important to discuss because it may be the factor on which our response, and our survival, hinges.

I personally feel that once the intelligence gained the capacity to improve itself by rewriting its own code and was able to learn dynamically, takeoff would be quick. Part of why I think so is best left for you to mull over yourself, considering these points:

  • Axons carry action potentials at speeds of 120 m/s or less, whereas electronic processing cores can communicate optically at the speed of light (300,000,000 m/s).

  • The human brain has somewhat fewer than 100 billion neurons. Humans have about three and a half times the brain size of chimpanzees. By contrast, computer hardware is indefinitely scalable up to very high physical limits. Supercomputers can be warehouse-sized or larger, with additional remote capacity added via high-speed cables.

  • Human working memory is able to hold no more than some four or five chunks of information at any given time. While it would be misleading to compare the size of human working memory directly with the amount of RAM in a digital computer, it is clear that the hardware advantages of digital intelligences will make it possible for them to have larger working memories. This might enable such minds to intuitively grasp complex relationships that humans can only fumblingly handle via plodding calculation.

So, considering these points, an AI would inherently be able to propagate signals millions of times faster than we can (the numbers above imply a factor of roughly 2.5 million), consider hundreds or even billions of points of view when mulling over a single thought, and have essentially unbounded memory. As our AIs make more and more use of their capacity for learning, the crossover from subhuman to superhuman may come faster than we'd be able to notice.
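
For what it's worth, here's a quick back-of-the-envelope sketch (Python) of that raw signal-speed gap, using only the figures quoted above. It's obviously a simplification: propagation speed isn't the same thing as thinking speed, and real interconnects run somewhat below the speed of light.

```python
# Rough comparison of raw signal speeds, using the figures quoted above.
# Simplification: propagation speed is not "thinking speed", and real
# optical/electrical links run somewhat below c.
axon_speed_m_per_s = 120                   # fast myelinated axons, upper end
interconnect_speed_m_per_s = 300_000_000   # idealized optical link at c

ratio = interconnect_speed_m_per_s / axon_speed_m_per_s
print(f"Raw signal-speed advantage: about {ratio:,.0f}x")  # ~2,500,000x
```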

/r/singularity Thread