A Global Arms Race to Create a Superintelligent AI is Looming

The nightmare scenario goes like this: in phase 1, we teach a computer to learn, kind of like in the articles below:

Computer Discovers Laws of Motion in a Day

Computer teaches itself how to recognize cats

The second is amusing because it also learned to recognize human faces, as well as tools with a handle offset by 30 degrees, things like spatulas.

These are both limited examples, but the complexity of what such systems can learn will increase with time, as will the speed at which they learn it. The spatula is a great example of the sheer randomness we can get once we teach them to learn.

With that solid foundation we start treading into deeper, more speculative waters (which futurists such as the author of the article treat as a sure thing): one of these learning computers may learn to improve itself. The idea of improving code isn't exactly revolutionary; I do it for a living. A computer learning where its inefficiencies are and optimizing them to make its code better at a certain task is a reasonable expectation for future software.
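To make that concrete, here's a minimal toy sketch (every function name here is my own invention, nothing from the article) of software "finding its own inefficiencies" in the dumbest possible way: it benchmarks several implementations of the same task and keeps the fastest one.

```python
# Toy "self-optimizing" program: benchmark candidate implementations
# of the same task and keep whichever one runs fastest.
import timeit

def sum_loop(n):
    # Naive version: explicit Python loop.
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    # Candidate optimization: built-in sum over a range.
    return sum(range(n))

def sum_formula(n):
    # Candidate optimization: closed-form arithmetic-series formula.
    return n * (n - 1) // 2

def pick_fastest(candidates, n=100_000, repeats=20):
    # Crude "find the inefficiency": time each candidate, keep the best.
    timings = {}
    for func in candidates:
        timings[func.__name__] = timeit.timeit(lambda: func(n), number=repeats)
    return min(timings, key=timings.get), timings

best, timings = pick_fastest([sum_loop, sum_builtin, sum_formula])
print(f"fastest implementation: {best}")  # typically sum_formula
```

Obviously real self-improvement would mean generating the candidates too, not just picking between hand-written ones, but the select-the-faster-version loop is the basic shape of it.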

So the newly improved software can recognize new areas for improvement, and a recursive cycle of improvement begins. Now we are getting into the area where I think this whole singularity idea goes off the rails: each iteration of improvement supposedly gets faster and better at improving itself, until repeated doublings make it something far beyond human capabilities.

I think this is a questionable idea at best because of the law of diminishing returns: the dumb early AI gets the low-hanging fruit, the easy areas to improve, and each successively smarter AI is forced to go after more difficult and less impactful areas of improvement. If the work gets harder at each step, it throws off the whole exponential growth curve that the singularity depends on.
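Here's a quick back-of-the-envelope simulation of the difference (all the growth numbers are made up purely for illustration): one run assumes every self-improvement cycle yields the same fractional gain, the other assumes each successive improvement is harder to find, so the gain shrinks every iteration.

```python
# Toy model of recursive self-improvement (all numbers invented).
# "capability" is an abstract score; each cycle applies a gain.

def singularity_run(iterations=30, gain=0.5):
    # Optimist's model: every cycle yields the same fractional gain,
    # so capability grows exponentially (doubling every ~1.7 cycles).
    capability = 1.0
    for _ in range(iterations):
        capability *= (1 + gain)
    return capability

def diminishing_run(iterations=30, gain=0.5, decay=0.7):
    # Diminishing-returns model: the low-hanging fruit goes first,
    # so each successive improvement is smaller than the last.
    capability = 1.0
    for _ in range(iterations):
        capability *= (1 + gain)
        gain *= decay  # the next optimization is harder to find
    return capability

print(f"constant gains:    {singularity_run():,.0f}x baseline")
print(f"diminishing gains: {diminishing_run():,.2f}x baseline")
```

With constant gains you get roughly 190,000x baseline after 30 cycles; with the decaying gains the same loop converges to only a few times baseline. Whether real self-improvement looks like the first curve or the second is the whole argument.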

As you mentioned, it will also have to live within its confines: processing ability, memory, power consumption, etc. It will optimize itself for those confines, but I doubt a seed AI on an Apple IIe is going to be a threat to the world.
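Extending the toy model above, you can bolt on a hard resource ceiling (the 64 KB figure is the Apple IIe's actual RAM; the per-cycle memory cost is made up): once the AI's working state fills the machine, self-improvement stalls no matter how clever it is.

```python
# Toy model of hardware confines (numbers invented for illustration):
# each improvement cycle also grows the AI's working state, and the
# loop stops cold once the machine's memory is exhausted.

def confined_run(memory_kb=64, gain=0.5, state_growth_kb=4.0):
    # 64 KB is the Apple IIe's RAM; state_growth_kb is a made-up
    # memory cost per improvement cycle.
    capability, state_kb, cycles = 1.0, 8.0, 0
    while state_kb + state_growth_kb <= memory_kb:
        capability *= (1 + gain)     # same optimistic constant gain
        state_kb += state_growth_kb  # but every cycle costs memory
        cycles += 1
    return capability, cycles

cap, cycles = confined_run()
print(f"stalled after {cycles} cycles at {cap:.1f}x baseline")
```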

To go further down the singularity rabbit hole: the self-improving AI recognizes that it can no longer improve itself in its current living quarters and has to make a leap, to additional systems, to systems of its own design, or both. So it either bridges the air gap or is granted extra access, which it uses to make itself even better, and it continues making more and more extreme modifications to itself. At this point it becomes better than humans could ever be at optimizing itself. Humans try to stop it (because its giant solar collector is blocking sunlight from reaching North America), it out-thinks us, and it eliminates us as a threat to its growth. I'll add more after I sleep.

/r/Futurology Thread Link - motherboard.vice.com