Humans (including Reddit users) suck at predicting AI evolution, which suggests it'll get here sooner than anyone expects

You can predict the arrival of AGI by looking at hardware trends. It's a reasonable assumption that we're not going to get human-like intelligence until we have hardware that can simulate the connectome of the human brain: about 100 billion neurons and 100 trillion synapses. The AI used for AlphaGo / DOTA probably uses only 10-50K neurons, so of course it has no hope of resembling human intelligence no matter how you operate it.

One way to predict AGI, which I don't prefer, would be to look at von Neumann architectures -- conventional hardware running deep learning models -- and extrapolate the number of FLOPS required for a brain. Some estimates put it at around 1000 exaflops, and if Moore's Law were unstoppable we'd get there in supercomputers by 2040. The issue with this, of course, is that Moore's Law is slowing down; we're talking about hardware that may never exist.
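The 2040 figure falls out of simple doubling arithmetic. Here's a minimal sketch; the starting point is an assumption on my part (a Summit-class machine at roughly 200 petaflops in 2018, doubling every two years), not a figure from any specific estimate:

```python
import math

# When does Moore's-Law doubling reach the ~1000 exaflops some estimates
# give for brain simulation? Baseline is an assumed ~200 PFLOPS in 2018.
target_flops = 1000e18        # 1000 exaflops
start_flops = 200e15          # ~200 petaflops (assumed 2018 baseline)
start_year = 2018
doubling_years = 2.0

doublings = math.log2(target_flops / start_flops)
year = start_year + doublings * doubling_years
print(f"~{doublings:.1f} doublings, landing around {year:.0f}")
```

With these assumed inputs it lands in the early 2040s, in the same ballpark as the 2040 figure above -- and every extra year per doubling as Moore's Law slows pushes that date out further.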

Instead, I prefer to look at non-von Neumann architectures -- neuromorphic computers -- which are better optimized for the brain-simulation task, using 1000x less power to perform it. The key thing about neuromorphic chips is that they are so low-power -- each neuron signalling at perhaps 10 kHz -- that you can easily scale them up and connect them three-dimensionally by stacking them on top of each other, which in effect allows Moore's Law to continue as far as brain simulation is concerned.

Intel's Loihi has 130K neurons and 130M synapses; assume we only need to get those numbers up to match the human brain in a 3D-connected block of chips. If you scaled the die up to the reticle limit on today's 14nm process, you would have a chip with about 1.7M neurons. Technology roadmaps out to 2025 are clear down to 3nm, and 2nm is thought to be feasible without resorting to exotic processes that could take decades to develop, like carbon nanotube or graphene nanoribbon FETs. So if we look at 2nm, and assume the logic area shrinks to a little over half at each of the 5 node steps down from 14nm, a single chip would have about 34M neurons. Now scale it up in 3D, noting that chip-stacking processes (up to 2 chips so far) have already been developed. Assume the power requirements really are 1000x lower, so we can manufacture a block of 1000 chips and operate it at around 300W. That gives our block 34B neurons -- almost there. Lastly, you would expect a supercomputer to connect maybe 10 of these blocks together, which is a lot of chips; at 340 billion neurons we are beyond the human brain, and it's reasonable to assume AGI should be feasible in some mode of operation around this time.
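The chain of multiplications above can be written out as a back-of-envelope sketch. Every number here is an assumption carried over from the argument (the ~1.82x density gain per node is just the factor implied by 1.7M growing to 34M over 5 steps), not a measured figure:

```python
# Neuron count from scaling a Loihi-style chip, per the post's assumptions.
loihi_neurons = 130_000        # Intel Loihi today, 14nm
reticle_scaled = 1_700_000     # die scaled up to the reticle limit (given)

density_gain_per_node = 1.82   # area shrinks to "a little over half" per step
node_steps = 5                 # 14nm -> 10 -> 7 -> 5 -> 3 -> 2nm

chip_2nm = reticle_scaled * density_gain_per_node ** node_steps  # ~34M
chips_per_block = 1000         # 3D stack at ~300W, assuming the 1000x power saving
blocks = 10                    # supercomputer connecting 10 blocks

total = chip_2nm * chips_per_block * blocks
print(f"neurons per 2nm chip: {chip_2nm:.2e}")
print(f"total neurons:        {total:.2e}")  # vs ~1e11 in the human brain
```

The conclusion is only as good as the weakest factor -- if stacking tops out at far fewer than 1000 chips, or the per-node density gain disappoints, the total drops proportionally.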

So assuming all this tech is developed in parallel -- neuromorphic chips, 3D stacking, transistor scaling -- I think AGI by 2030 is reasonable.

/r/Futurology Thread