ELI5 Why technology is still stuck on 64bits?

Moving up in bits is actually bad, and we don't want to do it unless we have to. In a rough sense, every low-level action now involves twice as much data, effort you'd rather spend doing two operations instead of one. It doesn't work out exactly that way in all cases, but that's broadly the idea: going from 32 bits to 64 bits means single instructions now have to deal with twice as much stuff. If anything, we'd really rather go the other way, but 32 bits was causing us to hit some critical walls. A 32-bit system can only deal with numbers up to about 4 billion natively, and since we use numbers to count up memory addresses, it was also making it difficult to design systems with more than ~4 GB of memory. Plus, there are a decent number of use cases besides memory addresses where we really just want to deal with numbers bigger than 4 billion easily. Those things are inconvenient on 32-bit systems, so everyone has migrated up to 64 bits as a pretty universal mainstream standard.
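If you want to see the "~4 billion" and "~4 GB" walls fall out of the math, here's a small sketch (just plain arithmetic in Python, nothing platform-specific):

```python
# The "walls" a 32-bit system hits, as plain arithmetic.

# Largest value a 32-bit unsigned integer can hold:
max_u32 = 2**32 - 1
print(max_u32)                    # 4294967295, i.e. ~4.29 billion

# If each address names one byte of memory, 32-bit addresses can only
# reach 2**32 bytes in total:
addressable_bytes = 2**32
print(addressable_bytes // 2**30)  # 4 GiB -- the ~4 GB memory ceiling

# Wrapping: add 1 past the max and a real 32-bit register rolls over to 0.
print((max_u32 + 1) % 2**32)      # 0
```

The rollover at the end is the "can't count higher natively" problem: the hardware doesn't error out, it just wraps back to zero.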

Now we can represent numbers billions of times larger: ~4 billion x ~4 billion, which lands in the quintillions. There aren't a lot of use cases for numbers that large. For memory, we can now address billions of GB, which is roughly the scale of all the memory in every personal computer on Earth combined. It's plenty for a mainstream modern computer system standard.
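The same arithmetic at 64 bits shows the scale jump (again just a sketch in Python; the "billions of GB" figure assumes one byte per address):

```python
# The 64-bit range, as plain arithmetic.

# Largest value a 64-bit unsigned integer can hold:
max_u64 = 2**64 - 1
print(max_u64)             # 18446744073709551615, ~18.4 quintillion

# How many times bigger that range is than the 32-bit one --
# this is the "~4 billion x ~4 billion" in the post:
print(2**64 // 2**32)      # 4294967296, i.e. ~4.3 billion times bigger

# Addressable memory with 64-bit byte addresses, measured in GB:
print(2**64 // 10**9)      # ~18.4 billion GB
```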

/r/explainlikeimfive Thread