Why has CPU progress slowed to a crawl?

(programmer here) The workload allocation problem, ELI5: let's say you want to bake a cake. The steps are:

1. Pick up the required materials
2. Measure the right amount of each (e.g. weight/volume)
3. Mix them
4. Put the mixture into a pan
5. Put the pan into the oven
6. Make it look nice with the icing

What would it look like in parallel? (Each step starts as soon as a core is available, so the actual order varies from run to run.)

- Mix them -> Error, needs measured materials
- Make it look nice with the icing -> Error, icing on the table
- Put the pan into the oven -> Successful
- Put the mixture into the pan -> Error, nothing to put there
- Pick up the required materials -> Successful
- Measure the right amount of each -> Successful

As you can see, the kitchen is covered in icing, the materials are sitting in their cups unmixed, and you have a hot, empty pan. The problem is that an automated solution could only track part of these dependencies:

- Mixing needs measured materials.
- Measuring needs picked-up materials.
- Putting the pan into the oven needs nothing (but a pan).
- Putting the mixture into the pan needs a pan (let's say we always have one).
- Icing needs something in the pan (that's about as far as the automated solution would get :/ ).
- Picking up the required materials needs nothing (let's say we have them).

This would lead to:

- The pan would be in the oven from the start (empty).
- The icing could be poured on top of the unbaked mixture.
- The mixture would be half-baked, because the timer may start when the pan goes into the oven, not when the mixture arrives.

So, at the end of the day, no, you would dislike the result something like 1 out of 6 times (haven't done the calculations, please don't throw rocks at me).

About the "CPU capabilities" part: that is a really fishy area. In reality, nobody in industry likes low-level coding, because it can cause more problems than it solves. (Some companies make money out of it, because they are willing to risk some bugs for the performance.)
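The cake example above can be sketched with a dependency-aware scheduler. This is a minimal sketch (the step names are mine, not from any real library) that feeds the *incomplete* dependency graph described above — the one the automated solution would see — to Python's standard-library `graphlib.TopologicalSorter` and prints which steps it would happily run at the same time:

```python
# Sketch of the cake example: a scheduler only knows the dependencies
# it was told about. These are the INCOMPLETE ones from the text above
# (baking "needs nothing but a pan", icing only needs a filled pan...).
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# step -> set of steps it is known to depend on (hypothetical names)
deps = {
    "pick_up_materials": set(),
    "measure_materials": {"pick_up_materials"},
    "mix":               {"measure_materials"},
    "fill_pan":          set(),          # "needs a pan" - misses the mixture!
    "bake":              set(),          # "needs nothing but a pan" - ditto!
    "ice":               {"fill_pan"},   # misses "must be baked first"!
}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # each batch could run on separate cores
    batches.append(ready)
    print("can run together:", ready)
    ts.done(*ready)
```

The very first batch contains `bake` and `fill_pan` alongside `pick_up_materials`, and `ice` comes out before `mix` — exactly the hot empty pan and icing-on-raw-batter chaos described above. With the *complete* dependency chain, every batch shrinks to a single step, which is the real point: this recipe is inherently sequential, so more cores don't help.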
Low-level coding is required for some nasty performance and precision boosts. AMD and Intel are always racing for more capabilities, called instruction sets[1]. These instructions are highly optimized, physically integrated solutions for problems like mathematical functions and memory management for complex data exchanges. To implement these instructions and use them in a robust way, you need both: competent compilers and optimizable code. Programmers tend to know less and less about this aspect, and it also needs a corporate background where your bosses will adapt to new technologies. (There were many examples in the past where leadership blocked the C to C++ conversion, and that led to catastrophic solutions which later condemned their future.) Also worth noting: programmers are not omniscient. They need to learn and practice till death (no joke). About every year there are at least 2 major changes in information technology (some examples: cloud, SSDs, multicore CPUs, DirectX versions, Windows versions, database changes, website/web-application changes with .NET (or other library) changes). These changes are so big that the documentation is usually more than can be learned and practiced well from start to end before the next big thing comes out... If there is good leadership who understands the backbone of this system (programming and computer tech in general) and they have the money, then you can see mind-blowing solutions. But in reality, usually only one of those is available at any time.

//somewhat offtopic example

A mind-blowing thing, for example: Blender3D, a FREE open-source 3D modelling program. The fantastic solution here is that they have implemented a GPGPU renderer. Why is that such a big thing?
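To make the "competent compiler + optimizable code" point concrete, here is a rough illustration (the function name `slow_sum` is mine): the same reduction written as an interpreted Python loop versus handed to the builtin `sum`, whose loop was compiled once to machine code and is therefore free to use whatever the compiler could emit for the target CPU:

```python
# Illustrative only: the heavy lifting either stays in interpreted
# bytecode, or is delegated to a routine a compiler already optimized.
from timeit import timeit

data = list(range(1_000_000))

def slow_sum(xs):
    total = 0
    for x in xs:       # one interpreted bytecode iteration per element
        total += x
    return total

t_loop    = timeit(lambda: slow_sum(data), number=10)
t_builtin = timeit(lambda: sum(data), number=10)  # loop runs in compiled C

assert slow_sum(data) == sum(data)               # same answer either way
print(f"interpreted loop: {t_loop:.3f}s, builtin sum: {t_builtin:.3f}s")
```

The builtin wins by a wide margin on typical machines, for the same reason hand-tuned instruction-set code wins at the native level: the work was pushed down to where the optimizer can see it.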
Let's count:

- 1 CPU -> 8 cores at 4.5 GHz -> 8 x 4.5 GHz = 36 GHz
- 1 GPU (Nvidia Titan or 980) -> ~1500 cores at 1.2 GHz -> ~1800 GHz

(This calculation is not precise, but it gives some kind of understanding.) So when we had to render a trailer on CPU, it took about 221 hours. On GPU it would be ~5 hours. And this is just software which you can compile with your own compiler at any time, so it will always use every instruction set. (Not to mention the GPU calculation part, which is rarely implemented this well by other software.)

//end of example

If programmers had much more time, we could develop some fancy things :)
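The back-of-the-envelope arithmetic above checks out, with the caveat already noted (a GPU core is far simpler than a CPU core, so adding clock speeds is only a crude heuristic):

```python
# Rough throughput comparison using the numbers from the example above.
cpu_throughput = 8 * 4.5        # 8 cores x 4.5 GHz  = 36 "GHz"
gpu_throughput = 1500 * 1.2     # ~1500 cores x 1.2 GHz = 1800 "GHz"
speedup = gpu_throughput / cpu_throughput   # 1800 / 36 = 50x

cpu_render_hours = 221          # the CPU render time quoted above
gpu_render_hours = cpu_render_hours / speedup
print(f"speedup: ~{speedup:.0f}x, GPU render: ~{gpu_render_hours:.1f} h")
# ~4.4 h, i.e. roughly the ~5 hours quoted
```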

/r/askscience Thread