Where the deficit in compute units will hurt most: ray tracing.

No it doesn't. A 2 percent drop brings it to around 2.18-2.20 GHz.
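
For reference, here's the arithmetic on a few-percent drop from the 2.23 GHz peak; a quick sketch, assuming nothing beyond Sony's published peak clock:

```python
peak_ghz = 2.23  # PS5 GPU peak clock, per Sony's published spec

# Resulting clock after an N percent drop from peak
for drop_pct in (1, 2, 3):
    clock_ghz = peak_ghz * (1 - drop_pct / 100)
    print(f"{drop_pct}% drop -> {clock_ghz:.3f} GHz")
# 1% -> 2.208 GHz, 2% -> 2.185 GHz, 3% -> 2.163 GHz
```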

He said a few percentage points. You're just making up numbers out of thin air. And this is his super optimistic take as a marketer. You don't honestly believe that it will be at boost clock the majority of the time. I know you aren't that dumb.

Cerny mentioned that both the CPU and GPU will run at their max clocks most of the time, dropping a few percentage points only when needed to stay within the power target. That's the whole reason they made the frequency variable: so the GPU can sit at as high a clock as possible most of the time and come down whenever required. You're just assuming without any data.
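
The logic behind that trade-off is easy to sketch with a toy model. Assuming dynamic power scales roughly with frequency cubed (P ~ f * V^2, with voltage tracking frequency); the real curve is Sony's and isn't public, so this is illustrative only:

```python
peak_ghz = 2.23

def relative_power(clock_ghz: float) -> float:
    """Power relative to peak, under the cubic-scaling assumption."""
    return (clock_ghz / peak_ghz) ** 3

for drop_pct in (1, 2, 3):
    clock_ghz = peak_ghz * (1 - drop_pct / 100)
    print(f"{drop_pct}% clock drop -> {relative_power(clock_ghz):.0%} of peak power")
# ~97% / ~94% / ~91%: a small clock drop frees up disproportionate power
# headroom, which is the whole case for a variable clock against a fixed
# power budget
```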

I'm not assuming anything. DF covered this. 2.23 GHz is literally right at the limit for this architecture. They bumped it up just so they could say they have double-digit teraflops. If the GPU runs at boost clock, the CPU will run at a lower frequency, causing performance issues. That's just the way it works.
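
The double-digit point checks out with the standard RDNA FLOPS formula (CUs x 64 shaders per CU x 2 FLOPs per shader per clock); the 2.0 GHz comparison clock below is just an illustrative pick, not a leaked figure:

```python
def teraflops(cus: int, clock_ghz: float) -> float:
    # RDNA: 64 shaders per CU, 2 FLOPs (fused multiply-add) per shader per clock
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5 at peak:    {teraflops(36, 2.23):.2f} TF")  # ~10.28 -> double digits
print(f"PS5 at 2.0 GHz: {teraflops(36, 2.00):.2f} TF")  # ~9.22 -> single digits
```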

TF numbers don't factor in the rasterization advantage. Each compute unit is working at higher efficiency. There's no data to support what you're claiming. A TF difference between 10 and 12 won't even be that big of a deal. If the lower spec were around 2-3 TF, I'd understand your claim.
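
On the rasterization point: fixed-function throughput scales with clock rather than CU count. A sketch assuming the commonly cited 64 ROPs for both GPUs (neither vendor has published ROP counts, so treat this as illustrative):

```python
rops = 64  # assumed for both GPUs; not an official figure

ps5_fillrate = rops * 2.23    # ~142.7 Gpixels/s
xsx_fillrate = rops * 1.825   # ~116.8 Gpixels/s
print(f"PS5 pixel-fill advantage: {ps5_fillrate / xsx_fillrate - 1:.0%}")  # ~22%
```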

DF covered this as well. They pointed out that individual compute units in RDNA 2.0 are much more efficient than in previous architectures, so the advantage these consoles have over the previous gen is much bigger than the TF number suggests. Similarly, because the XSX has a significant advantage in CU count, it has a massive advantage in compute power over the PS5, and we know ray tracing will be heavily dependent on CU count.
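
Putting numbers on the CU argument, using both vendors' published figures; RDNA 2 pairs one ray accelerator with each CU, so the intersection-hardware gap tracks the raw CU gap:

```python
ps5_cus, ps5_ghz = 36, 2.23    # Sony's published spec
xsx_cus, xsx_ghz = 52, 1.825   # Microsoft's published spec

def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

compute_gap = teraflops(xsx_cus, xsx_ghz) / teraflops(ps5_cus, ps5_ghz) - 1
print(f"Compute gap:         {compute_gap:.0%}")            # ~18%

# One ray accelerator per CU in RDNA 2, so:
print(f"Ray accelerator gap: {xsx_cus / ps5_cus - 1:.0%}")  # ~44%
```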
