Linus holding the never-before-seen final design of the AMD RX Vega at LTX 2017

I'm not really sure what you mean by "uses traditional mass production techniques for everything".

Traditional: standard PCB, standard BGA packaging for RAM, standard substrate for GPU. Non-traditional: die stacking, TSVs, lots of components on an interposer.

While it's true that HBM, being a much newer technology, is more expensive to make, that's mainly due to additional R&D and QA costs as opposed to a higher cost of materials, which seems to be a common misconception.

Sorry, but that's BS. A GDDR chip uses a simple flip-chip packaged die. It sits on the same kind of PCB used by billions of products.

A single HBM stack requires multiple dies that are first thinned to an ultra-thin thickness. It requires growing TSVs. It requires that all those dies are mounted and bonded together. After that, it requires an additional, very large piece of interposer silicon, and it requires that 3 or 5 dies are bonded onto this interposer. And then this huge interposer needs to be bonded onto an even larger substrate. And all of that must be done without warping, to avoid mechanical stress, etc.

Nobody cares about R&D cost: not AMD (it's a sunk cost), not the customer, not Wall Street. Everybody cares about production cost, the selling price and the gross margin.
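To put some completely made-up numbers on that, here's a quick sketch of why sunk R&D never shows up in the number that actually matters, the gross margin. The BOM figures below are hypothetical, not anyone's real costs:

```python
# Illustrative only: all dollar figures are hypothetical, not real BOM data.
def gross_margin(selling_price: float, unit_cost: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (selling_price - unit_cost) / selling_price

# Same hypothetical card, two memory options.
price = 499.0
cost_with_gddr = 300.0                 # hypothetical unit cost with GDDR
cost_with_hbm = 300.0 - 80.0 + 160.0   # swap an $80 GDDR line item for a $160 HBM + interposer one

print(f"GDDR margin: {gross_margin(price, cost_with_gddr):.1%}")  # ~39.9%
print(f"HBM  margin: {gross_margin(price, cost_with_hbm):.1%}")   # ~23.8%
# R&D spent last year changes neither number; it is sunk.
```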

HBM is way more expensive, plain and simple.

If you use it, you'd better make it worthwhile. In other words: it must give you a huge advantage.

AMD has so far never managed to get that.

[HBM] will eventually come down in price and most likely become the main type of VRAM used for GPUs in the future.

HBM will always require a bunch of production steps that are not required with GDDR. It will never reach cost parity. And thanks to Nvidia's strategic thinking of only using it for very low volume products, it will be a long time before it gains economies of scale. AMD, with low volumes, uses an expensive component for a consumer product. Nvidia does not.

It is the right call…for nVidia, a company that had nothing to do with HBM's creation or development, and being such, they would have to pay the manufacturer's asking price.

Sorry: BS. The idea that AMD will somehow have a price advantage because they wrote the document is magical thinking.

However, what most people overlook is that AMD invented HBM (along with Hynix), and as a co-inventor along with the company that manufactures it (HBM1), they get first dibs on access to it (one reason why they used it first with the Fury) ...

And what a poisoned gift that was. They suckered themselves into using a way too advanced memory technology in a GPU that was too slow.

and I’m sure they get some type of incentive, either from licensing (although they have claimed otherwise) ...

Well, good to hear that you know better than them!

You don't understand how memory technology development works. There is no question that AMD played an important role, but in the end, they only designed the easy part: a GPU that's mounted on an interposer. Hynix did the heavy lifting: die stacking, TSVs, DRAM design.

It is true, however, that HBM uses less power than GDDR5, and with Vega using as much power as it does (understandable, as it has 4096 SPs), it is yet another reason why AMD chose to go with HBM.

They had a hammer and saw everything as a nail. They used brute force: "HBM will solve our power-efficiency problems and our BW problems, and save die size. Oh, and let's add additional ROPs as well, because why not?"

The only redeeming feature of Fury X was that they got HBM to work at all. Too bad the rest of the chip was frankly an embarrassing turd that didn't deserve the kind of memory it used. Vega seems to be going the same way.

Nvidia said, "Why don't we use our brains instead: design an ultra power- and area-efficient core and develop really good memory BW compression, so we don't need that expensive hammer."
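For what it's worth, here's a toy sketch of the general idea behind that kind of lossless delta compression. This is not Nvidia's actual delta color compression algorithm, just an illustration of why neighboring pixels with small differences can be stored in far fewer bits:

```python
# Toy illustration of delta-based framebuffer compression (not any vendor's real scheme).
def compress_tile(pixels: list[int]) -> tuple[int, list[int]]:
    """Store the first pixel verbatim and the rest as deltas from their left neighbor."""
    base = pixels[0]
    deltas = [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]
    return base, deltas

def decompress_tile(base: int, deltas: list[int]) -> list[int]:
    """Rebuild the original pixels by accumulating the deltas (proves it's lossless)."""
    out = [base]
    for d in deltas:
        out.append(out[-1] + d)
    return out

def compressed_bits(deltas: list[int], full_bits: int = 32) -> int:
    """Bits needed if every delta is stored at the widest delta's width (plus sign bit)."""
    widest = max((abs(d).bit_length() + 1 for d in deltas), default=0)
    return full_bits + widest * len(deltas)

tile = [1000, 1002, 1003, 1001, 1004, 1006, 1005, 1007]  # smoothly varying pixel values
base, deltas = compress_tile(tile)
assert decompress_tile(base, deltas) == tile
print(f"raw: {32 * len(tile)} bits, compressed: {compressed_bits(deltas)} bits")  # 256 vs 53
```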

I mean, if you invented something that was better and used less power, and using it yourself ends up promoting it (which is beneficial to you, because the faster it's adopted, the faster the cost comes down), you'd use it too. It really shouldn't be seen as this "mind-bogglingly stupid" decision, now or into the foreseeable future.

That only makes sense if the cost is justified by an overwhelming advantage. AMD forgot that part and got completely outplayed by Nvidia.

AMD is on the right track, ...

What makes you so sure about that?

At least Fury X was roughly on par in the perf/mm² department. Vega promises to fail hard on that. It's well on its way to being a regression in just about everything.
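Rough back-of-the-envelope, using the commonly quoted die sizes and treating performance within each pairing as roughly equal. The Vega pairing is speculative, assuming it lands around GTX 1080 performance:

```python
# Rough perf-per-area comparison; die sizes are the commonly quoted figures,
# and equal performance within each pairing is a coarse assumption, not a benchmark.
dies_mm2 = {"Fiji (Fury X)": 596, "GM200 (980 Ti)": 601,
            "Vega 10": 486, "GP104 (GTX 1080)": 314}

pairs = [("Fiji (Fury X)", "GM200 (980 Ti)"), ("Vega 10", "GP104 (GTX 1080)")]
for amd, nv in pairs:
    # With equal performance assumed, the perf/mm2 ratio reduces to the area ratio.
    ratio = dies_mm2[nv] / dies_mm2[amd]
    print(f"{amd} vs {nv}: AMD delivers ~{ratio:.2f}x the perf/mm2")
# Fiji vs GM200 comes out ~1.01x (roughly on par); Vega 10 vs GP104 ~0.65x.
```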

What doesn't make too much sense, however, is just how much importance some people seem to place on power efficiency in a high-end, enthusiast-level desktop GPU.

If Vega consumes 300 W and only manages a performance level somewhere between a 1080 and a 1080 Ti, where do they go to compete in the next generation? Nvidia can easily go up one step. AMD cannot.
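Back-of-the-envelope again, using Nvidia's reference TDPs (180 W for the 1080, 250 W for the 1080 Ti) and the rumored 300 W for Vega. The relative performance numbers are coarse assumptions, not benchmark results:

```python
# Illustrative headroom calculation; performance is normalized so GTX 1080 = 1.0,
# the 1080 Ti is assumed ~1.3x, and Vega is placed "between the two" per the comment above.
board_power_w = {"GTX 1080": 180, "GTX 1080 Ti": 250, "Vega (rumored)": 300}
rel_perf = {"GTX 1080": 1.0, "GTX 1080 Ti": 1.3, "Vega (rumored)": 1.15}

for card, watts in board_power_w.items():
    ppw = rel_perf[card] / watts * 1000  # perf per kW, arbitrary units
    print(f"{card}: {ppw:.1f} perf/kW at {watts} W")
# If Vega already sits near the practical ~300 W ceiling, the next part has nowhere
# left to go in the power budget; Nvidia at 180-250 W can simply spend more watts.
```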
