Tuesday, May 19

HBM explained: Can stacked memory give AMD the edge it needs?

At AMD's Financial Analyst Day earlier this month (which was actually more interesting than it initially sounds), AMD finally confirmed that it was looking to use high-bandwidth memory (HBM) in an upcoming high-end GPU product. Unfortunately, the company gave away few specifics, other than that HBM uses a form of 3D stacked memory, and that it'll (of course) vastly increase performance while still reducing power consumption.

Stacked memory itself isn't an entirely new technology, but AMD's implementation—which gives its GPUs access to much more memory bandwidth—is a big step forward for a graphics card market that's rapidly approaching the limits of GDDR5. With Nvidia also looking to incorporate a form of HBM in its 2016 Pascal architecture, you're going to be hearing a lot more about this new memory technology over the coming year.

Why do we need HBM?

A suitable replacement for the hard-working but ageing synchronous dynamic random-access memory (SDRAM) standard has been a long time coming. While the current DDR3 memory standards—as well as offshoots like GDDR5—have been serving the CPU and GPU well, they're starting to show signs of being based on early-'90s technology. Essentially, each revision of SDRAM builds on the same two principles as the original technology: it synchronizes memory to a system bus (allowing it to queue up one operation while waiting for another), and—the double data rate (DDR) part—it transfers data on both the rising and falling edges of the clock signal, doubling the effective transfer rate for a given clock speed.
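As a rough illustration of the arithmetic behind that DDR trick, peak memory bandwidth is simply the per-pin data rate multiplied by the width of the bus. The clock and bus-width figures in this sketch are assumed for illustration only—they aren't taken from AMD's announcement:

```python
# Illustrative sketch (not from the article): how peak memory bandwidth
# falls out of per-pin data rate and bus width. All figures below are
# hypothetical, GDDR5/HBM-class numbers chosen for round results.

def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return gbps_per_pin * bus_width_bits / 8

# A plain DDR interface moves 2 bits per pin per clock cycle (one on the
# rising edge, one on the falling edge), so a 500 MHz clock yields
# 1 Gbps per pin.
ddr_gbps_per_pin = 0.5 * 2  # clock in GHz * transfers per clock

# A narrow-but-fast GDDR5-style setup: 7 Gbps per pin over a 384-bit bus.
print(bandwidth_gb_s(7.0, 384))              # 336.0 GB/s

# A wide-but-slow stacked-memory-style setup: 1 Gbps over a 1024-bit bus.
print(bandwidth_gb_s(ddr_gbps_per_pin, 1024))  # 128.0 GB/s
```

The comparison hints at the trade-off the rest of the article explores: GDDR5 chases bandwidth with very high per-pin clocks, while stacked memory gets there by making the bus dramatically wider.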
