AMD's memory patent outlining a 'new, improved RAM' made from DDR5 memory isn't a new development — HB-DIMMs already superseded, probably won't come to market
A recent AMD patent is making the rounds, but it's just a continuation of earlier work.

A recent AMD patent, US19/201,497, titled "High-bandwidth memory module architecture," is making the rounds again, with posts like this one on Reddit prompting all kinds of speculation about the "new" HB-DIMM memory technology AMD is supposedly preparing. However, AMD is probably not preparing anything new; instead, the patent is a continuation of an older one that dates to 2022, and the technology it outlines has already been superseded by the MRDIMM tech that's already shipping and that AMD has committed to support.
The original patent, known as US12300346B2 and also titled "High-bandwidth memory module architecture," was superseded in 2023 by US20230178121A1, which is the specific document extended by the "new" filing that was actually published back in July. In other words, by no means is this a novel technology, and AMD's recent activity with these patents is almost assuredly a form of "bureaucratic housekeeping" — paper shuffling intended to help protect AMD's intellectual property.
So, what are these patents actually about? HB-DIMMs, a new type of memory module that performs multiplexed accesses over "two or more independently addressable pseudo-channels" within a single module. These pseudo-channels are not necessarily analogous to memory ranks; rather, they're separate divisions within the module, and they can sit within a single rank or span multiple ranks.
By doing this, you can double the effective transfer rate of standard DDR5 DRAM, although you need new modules with extra components, including additional data buffers and an RCD, or Register Clock Driver. In essence, HB-DIMMs would supersede both standard RDIMMs and CUDIMMs.
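To make that doubling concrete, here's a quick back-of-the-envelope sketch. The 6,400 Mbps DRAM-side rate and 64-bit channel width are illustrative assumptions for a standard DDR5 setup, not figures mandated by the patent:

```python
# Back-of-the-envelope sketch of the pseudo-channel multiplexing idea.
# The specific rates are assumptions for illustration only.

DRAM_SIDE_RATE_MTS = 6_400   # per-pin rate of each pseudo-channel (DDR5)
PSEUDO_CHANNELS = 2          # independently addressable pseudo-channels
BUS_WIDTH_BITS = 64          # standard DDR5 channel width

# The module's data buffers interleave transfers from both pseudo-channels
# onto the host interface, effectively doubling the host-side pin rate.
host_side_rate_mts = DRAM_SIDE_RATE_MTS * PSEUDO_CHANNELS

bandwidth_gbs = host_side_rate_mts * 1e6 * BUS_WIDTH_BITS / 8 / 1e9
print(f"Host-side rate: {host_side_rate_mts} MT/s")        # 12800 MT/s
print(f"Per-channel bandwidth: {bandwidth_gbs:.1f} GB/s")  # 102.4 GB/s
```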
Multiple companies independently developed this idea; SK hynix pursued it in collaboration with Intel and Renesas, presenting a proposal called MCR-DIMM in late 2022. That was after AMD had already filed its patent on HB-DIMM, although the filing wasn't public at that time. It wouldn't do to have two competing, incompatible standards, so JEDEC got together with the two to standardize what we now have as MRDIMMs, or Multiplexed-Rank Dual Inline Memory Modules. MRDIMMs consolidate the ideas of MCR-DIMM and HB-DIMM into one standard form.
MRDIMMs aren't a hypothetical future technology. They've been available on the market for upwards of a year now, as they're supported by Intel's Xeon 6 family of server CPUs (codenamed Granite Rapids). Phoronix recently tested the performance gains against standard DDR5 RDIMMs at 6400 Mbps, and while the overall gains were small, certain memory-intensive workloads like the High Performance Conjugate Gradient (HPCG) benchmark saw significant gains, and memory latency even showed a tiny, margin-of-error improvement.
Does this recent patent filing indicate that AMD will still pursue HB-DIMM? Probably not. Fundamentally, the technologies are very similar, and AMD has already voiced its intent to support JEDEC's MRDIMM open standard. The company's Zen 6-based EPYC 'Venice' processors are expected to use MRDIMMs — potentially second-generation MRDIMMs — to reach the lofty 1.6 TB/second per-socket memory bandwidth spec that Dr. Lisa Su teased at AMD's Advancing AI event in June.
If that was indeed meant to be a single-socket bandwidth figure, then with the rumored sixteen-channel (1024-bit) memory interface of 'Venice', a transfer rate of 12,800 Mbps would get you right to 1.6 TB/second. That happens to be exactly the transfer rate JEDEC has promised for second-generation MRDIMMs, hence the speculation.
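For those who want the arithmetic spelled out, here's a quick check. The channel count and transfer rate are the rumored and spec figures above, not anything AMD has confirmed:

```python
# Checking the 'Venice' bandwidth math: 16 DDR5 channels at the
# second-gen MRDIMM rate. Both figures are rumored/spec values, not
# confirmed by AMD.

CHANNELS = 16                # rumored 'Venice' memory channel count
CHANNEL_WIDTH_BITS = 64      # 16 x 64-bit = 1024-bit total interface
TRANSFER_RATE_MTS = 12_800   # JEDEC's second-gen MRDIMM target

bytes_per_transfer = CHANNELS * CHANNEL_WIDTH_BITS // 8    # 128 bytes
bandwidth_tbs = TRANSFER_RATE_MTS * 1e6 * bytes_per_transfer / 1e12
print(f"{bandwidth_tbs:.2f} TB/s")   # -> 1.64 TB/s, i.e. ~1.6 TB/s
```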
Those hot-clocked DDR5 modules won't come cheap, though. Existing MRDIMMs are already extremely expensive, with a cost per gigabyte between 28% and 114% higher than standard, slower-clocked DDR5 RDIMMs, depending on the specific comparison. That's retail pricing, which usually isn't what server operators pay, but the point stands: you're looking at easily $100 to $150 extra per module, which is brutal when you need to fill eight, ten, twelve, or sixteen memory channels in a single server. The second-gen 12,800 Mbps modules will likely cost even more.
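To put that premium in per-socket terms, here's a rough tally using the retail figures above; one module per channel is an assumption made for simplicity:

```python
# Rough per-socket cost premium using the article's retail figures.
# Real server pricing differs; this just shows how per-module premiums
# multiply across channels (one module per channel assumed).

PREMIUM_LOW, PREMIUM_HIGH = 100, 150   # extra USD per module vs. RDIMM
CHANNEL_COUNTS = (8, 12, 16)

for channels in CHANNEL_COUNTS:
    low, high = PREMIUM_LOW * channels, PREMIUM_HIGH * channels
    print(f"{channels} channels: ${low:,} to ${high:,} extra per socket")
# -> 8 channels: $800 to $1,200 extra per socket
# -> 12 channels: $1,200 to $1,800 extra per socket
# -> 16 channels: $1,600 to $2,400 extra per socket
```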
So, in summary: No, AMD (probably) isn't about to whip out some new high-speed memory technology. This stuff has been talked about for a few years already, and in fact, we covered HB-DIMM and MRDIMMs right here on this very website in the past. Here's hoping the technology filters down to consumer systems, particularly if those rumors about AMD's new chiplet APUs are true.

Zak is a freelance contributor to Tom's Hardware with decades of PC benchmarking experience who has also written for HotHardware and The Tech Report. A modern-day Renaissance man, he may not be an expert on anything, but he knows just a little about nearly everything.
-
bit_user
I thought that Intel's "MR" DIMMs were basically MCR-DIMMs and that they did an end-run around the JEDEC standardization process for MR-DIMMs.

The article said:
"Multiple companies independently developed this idea; SK hynix pursued it in collaboration with Intel and Renesas, presenting a proposal called MCR-DIMM in late 2022. That was after AMD had already filed its patent on HB-DIMM, although the filing wasn't public at that time. It wouldn't do to have two competing, incompatible standards, so JEDEC got together with the two to standardize what we now have as MRDIMMs, or Multiplexed-Rank Dual Inline Memory Modules. MRDIMMs consolidate the ideas of MCR-DIMM and HB-DIMM into one standard form."
The GeoMean improved by 8.3%, which is not something I think most datacenter operators would consider small. I think < ~2% would be small.

The article said:
"Phoronix recently tested the performance gains against standard DDR5 RDIMMs at 6400 Mbps and while the overall gains were small"
However, power consumption increased 15.1%. So, if you tweaked turbo and power limits to run at about equal power, then I'd bet the performance gains might indeed drop to the 2% ballpark.
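For what it's worth, here's that efficiency math spelled out, using the cited Phoronix figures; it's only a first-order estimate:

```python
# Perf-per-watt check using the Phoronix figures cited above:
# +8.3% geomean performance at +15.1% power.

perf_gain = 1.083
power_gain = 1.151

perf_per_watt = perf_gain / power_gain
print(f"Perf/W vs. RDIMM baseline: {perf_per_watt:.3f}")   # -> 0.941
# MRDIMMs delivered roughly 5.9% worse perf/W in that test, which is
# why an iso-power comparison could plausibly shrink the gains.
```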
Something else to note about Phoronix's tests is that he runs one instance of the program, unless it's something like compiling, which is fundamentally multi-process. So, he does a better job of characterizing developer or HPC workloads, but misses a lot of datacenter scenarios that involve lots of VMs. In the many-VM scenarios, you basically have bunches of independent workloads and kernels, which scale much more linearly and might benefit even more from additional memory bandwidth.
Yeah, MR-DIMMs were discussed over the past years and Intel's aggressive adoption of similar tech gave Granite Rapids one axis on which it surpassed the Turin (Zen 5) EPYCs. So, it was abundantly clear AMD would have to answer Intel by doing similar.

The article said:
"So, in summary: No, AMD (probably) isn't about to whip out some new high-speed memory technology. This stuff has been talked about for a few years already,"
Incidentally, I think the move to 16 channels is probably as much about capacity as it is about bandwidth. I fully expect they'll be dropping 2DPC as something they even nominally support, in the next generation of EPYCs. Without that, the only way to add more capacity is by adding more channels (or more DRAM per DIMM).
And on the client front, I fully expect Zen 6 to offer native support for CU-DIMM. In moving up to a maximum spec of 24 cores per CPU (2x 12-core CCDs), they'll sure need a jump in bandwidth. AMD has previously claimed bandwidth constraints were the main reason why Ryzen hasn't gone above 16 cores. I can dig up a citation on this, if anyone really wants it.