- SPHBM4 significantly reduces the number of pins while maintaining hyperscale-class bandwidth performance.
- Organic substrates reduce packaging costs and ease routing constraints in HBM designs.
- Serialization shifts the complexity down into the silicon itself, into the signaling circuitry and the base logic die.
High Bandwidth Memory (HBM) has evolved around extremely wide parallel interfaces, a design choice that imposes constraints on both performance and cost.
HBM3 uses 1,024 pins per stack, which already pushes dense silicon interposers and advanced packaging toward their limits.
The JEDEC Solid State Technology Association is developing an alternative known as Standard Package High Bandwidth Memory 4 (SPHBM4), which reduces the physical interface width while maintaining overall bandwidth.
HBM4 doubles the HBM3 interface
The standard HBM4 specification doubles the HBM3 interface width to 2,048 pins, with data signals carried on every pin in parallel to raise overall throughput.
This scaling approach improves throughput, but also increases routing complexity, substrate requirements, and manufacturing costs, which is a concern for system designers.
The planned SPHBM4 device instead uses 512 pins with 4:1 serialization, operating at a higher signaling rate.
In terms of throughput, one SPHBM4 pin is expected to carry the equivalent workload of four HBM4 pins.
This approach shifts the complexity from raw pin count to signaling technology and base logic design.
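As a rough sanity check on that trade-off, the sketch below compares aggregate per-stack bandwidth under the two pin counts. Only the pin counts and the 4:1 serialization ratio come from JEDEC's description; the 8 Gb/s per-pin rate assumed for HBM4 is an illustrative placeholder, not a figure from the specification.

```python
# Sketch of the pin-count vs. signaling-rate trade-off described above.
# Pin counts and the 4:1 ratio follow the article; the 8 Gb/s per-pin rate
# for HBM4 is an illustrative assumption, not a quoted spec value.

HBM4_PINS = 2048
SPHBM4_PINS = 512
SERIALIZATION = 4                       # one SPHBM4 pin replaces four HBM4 pins

hbm4_rate_gbps = 8.0                    # assumed per-pin data rate
sphbm4_rate_gbps = hbm4_rate_gbps * SERIALIZATION

def stack_bandwidth_tbs(pins: int, rate_gbps: float) -> float:
    """Aggregate stack bandwidth in terabytes per second."""
    return pins * rate_gbps / 8 / 1000   # Gb/s -> GB/s -> TB/s

print(f"HBM4:   {HBM4_PINS} pins x {hbm4_rate_gbps:.0f} Gb/s -> "
      f"{stack_bandwidth_tbs(HBM4_PINS, hbm4_rate_gbps):.2f} TB/s")
print(f"SPHBM4: {SPHBM4_PINS} pins x {sphbm4_rate_gbps:.0f} Gb/s -> "
      f"{stack_bandwidth_tbs(SPHBM4_PINS, sphbm4_rate_gbps):.2f} TB/s")
```

Under these assumptions both devices land on the same aggregate figure; the work simply moves from wide, slow parallel lanes to narrower, faster serial ones.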
Reducing the number of connections allows a wider pitch between the bumps, which directly affects the packaging options.
JEDEC says this relaxed bump pitch makes it possible to attach the stacks to organic substrates rather than silicon interposers.
Silicon interposers support very high interconnect densities, with pitches on the order of 10 micrometers, while organic substrates typically operate closer to 20 micrometers and are cheaper to manufacture.
As a result, the intermediate layer connecting the memory stack, its base logic die, and the accelerator can move from a silicon interposer to an organic substrate.
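A rough way to see why the relaxed pitch matters is to look at the edge length needed to escape all the signals in a single row. The 10 µm and 20 µm figures are the pitches quoted above; the single-row escape model is a deliberate simplification.

```python
# Back-of-the-envelope escape-routing math for the pitch comparison above.
# Pitches are the figures quoted in the article; a single-row escape is a
# simplifying assumption (real packages stagger bumps over multiple rows).

def edge_length_mm(signals: int, pitch_um: float) -> float:
    """Edge length needed to break out `signals` connections in one row."""
    return signals * pitch_um / 1000.0

print(f"HBM4 on silicon (~10 um pitch):   {edge_length_mm(2048, 10):.1f} mm")
print(f"SPHBM4 on organic (~20 um pitch): {edge_length_mm(512, 20):.1f} mm")
```

Even at twice the pitch, the 512-pin interface needs roughly half the escape length, which is the headroom that makes the cheaper organic substrate workable.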
HBM4 and SPHBM4 devices are expected to offer the same memory capacity per stack, at least at the specification level.
However, mounting on an organic substrate makes it possible to increase the length of the channels between the accelerator and memory stacks.
This configuration could allow more SPHBM4 stacks per package, increasing the total memory capacity compared with conventional HBM4 layouts.
Achieving this requires a redesigned base logic die, since each SPHBM4 stack exposes a quarter of the pins of an HBM4 stack.
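One simplified way to picture that headroom is to divide a fixed accelerator-side memory pin budget by the per-stack pin count. The budget below is purely hypothetical; only the 2,048-versus-512 per-stack pin counts come from the specifications discussed above.

```python
# Hypothetical accelerator-side memory pin budget, used only to illustrate
# why a fourfold reduction in per-stack pins can translate into more stacks.
ACCEL_MEMORY_PIN_BUDGET = 16_384        # assumed figure, not from any spec

hbm4_stacks = ACCEL_MEMORY_PIN_BUDGET // 2048    # -> 8 stacks
sphbm4_stacks = ACCEL_MEMORY_PIN_BUDGET // 512   # -> 32 stacks

print(f"HBM4 stacks within budget:   {hbm4_stacks}")
print(f"SPHBM4 stacks within budget: {sphbm4_stacks}")
```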
HBM is not general-purpose memory and is not intended for consumer systems.
Its use remains concentrated in artificial intelligence accelerators, high-performance computing, and GPUs deployed in hyperscaler data centers.
These customers operate at a scale where memory bandwidth directly impacts revenue efficiency, justifying continued investment in expensive memory technologies.
SPHBM4 does not change this usage model because it retains HBM-class throughput and capacity while optimizing system-level cost structures that are important primarily for hyperscale deployments.
Despite the mentions of lower cost, SPHBM4 does not point the way toward consumer RAM markets.
Even when using organic substrates, SPHBM4 remains a stacked memory with a dedicated base logic die and tight coupling to accelerators.
These characteristics are incompatible with consumer DIMM-based memory architectures, pricing expectations, and motherboard designs.
Any cost reductions are applied within the HBM ecosystem itself, rather than across the broader memory market.
However, for SPHBM4 to become a viable standard, support from major vendors is needed.
“JEDEC members are actively shaping the standards that will define the next generation of modules for use in artificial intelligence data centers…” said Mian Quddus, Chairman of the JEDEC Board of Directors.
Major suppliers, including Micron, Samsung, and SK Hynix, are JEDEC members and are already developing HBM4E technologies.
“Our #NuLink D2D/D2M #interconnect solution has demonstrated the ability to achieve 4TB/s throughput in a standard package, which is almost double the throughput required by… the HBM4 standard, so we look forward to leveraging the work JEDEC has done with SPHBM4…,” said Eliyan, a die-to-die interconnect company.
Via Blocks and Files