
SK hynix Now Sampling 24GB HBM3 Stacks, Preparing for Mass Production
When SK hynix initially announced its HBM3 memory portfolio in late 2021, the company said it was developing both 8-Hi 16GB memory stacks as well as more technically complex 12-Hi 24GB memory stacks. Now, nearly 18 months after that initial announcement, SK hynix has finally begun sampling its 24GB HBM3 stacks to a number of customers, with an aim toward mass production and market availability in the second half of the year. All of which should be a very welcome development for SK hynix's downstream customers, many of whom are champing at the bit for more memory capacity to meet the needs of large language models and other high-end computing uses.
Based on the same technology as SK hynix's existing 16GB HBM3 memory modules, the 24GB stacks are designed to further improve on the density of the overall HBM3 memory module by increasing the number of DRAM layers from 8 to 12, adding 50% more layers for 50% more capacity. This is something that has been in the HBM specification for quite some time, but it has proven difficult to pull off, as it requires making the extremely thin DRAM dies in a stack even thinner in order to squeeze more in.
Standard HBM DRAM packages are typically 700 to 800 microns high (Samsung claims its 8-Hi and 12-Hi HBM2E are 720 microns high), and, ideally, that height needs to be maintained in order for these denser stacks to be physically compatible with existing product designs, and to a lesser extent to avoid towering over the processors they're paired with. Consequently, to pack 12 memory devices into a standard KGSD, memory manufacturers must either shrink the thickness of each DRAM layer without compromising performance or yield, reduce the space between layers, lower the base layer, or introduce a combination of all three measures.
While SK hynix's latest press release offers limited details, the company has apparently opted to scale down the DRAM dies and the space between them with an improved underfill material. For the DRAM dies themselves, SK hynix has previously stated that they were able to shave their die thickness down to 30 microns. Meanwhile, the improved underfill material on their 12-Hi stacks is being provided as part of the company's new Mass Reflow Molded Underfill (MR-MUF) packaging technology. This technique involves bonding the DRAM dies together all at once via the reflow process, while simultaneously filling the gaps between the dies with the underfill material.
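To put some rough numbers to that height budget, here is a back-of-the-envelope sketch. The 720-micron package height and the 30-micron DRAM die thickness come from the figures cited above; the base die thickness and per-layer gap are illustrative assumptions, not anything SK hynix has disclosed.

```python
# Back-of-the-envelope height budget for an HBM stack.
# PACKAGE_HEIGHT_UM and DRAM_DIE_UM come from figures cited in the article;
# BASE_DIE_UM and GAP_UM are illustrative assumptions, not SK hynix specs.

PACKAGE_HEIGHT_UM = 720   # typical HBM package height (Samsung's HBM2E figure)
DRAM_DIE_UM       = 30    # thinned DRAM die thickness reported by SK hynix
BASE_DIE_UM       = 50    # assumed base/logic die thickness (illustrative)
GAP_UM            = 10    # assumed microbump + underfill gap per layer (illustrative)

def stack_height(layers: int) -> int:
    """Total height of the DRAM layers plus base die, in microns."""
    return BASE_DIE_UM + layers * (DRAM_DIE_UM + GAP_UM)

for layers in (8, 12):
    used = stack_height(layers)
    print(f"{layers}-Hi stack: {used} um used, {PACKAGE_HEIGHT_UM - used} um of headroom")
```

Under these assumed numbers, even a 12-Hi stack fits comfortably within a 720-micron package; the practical difficulty lies in thinning and handling the dies without hurting yield, not in the arithmetic.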
SK hynix calls their improved underfill material "liquid Epoxy Molding Compound", or "liquid EMC", which replaces the non-conductive film (NCF) used in older generations of HBM. Of particular interest here, besides the thinner layers this allows, according to SK hynix liquid EMC offers twice the thermal conductivity of NCF. Keeping the lower layers of stacked chips reasonably cool has been one of the biggest challenges with chip stacking technology of all sorts, so doubling the thermal conductivity of their fill material marks a significant improvement for SK hynix. It should go a long way toward making 12-Hi stacks more viable by better dissipating heat from the well-buried lowest-level dies.
Assembly aside, the performance specifications for SK hynix's 24GB HBM3 stacks are identical to their existing 16GB stacks. That means a maximum data transfer rate of 6.4Gbps/pin running over a 1024-bit interface, providing a total bandwidth of 819.2 GB/s per stack.
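For readers who want to see where that headline figure comes from, the arithmetic is straightforward; the quick sketch below simply multiplies the per-pin data rate by the interface width and converts bits to bytes.

```python
# Deriving the quoted per-stack bandwidth: 6.4 Gbps per pin over a 1024-bit interface.
PIN_RATE_GBPS   = 6.4     # data rate per pin, in gigabits per second
INTERFACE_WIDTH = 1024    # bits (pins) per HBM3 stack

total_gbps = PIN_RATE_GBPS * INTERFACE_WIDTH   # 6553.6 Gb/s across the stack
total_gbs  = total_gbps / 8                    # convert gigabits to gigabytes
print(f"{total_gbs} GB/s per stack")           # prints 819.2 GB/s
```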
Ultimately, all of the assembly difficulties with 12-Hi HBM3 stacks should be more than justified by the benefits that the additional memory capacity brings. SK hynix's leading customers are already employing 6+ HBM3 stacks on a single product in order to deliver the total bandwidth and memory capacities they deem necessary. A 50% increase in memory capacity, in turn, would be a significant boon to products such as GPUs and other forms of AI accelerators, especially as this era of large language models has seen memory capacity become a bottlenecking factor in model training. NVIDIA is already pushing the envelope on memory capacity with their H100 NVL, a specialized 96GB H100 SKU that enables previously-reserved memory, so it's easy to see how they would be eager to offer 120GB/144GB H100 parts using 24GB HBM3 stacks.
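As a quick illustration of how those SKU capacities fall out of stack count times per-stack capacity, the snippet below runs the numbers; the specific stack counts are assumptions for illustration, not configurations NVIDIA has detailed.

```python
# Capacity = number of HBM3 stacks x capacity per stack.
# Stack counts here are illustrative assumptions, not confirmed product configurations.
for stacks, gb_per_stack in [(6, 16), (5, 24), (6, 24)]:
    print(f"{stacks} x {gb_per_stack}GB HBM3 = {stacks * gb_per_stack}GB total")
# 6 x 16GB = 96GB, 5 x 24GB = 120GB, 6 x 24GB = 144GB
```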
Source: SK Hynix