News · Mar 12, 2026 · NewDecoded

AMD and Samsung Electronics have officially expanded their strategic partnership to advance next-generation artificial intelligence computing. The companies signed a Memorandum of Understanding at Samsung's Pyeongtaek facility to secure high-performance memory for upcoming hardware. Under the agreement, Samsung becomes the primary supplier of HBM4 memory for the AMD Instinct MI455X GPU.
The technological centerpiece of this deal is Samsung's sixth-generation HBM4 memory. Built on a 10nm-class process with a 4nm logic base die, this memory achieves data transfer speeds of 13 Gbps. It offers a staggering 3.3 terabytes per second of bandwidth to meet the intense demands of modern AI workloads. Integrating this memory into the AMD Instinct MI455X provides a significant hardware advantage. The GPU features a 12-stack configuration for a total of 432GB of HBM4 capacity. This massive local memory footprint is designed to eliminate common bottlenecks in AI model training and inference.
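The headline figures above are internally consistent, which a quick back-of-the-envelope calculation can illustrate. The sketch below assumes the JEDEC HBM4 interface width of 2048 bits per stack and an implied 36GB per stack, neither of which is stated explicitly in the article:

```python
# Back-of-the-envelope check of the cited HBM4 figures.
# Assumptions (not from the article): 2048-bit interface per HBM4
# stack (JEDEC HBM4 spec) and a uniform 36GB per stack.

PIN_SPEED_GBPS = 13            # per-pin data rate cited for Samsung's HBM4
INTERFACE_BITS = 2048          # HBM4 interface width per stack (assumption)
STACKS = 12                    # stacks per MI455X GPU, per the article
GB_PER_STACK = 432 // STACKS   # implied capacity per stack: 36GB

# Per-stack bandwidth: pins x per-pin rate, gigabits -> terabytes.
bandwidth_tbps = PIN_SPEED_GBPS * INTERFACE_BITS / 8 / 1000
print(f"Per-stack bandwidth: {bandwidth_tbps:.2f} TB/s")  # ~3.33 TB/s
print(f"Total capacity: {STACKS * GB_PER_STACK} GB")      # 432 GB
```

Under those assumptions, 13 Gbps across a 2048-bit interface works out to roughly 3.3 TB/s per stack, and twelve 36GB stacks yield the 432GB total the article cites.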
Beyond GPUs, the collaboration extends to the data center server market. Samsung will supply high-performance DDR5 solutions for the 6th Gen AMD EPYC processors, known by the codename Venice. These components will power the AMD Helios platform, a rack-scale architecture built for massive scalability.
The partnership also opens doors for future foundry services. Samsung and AMD are discussing opportunities for Samsung to provide advanced manufacturing for next-generation AMD products. This potential move would diversify AMD's supply chain beyond its current manufacturing partners.
Global infrastructure builders are already preparing to deploy these systems. Companies like HPE and Broadcom are integrating the Helios architecture into their networking stacks. Initial shipments of the MI455X and related hardware are currently on schedule for the second half of 2026. For more information, visit www.amd.com and the Samsung Newsroom.
This alliance represents a decisive move by AMD to secure its AI supply chain against the market dominance held by Nvidia. By locking in Samsung as a primary HBM4 partner, AMD effectively circumvents the supply constraints seen with other major memory manufacturers. The emphasis on providing 432GB of memory capacity suggests a specific strategy to capture the AI inference market where local memory size is paramount. This integration of silicon and system architecture positions AMD as a high-capacity, open alternative to proprietary ecosystems in the sovereign AI and enterprise sectors.