Samsung has made a spectacular return to Nvidia's good graces, breaking a streak of failures in the high-bandwidth memory segment. According to the latest reports from Korea, the company has passed all verification stages and will be one of the first suppliers of HBM4 modules for Nvidia's upcoming AI architecture, Vera Rubin. This is a major win for the Korean giant, which just a few months ago was struggling with technological issues and investor skepticism after its previous solutions were rejected by Jensen Huang.
Record Speeds
The key to success was performance that goes beyond current market standards. Samsung's new HBM4 memory offers a data transfer rate exceeding 11 Gbps per pin, a key requirement Nvidia set for its new generation of artificial-intelligence systems. Samsung also leveraged a unique advantage: it manufactures the key logic components in its own fabs on a 4 nm process, which allows for tighter optimization and ensures stable supply without relying on external subcontractors.
First Deliveries Sooner Than You Think
The rollout schedule is extremely aggressive and promises a hot summer for the tech industry. The first batches of HBM4 memory are set to reach Nvidia as early as next month, and finished servers based on Vera Rubin chips are expected to begin shipping to the world's largest data centres as soon as August 2026. The full power of this new collaboration is set to be officially showcased at the GTC conference, where Samsung and Nvidia will jointly demonstrate how the new memory bandwidth will impact the development of the most advanced AI models.
Will this success for Samsung completely change the situation in the memory market? Probably not. But it is a glimmer of hope for the entire industry and a potential solution to the memory availability issues for new AI-based chips.
Katarzyna Petru