Samsung has made a spectacular return to Nvidia's good graces, breaking a streak of setbacks in the high-bandwidth memory segment. According to the latest reports from Korea, the company has passed all verification stages and will be one of the first suppliers of HBM4 modules for Nvidia's upcoming AI architecture, Vera Rubin. This is a significant achievement for the Korean giant, which just a few months ago was grappling with technical problems and investor skepticism after its previous HBM solutions were rejected by Jensen Huang.
Record Speeds
The key to success was a set of specifications that exceed current market standards. Samsung's new HBM4 memory delivers a data transfer speed exceeding 11 Gbps per pin, a crucial requirement from Nvidia for its new generation of artificial intelligence systems. Samsung also leveraged a unique advantage: it produces the key logic components in its own fabs on a 4 nm process, enabling tighter optimisation and guaranteeing stable supply without relying on external subcontractors.
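To put the 11 Gbps per-pin figure in context, a quick back-of-envelope calculation shows roughly what it implies per memory stack. The interface width is an assumption here: the article doesn't state it, so the sketch below uses the JEDEC HBM4 baseline of a 2048-bit interface per stack.

```python
# Back-of-envelope: per-stack bandwidth implied by the quoted pin speed.
# The 2048-bit interface width is an assumption (JEDEC HBM4 baseline),
# not a figure from the article.

PIN_SPEED_GBPS = 11      # per-pin data rate quoted in the article (Gbps)
INTERFACE_BITS = 2048    # assumed HBM4 interface width per stack (bits)

# bits per second across the whole interface, converted to gigabytes/s
bandwidth_gbs = PIN_SPEED_GBPS * INTERFACE_BITS / 8

print(f"~{bandwidth_gbs / 1000:.1f} TB/s per stack")  # → ~2.8 TB/s per stack
```

Under those assumptions, a single stack would push roughly 2.8 TB/s, which is why the per-pin speed was such a sticking point in Nvidia's qualification.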
First Deliveries Sooner Than You Think
The rollout schedule is remarkably aggressive and promises a hot summer for the tech industry. The first batches of HBM4 memory are expected to reach Nvidia as early as next month, and finished servers built on Vera Rubin chips should begin shipping to the world's largest data centres in August 2026. The full scope of the collaboration is set to be officially presented during the GTC conference, where Samsung and Nvidia will jointly showcase how the added memory bandwidth will influence the development of the most advanced AI models.
Will Samsung's success completely change the situation in the memory market? Probably not. But it is a glimmer of hope for the entire industry and a potential answer to the supply constraints around memory for new AI chips.
Katarzyna Petru