Nvidia announces Blackwell Ultra and Vera Rubin AI chips

Nvidia’s roadmap reveals a strategic acceleration in its AI chip development, with two major announcements that could reshape the competitive landscape for AI hardware. The introduction of Blackwell Ultra chips later this year and the Vera Rubin architecture planned for 2026 signal the company’s determination to maintain its dominant position in the AI chip market, which has driven a sixfold increase in its sales since ChatGPT’s release in late 2022.
The big picture: Nvidia is doubling down on its AI chip supremacy with a rapid-fire development timeline stretching through 2028, targeting cloud providers who have become its most lucrative customers.
- The company unveiled Blackwell Ultra chips shipping in the second half of 2025, offering enhanced token generation capabilities crucial for premium AI services.
- CEO Jensen Huang also revealed the Vera Rubin architecture expected in 2026, promising twice the speed of current chips and supporting up to 288 gigabytes of fast memory.
Key details: The Blackwell Ultra chip family is positioned as a premium offering that could generate substantially more revenue than previous generations.
- Available in two configurations—GB300 (with Arm CPU) and B300 (GPU only)—these chips can produce more tokens per second, a critical metric for generative AI performance.
- Nvidia claims these chips have the potential to generate up to 50 times the revenue of the previous Hopper generation.
Behind the numbers: Vera Rubin represents a significant technological leap forward with a two-part system consisting of a custom CPU called Vera and a GPU design called Rubin.
- The Vera CPU will be twice as fast as the Grace CPU used in current Grace Blackwell chips, addressing computational bottlenecks.
- During inference operations—when AI models respond to queries—the system can manage 50 petaflops, more than double the capability of current Blackwell chips.
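As a back-of-the-envelope check on the "more than double" claim, the 50-petaflop figure can be compared against Blackwell's inference throughput. The 20-petaflop Blackwell baseline below is an assumption (a commonly cited figure for current Blackwell chips) and is not stated in this article:

```python
# Rough sanity check of the "more than double" inference claim.
# The 20-petaflop Blackwell baseline is an assumption, not a figure
# from the article above.
vera_rubin_pflops = 50   # inference petaflops, per the announcement
blackwell_pflops = 20    # assumed current Blackwell inference petaflops

speedup = vera_rubin_pflops / blackwell_pflops
print(f"Vera Rubin vs. Blackwell: {speedup:.1f}x")
```

Under that assumption, the ratio works out to 2.5x, consistent with "more than double."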
Where we go from here: Nvidia has mapped out an aggressive development timeline extending several years into the future.
- Following the 2026 Vera Rubin release, the company plans to launch Rubin Next in the second half of 2027, which will combine four dies to double Rubin’s speed.
- The roadmap extends to 2028 with a next-generation architecture codenamed Feynman, maintaining Nvidia’s strategy of regular performance improvements to justify continued heavy investment from cloud providers.
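Taken at face value, the roadmap's stated doublings compound generation over generation. A minimal sketch, treating each "twice the speed" claim as a clean 2x multiplier (an idealization, since real-world gains rarely compound this neatly):

```python
# Idealized compounding of the roadmap's stated speed multipliers,
# relative to today's Blackwell chips. Each "2x" is taken at face value.
roadmap = [
    ("Vera Rubin (2026)", 2.0),   # "twice the speed of current chips"
    ("Rubin Next (2027)", 2.0),   # four dies, "double Rubin's speed"
]

relative_speed = 1.0
for name, multiplier in roadmap:
    relative_speed *= multiplier
    print(f"{name}: {relative_speed:.0f}x current Blackwell")
```

By this accounting, Rubin Next would land at roughly 4x current Blackwell performance, before whatever the 2028 Feynman architecture adds.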