Nvidia continues to dominate the AI accelerator market, but AMD has been making moves that are drawing serious industry attention. Both companies are reportedly making last-minute spec changes to their upcoming AI chips, particularly around power consumption, which keeps climbing to keep pace with performance targets.
According to SemiAnalysis, AMD’s upcoming Instinct MI450X GPU was initially expected to ship with a TGP (Total Graphics Power) of 2,300W, but that figure may now rise to 2,500W. Nvidia, for its part, has reportedly raised the power target for its Vera Rubin–based VR200 GPU from 1,800W to 2,300W.
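For a rough sense of scale, here is a quick back-of-the-envelope comparison of the reported power revisions; these are rumored pre-release numbers, not confirmed shipping specs:

```python
# Reported TGP revisions (watts); rumored pre-release figures, not confirmed specs.
power_targets = {
    "AMD Instinct MI450X": (2300, 2500),        # initial -> revised
    "Nvidia VR200 (Vera Rubin)": (1800, 2300),  # initial -> revised
}

for gpu, (initial, revised) in power_targets.items():
    pct = (revised - initial) / initial * 100
    print(f"{gpu}: {initial}W -> {revised}W (+{pct:.0f}%)")
```

If those numbers hold, AMD’s revision works out to roughly a 9% increase, while Nvidia’s is closer to 28%.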
The timing suggests that Nvidia’s shift may be a direct response to AMD’s aggressive specs. In the AI hardware race, a higher power budget usually signals that compute performance and memory bandwidth have been pushed beyond the original design targets.
Memory Bandwidth Jumps Ahead
For Nvidia’s Vera Rubin GPUs, reports indicate a bandwidth increase from 13 TB/s to 20 TB/s per GPU. This next-gen architecture is expected in 2026, with an “Ultra” variant planned later. Both will leverage HBM4 memory, which is still in development.
Meanwhile, AMD is preparing to roll out its Instinct MI400 series in 2026, also using HBM4. Rumors suggest each GPU will support up to 432 GB of memory, a significant jump over Nvidia’s roughly 290 GB, though both are expected to deliver similar bandwidth, close to 20 TB/s.
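Putting the rumored memory figures side by side (all pre-release estimates, not confirmed specifications) makes the gap easier to see:

```python
# Rumored per-GPU HBM4 figures; pre-release estimates, not confirmed specs.
specs = {
    "AMD Instinct MI400":      {"capacity_gb": 432, "bandwidth_tbps": 20},
    "Nvidia Vera Rubin VR200": {"capacity_gb": 290, "bandwidth_tbps": 20},
}

amd = specs["AMD Instinct MI400"]
nvidia = specs["Nvidia Vera Rubin VR200"]

capacity_gap = (amd["capacity_gb"] - nvidia["capacity_gb"]) / nvidia["capacity_gb"] * 100
print(f"AMD capacity advantage: ~{capacity_gap:.0f}%")   # roughly 49% more memory per GPU
print(f"Bandwidth: ~{amd['bandwidth_tbps']} TB/s each")  # effectively at parity
```

If those figures hold, AMD would ship roughly 50% more memory per GPU at comparable bandwidth.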
AMD has already chipped away at Intel’s CPU market share and narrowed the gap with Nvidia in gaming GPUs through the Radeon RX 9000 series. Now, it looks set on carving out a larger piece of the AI acceleration market, directly challenging Nvidia’s long-standing dominance.