AMD's flagship AI accelerator line will get a high-bandwidth memory boost when the MI325X arrives later this year.
AMD's focus seems to be extending its memory advantage over Nvidia. At launch, the 192GB MI300X boasted more than twice the HBM3 of the H100 and a 51GB edge over the upcoming H200. The MI325X boosts the accelerator's capacity to 288GB – more than twice that of the H200 and 50 percent more than Nvidia's Blackwell chips revealed at GTC this spring.
Except, in a prebriefing ahead of Computex, AMD execs boasted that its MI325X systems could support 1 trillion parameter models. So what gives? Well, AMD is still focusing on FP16, which requires twice as much memory per parameter as FP8. Despite hardware support for FP8 being a major selling point of the MI300X when it launched, AMD has generally focused on half-precision performance in its benchmarks. And amid a spat with Nvidia over the veracity of AMD's benchmarks late last year, we learned why.
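A quick back-of-envelope check of that 1 trillion parameter claim (a rough sketch only: it counts model weights alone, ignoring KV cache and activations, assumes an eight-accelerator system, and treats the quoted 288GB per chip as GiB):

```python
# Rough weight-memory math for large models at different precisions.
GIB = 1024**3

def model_weight_gib(params: float, bytes_per_param: int) -> float:
    """Weight footprint in GiB for a model at a given precision."""
    return params * bytes_per_param / GIB

ONE_TRILLION = 1e12
fp16_gib = model_weight_gib(ONE_TRILLION, 2)  # FP16: 2 bytes per parameter
fp8_gib = model_weight_gib(ONE_TRILLION, 1)   # FP8: 1 byte per parameter

node_capacity_gib = 8 * 288  # eight MI325X accelerators at 288GB each

print(f"FP16 1T-param weights: {fp16_gib:.0f} GiB")  # ~1863 GiB
print(f"FP8  1T-param weights: {fp8_gib:.0f} GiB")   # ~931 GiB
print(f"8x MI325X capacity:    {node_capacity_gib} GiB")  # 2304 GiB
```

Under those assumptions, an eight-way MI325X box can just about hold a trillion-parameter model's weights even at FP16, which is consistent with AMD's framing; at FP8 the same model would fit in half the space.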
The one caveat is that Gaudi3 doesn't support sparsity, while Nvidia's and AMD's chips do. However, there's a reason AMD and Intel have both focused on dense floating point performance: sparsity just isn't that common in practice. But while AMD would prefer to draw comparisons to Nvidia's Hopper-gen parts, those aren't the ones it should be worried about. Of more concern are the Blackwell parts, which supposedly will start trickling onto the market later this year.