Specialist cloud operators skilled at running hot, power-hungry GPUs and other AI infrastructure are emerging, and while some of these players, like CoreWeave, Lambda, or Voltage Park, have built their clusters using tens of thousands of Nvidia GPUs, others are turning to AMD instead.
By the end of 2024, TensorWave aims to have 20,000 MI300X accelerators deployed across two facilities, and plans to bring additional liquid-cooled systems online next year. In addition to higher floating point performance, the MI300X also boasts a larger 192GB of HBM3 memory capable of delivering 5.3TB/s of bandwidth, versus the 80GB and 3.35TB/s claimed for the H100.
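The memory comparison above can be made concrete with some quick arithmetic; this minimal sketch uses only the vendor-claimed figures cited in the article, not independent benchmarks:

```python
# Back-of-the-envelope comparison of the memory specs cited above.
# Figures are the vendor-claimed numbers from the article.
mi300x = {"hbm_gb": 192, "bw_tbs": 5.3}   # AMD MI300X (HBM3)
h100   = {"hbm_gb": 80,  "bw_tbs": 3.35}  # Nvidia H100

capacity_ratio = mi300x["hbm_gb"] / h100["hbm_gb"]
bandwidth_ratio = mi300x["bw_tbs"] / h100["bw_tbs"]

print(f"Capacity advantage:  {capacity_ratio:.1f}x")   # 2.4x
print(f"Bandwidth advantage: {bandwidth_ratio:.2f}x")  # 1.58x
```

The larger capacity matters for inference in particular: a model that needs more than 80GB must be split across multiple H100s, while a single MI300X can hold it whole.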
This cooling tech has become a hot commodity among datacenter operators looking to support denser GPU clusters, and has led to some supply chain challenges, TensorWave COO Piotr Tomasik said.