What we learned about the future of AI chips by keeping track of NVIDIA

NVIDIA breaks MLPerf benchmark records

Let's start with the news. Yesterday, NVIDIA announced that it has once again broken MLPerf benchmark records. MLPerf is the de facto standard in AI workload benchmarks, and it keeps expanding as more AI workloads emerge. With Generative AI taking off over the last year, MLPerf has added GenAI workloads to its arsenal. As NVIDIA notes, the company was the only one to run all MLPerf tests, with its H100 Tensor Core GPUs demonstrating the fastest performance and the greatest scaling in each of the nine benchmarks. In MLPerf HPC, a separate benchmark for AI-assisted simulations on supercomputers, H100 GPUs delivered up to twice the performance of previous-generation GPUs.

Jensen's Law, i.e. the rate of performance improvement NVIDIA's GPUs achieve, seems to be still in effect. But perhaps the real question is who should care, and why.

Jensen's Law

That kind of scale is not something anyone but the hyperscalers could normally handle, even if they wanted to. Competitors may catch up. Either way, the long view here is that scaling up the way NVIDIA does is only part of the story.
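Claims about "the greatest scaling" are usually quantified as scaling efficiency: how close throughput on N accelerators comes to N times the throughput of a smaller run. A minimal sketch of that calculation follows; the throughput figures are made up for illustration and are not NVIDIA's or MLPerf's actual numbers:

```python
# Sketch: computing scaling efficiency from benchmark throughput numbers.
# All figures below are hypothetical, for illustration only.

def scaling_efficiency(throughput: dict[int, float]) -> dict[int, float]:
    """Map GPU count -> efficiency relative to linear scaling from the smallest run."""
    base_n = min(throughput)
    base_tp = throughput[base_n]
    return {
        n: (tp / base_tp) / (n / base_n)  # actual speedup / ideal speedup
        for n, tp in sorted(throughput.items())
    }

# Hypothetical samples/sec measured at different GPU counts
measured = {8: 1000.0, 64: 7600.0, 512: 55000.0}
for n, eff in scaling_efficiency(measured).items():
    print(f"{n:>4} GPUs: {eff:.1%} of linear scaling")
```

The drop-off from 100% as GPU counts grow is what "greatest scaling" results try to minimize: communication and synchronization overhead eat into the ideal linear speedup.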
NVIDIA also leverages a set of business tactics with regard to supply chain management, sales strategies, and bundling which few others are able to replicate, as noted by analyst Dylan Patel. But that does not mean that the competition is idling either. As far as supercomputers and scaling up go, NVIDIA's CUDA stack has performed best so far, but competitors aren't sitting on their hands. Some take issue with NVIDIA's tactics, which is something we don't have an opinion on. What we can say is that even though
Source: hackernoon