
Bechtolsheim Outlines Scaling XPU Performance By 100X By 2028
Effects are multiplicative, not additive, when it comes to increasing compute engine performance. …
When you are designing applications that span an entire datacenter, that are composed of hundreds to thousands of microservices running on countless individual servers, and that must be called within a matter of microseconds to preserve the illusion of a monolithic application, building fully connected Clos networks with high bisection bandwidth is a must. …
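To make the bisection-bandwidth point concrete, here is a minimal sketch (not from the article, with purely hypothetical port counts and speeds) of how a two-tier leaf-spine Clos fabric is typically sized:

```python
# Illustrative sketch: sizing a two-tier leaf-spine Clos fabric.
# All port counts and link speeds are hypothetical examples.

def clos_bisection_gbps(num_leaves: int, uplinks_per_leaf: int,
                        uplink_gbps: int) -> float:
    """Bisection bandwidth of a two-tier Clos fabric: cut the fabric
    in half at the spine layer; traffic crossing the cut is bounded
    by half of the total leaf-to-spine uplink capacity."""
    total_uplink = num_leaves * uplinks_per_leaf * uplink_gbps
    return total_uplink / 2

def oversubscription(downlinks_per_leaf: int, downlink_gbps: int,
                     uplinks_per_leaf: int, uplink_gbps: int) -> float:
    """Ratio of server-facing to fabric-facing capacity per leaf
    switch; a ratio of 1.0 means the fabric is non-blocking."""
    return (downlinks_per_leaf * downlink_gbps) / (uplinks_per_leaf * uplink_gbps)

# Example: 32 leaf switches, each with 8 x 400 Gb/s spine uplinks
# and 32 x 100 Gb/s server-facing ports.
print(clos_bisection_gbps(32, 8, 400))    # 51200.0 (Gb/s)
print(oversubscription(32, 100, 8, 400))  # 1.0 (non-blocking)
```

With equal downlink and uplink capacity per leaf the fabric is non-blocking, which is what "high bisection bandwidth" buys for microservice traffic that must cross the fabric in microseconds.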
Nvidia hit a rare patch of bad news earlier this month when reports started circulating claiming that the company’s much-anticipated “Blackwell” GPU accelerators could be delayed by as much as three months due to design flaws. …
Current AI training and some AI inference clusters have two networks. …
Like many other suppliers of hardware and systems software, Cisco Systems is trying to figure out how to make money on the AI revolution. …
SPONSORED POST: The rapid breakout of Artificial Intelligence is driving business opportunities across verticals – but there’s one sector for which AI presents some formidable challenges, and that’s the datacenter industry itself. …
UPDATED: Nvidia is a member of the Ultra Ethernet Consortium.
The jury is still out on a lot of things about this exploding AI market and the re-convergence that it will have with traditional HPC systems for running simulations and models. …
A rising tide may lift all boats, and that is a good thing these days with any company that has an AI oar in the water. …
Richard Solomon has heard the rumblings over the years. As vice president of PCI-SIG, the organization that controls the development of the PCI-Express specification, he has listened to questions about how long it takes the group to bring the latest spec to the industry. …
In those heady months following OpenAI’s launch of ChatGPT in November 2022, much of the IT industry’s focus was on huge and expensive cloud infrastructures running on powerful GPU clusters to train the large language models that underpin the chatbots and other generative AI workloads. …
All Content Copyright The Next Platform