Comments on: Top500 Supers: Nvidia Utterly Dominates Those Shiny New Machines
https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Xpes https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224525 Sun, 19 May 2024 02:22:36 +0000

You mean the two high-profile, highly political machines that AMD gave away nearly for free (i.e., at no profit) to get a foot in the door of this market?
No need to be a team-green shareholder to see that Nvidia's datacenter revenue will be $100 billion this year versus $5 billion for AMD. So the result in terms of market share is not surprising.
We all hope for more competition, and for AMD, Intel, Cerebras, or your favorite shiny AI accelerator startup to rise in this single-color market. Innovation often comes from diversity and competition. It would be very boring for Timothy if he had to report on the same chocolate dessert flavor in every article.
PS: Don't get me wrong, we all love chocolate cookies, but from time to time an apple pie or a strawberry ice cream is delicious too.

By: Hubert https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224490 Fri, 17 May 2024 10:49:08 +0000

In reply to Paul Berry.

These are great points. As a result, I went and read the following: https://www.hpcg-benchmark.org/custom/index.html%3Flid=158&slid=281.html which suggests that HPL and HPCG may be viewed as optimistic-pessimistic “bookends” on machine performance. More interestingly perhaps, for real-world (non-benchmark) multigrid conjugate-gradient iterative solvers (e.g., for FD or FEM), one may be able to apply “latency hiding” strategies that raise actual performance (graph-coloring oriented? GraphBLAS?). HPCG apparently avoids those deliberately, in order to more fully depress HPC enthusiasts the world over, by one full log cycle (eh-he-eh!).
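
To make the flavor of that concrete, here is a minimal un-preconditioned CG sketch of my own (a toy 1D Poisson matrix, not the HPCG code itself): every iteration is one sparse mat-vec plus a couple of dot products and AXPYs, so it streams memory and does global reductions rather than dense math, which is exactly why the latency-hiding tricks above matter.

# Toy conjugate-gradient loop on a 1D Poisson matrix, illustrating why
# HPCG-style solvers are bandwidth/latency bound rather than compute bound.
import numpy as np
import scipy.sparse as sp

n = 20_000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x            # initial residual
p = r.copy()
rs_old = r @ r

for it in range(500):
    Ap = A @ p                  # sparse mat-vec: streams the matrix from memory
    alpha = rs_old / (p @ Ap)   # dot product: a global reduction at scale
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-8:
        break
    p = r + (rs_new / rs_old) * p
    rs_old = rs_new

print(f"{it + 1} iterations, residual norm {np.sqrt(rs_new):.2e}")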

By: Timothy Prickett Morgan https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224469 Fri, 17 May 2024 02:30:18 +0000

In reply to Barry.

Dominates the new machines on the list. No more, no less.

By: Barry https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224455 Thu, 16 May 2024 21:49:01 +0000

Utterly dominates HPC? Guess that's why LLNL went all-AMD for yet another fastest supercomputer. Utterly dominates, though, wow. Somebody has some team-green shares, huh…

By: Slim Albert https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224451 Thu, 16 May 2024 18:54:42 +0000

Nice categorization of the 49 new supers! I was slightly surprised that GH200 is ahead of MI300A in terms of “completeness of offerings” at this stage, but digging through the paleontological TNP archives, it seems that Grace-Hopper was announced in April 2021 and MI300A a bit more than a year later, in June 2022 … so it actually makes sense that the new-tech Alps and Venado are currently more ready than the new-tech El Capitan (Grace-Hopper: https://www.nextplatform.com/2021/04/12/nvidia-enters-the-arms-race-with-homegrown-grace-cpus/ , MI300A: https://www.nextplatform.com/2022/06/14/chip-roadmaps-unfold-crisscrossing-and-interconnecting-at-amd/ ).

I'm looking forward to the face-off between the grown-up JEDI (which should blossom into a thunderous Greco-Roman wild-card $DEITY) and the all-dressed and tuned-up El Capitan. Their computational efficiencies should somewhat converge by the next Top500: down from 88% for JEDI (to something more like Alps' 76%, as more modules are combined), and up from 61% for El Capitan (to something above Frontier's 70%, owing to a more tightly integrated architecture, and to tuning). Seeing how this will be a contest between Jedi grasshoppers and Epyc Zen Instincts, one can only expect the related computational kung-fu to be truly mind-blowing!
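
(For anyone keeping score at home, the efficiency figures above are just HPL Rmax divided by theoretical Rpeak; a trivial sketch with made-up numbers, not actual list entries:)

def hpl_efficiency(rmax_pflops, rpeak_pflops):
    # HPL computational efficiency as a percentage: Rmax / Rpeak.
    return 100.0 * rmax_pflops / rpeak_pflops

# Hypothetical (Rmax, Rpeak) pairs in petaflops, purely illustrative.
for name, rmax, rpeak in [("machine A", 4.5, 5.1), ("machine B", 1200.0, 1700.0)]:
    print(f"{name}: {hpl_efficiency(rmax, rpeak):.1f}% of peak")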

By: Paul Berry https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224443 Thu, 16 May 2024 14:02:08 +0000

In reply to Timothy Prickett Morgan.

Supercomputers have always been so much better at the easy stuff than at the hard stuff, and the easy stuff isn't really very interesting. This was true even back in the Cray vector days, when the ratio of bandwidth to compute was a thousand times better.
The good news is that these things ARE getting better at the hard stuff, just not nearly as fast as the HPL flops number would have you believe. HBM really is a useful improvement over DDR RAM. Hundred-gigabit networks really are better than gigabit networks, even on the hard problems. It's just that hard problems run at teraflops, not exaflops, even on the best machines. But that beats the megaflops they used to get.
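
A rough way to see the gap is a back-of-envelope roofline: attainable flops = min(compute peak, memory bandwidth x arithmetic intensity). The peak and bandwidth numbers below are ballpark assumptions for a generic GPU node, not any particular product:

PEAK_FP64_TFLOPS = 50.0   # assumed FP64 peak of one GPU node, TF/s
MEM_BW_TBPS = 3.0         # assumed HBM bandwidth, TB/s

def attainable_tflops(flops_per_byte):
    # Roofline model: min(compute peak, bandwidth * arithmetic intensity).
    return min(PEAK_FP64_TFLOPS, MEM_BW_TBPS * flops_per_byte)

# Dense GEMM (HPL-like) reuses each byte many times; sparse mat-vec
# (HPCG-like) does only a couple of flops per byte moved.
for label, ai in [("dense GEMM, HPL-like", 100.0), ("sparse SpMV, HPCG-like", 0.25)]:
    print(f"{label}: ~{attainable_tflops(ai):.2f} TF/s attainable")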

By: Timothy Prickett Morgan https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224442 Fri, 17 May 2024 12:25:48 +0000

In reply to Eric Olson.

Not sure. But good points, and good questions. I find the HPCG test terrifying because it shows just how crappy these things really are at the hard stuff.

By: Eric Olson https://www.nextplatform.com/2024/05/15/top500-supers-nvidia-utterly-dominates-those-shiny-new-machines/#comment-224423 Thu, 16 May 2024 04:22:43 +0000

It's interesting how efficient the Grace Hopper supercomputers appear.

The way I see it, HPL represents a stress test to verify that the floating point, networking, and power supplies are fit for purpose, while HPCG is a better reflection of scientific workloads. With that in mind, HPL per watt, as in the Green500, is interesting, but science per watt is better reflected by HPCG per watt.

Since computers draw different amounts of power depending on whether they’re running HPL or HPCG, is there a way to infer HPCG per watt from the published data?
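
The crude approach would be to just divide the published HPCG result by the power figure on the list, accepting that the power was measured during the HPL run and so only approximates the HPCG-time draw. A sketch with placeholder numbers:

def approx_hpcg_per_watt(hpcg_pflops, reported_power_mw):
    # HPCG gigaflops per watt, using the list's (HPL-time) power figure,
    # so this only approximates true HPCG-time energy efficiency.
    return (hpcg_pflops * 1.0e6) / (reported_power_mw * 1.0e6)

# Placeholder values, not taken from the actual list.
print(f"~{approx_hpcg_per_watt(14.0, 22.7):.2f} HPCG GF/W")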
