Comments on: Why AMD “Genoa” Epyc Server CPUs Take The Heavyweight Title
https://www.nextplatform.com/2022/11/10/amd-genoa-epyc-server-cpus-take-the-heavyweight-title/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Timothy Prickett Morgan
https://www.nextplatform.com/2022/11/10/amd-genoa-epyc-server-cpus-take-the-heavyweight-title/#comment-211515
Sun, 23 Jul 2023 01:32:34 +0000
In reply to Jigs Gaton.

Rackable Systems, which was eaten by SGI, which was eaten by HPE, was doing this like 20 years ago: vertical cooling with one large fan. The physics of this is sound, as you point out.

By: Jigs Gaton
https://www.nextplatform.com/2022/11/10/amd-genoa-epyc-server-cpus-take-the-heavyweight-title/#comment-211474
Sat, 22 Jul 2023 03:22:17 +0000

I wonder if the real problem is the physical infrastructure of data centers rather than the cooling on the chips themselves. I saw a presentation on this: https://tinygrad.org/#:~:text=kernels.%20Merge%20them!-,The%20tinybox,-738%20FP16%20TFLOPS and the idea with that one box is to spread the components out in vertical space, then cool them with a single large, slow-moving fan. That’s for one chip in a box you have to fit into your garage, like a battery wall or similar structure. Applied to a whole data center, it would look more like huge vertical fins in rows, perhaps arranged so that natural atmospheric winds and huge slow-rotating fans keep the fins cool. Want something even wilder? Take wind power farms and redesign them into dual-purpose, self-perpetuating compute power stations located along coasts and high-wind plains. Ha!
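The big-slow-fan argument lends itself to a quick back-of-envelope check. The sketch below is not from the article or the tinybox spec; the 5 kW heat load, 15 K allowed air temperature rise, and the two fan diameters are illustrative assumptions. It uses the standard sensible-heat relation Q = m_dot * cp * dT to estimate how much air a chassis must move and what face velocity that implies for a large versus a small fan.

```python
# Back-of-envelope airflow estimate for the "one big slow fan" argument.
# Assumptions (not from the article): 5 kW heat load, 15 K allowed air
# temperature rise, and two illustrative fan diameters (400 mm vs 40 mm).
import math

RHO_AIR = 1.2      # kg/m^3, air density near sea level
CP_AIR = 1005.0    # J/(kg*K), specific heat of air

def required_airflow_m3s(heat_w: float, delta_t_k: float) -> float:
    """Volume of air per second needed to carry away heat_w watts
    with a delta_t_k rise in air temperature (Q = m_dot * cp * dT)."""
    mass_flow = heat_w / (CP_AIR * delta_t_k)   # kg/s
    return mass_flow / RHO_AIR                  # m^3/s

def face_velocity_ms(airflow_m3s: float, fan_diameter_m: float) -> float:
    """Air velocity through a fan of the given diameter moving that flow."""
    area = math.pi * (fan_diameter_m / 2) ** 2
    return airflow_m3s / area

flow = required_airflow_m3s(heat_w=5000.0, delta_t_k=15.0)
print(f"required flow: {flow:.3f} m^3/s (~{flow * 2118.88:.0f} CFM)")

for dia_mm in (400, 40):  # one big fan vs. a typical skinny-server fan
    v = face_velocity_ms(flow, dia_mm / 1000.0)
    print(f"{dia_mm} mm fan needs ~{v:.1f} m/s face velocity")
```

Under those assumptions the chassis needs roughly 0.28 m^3/s (about 585 CFM): a 400 mm fan covers that at around 2 m/s of face velocity, while a single 40 mm fan would need an impossible ~220 m/s, which is why skinny chassis resort to banks of small high-RPM fans and pay for it in fan power and noise.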

By: Esben Barnkob
https://www.nextplatform.com/2022/11/10/amd-genoa-epyc-server-cpus-take-the-heavyweight-title/#comment-201158
Wed, 23 Nov 2022 13:28:03 +0000

>> Lepak says that single socket servers will generally only have one DIMM per channel – there isn’t really room for two on these skinny machines – and even a lot of two socket machines will stick with one DIMM per channel to stop from having to interleave the memory and dilute the bandwidth to capacity ratios.

It’s the other way around: single socket servers support 2DPC (24 DIMMs), while skinny dual socket servers only support 1DPC (24 DIMMs total). Gigabyte is the exception, providing 48 DIMMs in a 2P Genoa platform by using creative CPU placement.
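To make the “dilute the bandwidth to capacity ratios” point from the quoted passage concrete, here is a minimal sketch. Genoa’s 12 DDR5 channels per socket and 4800 MT/s at one DIMM per channel are its published specs; the 64 GB DIMM size and the assumed drop to 3600 MT/s at two DIMMs per channel are illustrative assumptions, not vendor-confirmed numbers.

```python
# Bandwidth-to-capacity ratio for 1DPC vs. 2DPC on a Genoa socket.
# Genoa spec: 12 DDR5 channels per socket, 4800 MT/s at 1DPC.
# Assumptions for illustration: 64 GB DIMMs and 3600 MT/s at 2DPC.
CHANNELS = 12
BYTES_PER_TRANSFER = 8  # 64-bit DDR5 data bus per channel

def socket_config(dimms_per_channel: int, speed_mts: float, dimm_gb: int):
    """Return (bandwidth GB/s, capacity GB, GB/s per GB) for one socket."""
    bandwidth_gbs = CHANNELS * speed_mts * BYTES_PER_TRANSFER / 1000.0
    capacity_gb = CHANNELS * dimms_per_channel * dimm_gb
    return bandwidth_gbs, capacity_gb, bandwidth_gbs / capacity_gb

for label, dpc, speed in (("1DPC", 1, 4800), ("2DPC", 2, 3600)):
    bw, cap, ratio = socket_config(dpc, speed, dimm_gb=64)
    print(f"{label}: {bw:.0f} GB/s, {cap} GB, {ratio:.2f} GB/s per GB")
```

Doubling the DIMM count doubles capacity, but if the channel has to clock down, the bandwidth per gigabyte drops by more than half (from roughly 0.60 to 0.23 GB/s per GB under these assumptions), which is the trade-off both the article and the comment are circling.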

By: Paul Berry
https://www.nextplatform.com/2022/11/10/amd-genoa-epyc-server-cpus-take-the-heavyweight-title/#comment-200787
Mon, 14 Nov 2022 21:50:10 +0000

When does it no longer make sense to pack more and more chiplets onto a single package? At what point does the exotic cooling technology no longer justify the single-package advantages? Should we just consider 4 (or more) socket servers again?
