Comments on: AMD Teases Details On Future MI300 Hybrid Compute Engines
https://www.nextplatform.com/2023/01/05/amd-teases-details-on-future-mi300-hybrid-compute-engines/

By: D.R. (Mon, 19 Jun 2023 21:15:12 +0000)
More like it's time for AMD to start developing their hardware using carbon composites instead of silicon... you know, like graphene, synthetic diamonds, and carbon nanotubes (they could also throw adaptive/programmable metamaterials into the mix).

By: Timothy Prickett Morgan (Mon, 09 Jan 2023 16:48:49 +0000)
In reply to Hubert.

Could not agree more. I can’t wait to see if more memory (500 GB at a slower speed) is more important than shared memory (both the CPU and the GPU sharing the same HBM).
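One rough way to frame that tradeoff is a toy streaming model, sketched below. The 500 GB figure comes from the comment; the bandwidth numbers and the 128 GB shared-HBM capacity are illustrative assumptions, not vendor specs.

```python
# Toy model: time for one full pass over a working set in each memory pool.
# All bandwidths and the HBM capacity are assumed for illustration only.
CAPACITY_BIG_GB, BW_BIG_GBS = 500, 500    # large, slower CPU-attached pool
CAPACITY_HBM_GB, BW_HBM_GBS = 128, 5000   # smaller, shared CPU+GPU HBM pool

def stream_seconds(working_set_gb, capacity_gb, bw_gbs):
    """Time to stream the working set once, or None if it doesn't fit."""
    if working_set_gb > capacity_gb:
        return None  # would need tiling or spilling to slower memory
    return working_set_gb / bw_gbs

for ws_gb in (100, 400):
    t_big = stream_seconds(ws_gb, CAPACITY_BIG_GB, BW_BIG_GBS)
    t_hbm = stream_seconds(ws_gb, CAPACITY_HBM_GB, BW_HBM_GBS)
    hbm_msg = f"{t_hbm:.3f} s" if t_hbm is not None else "doesn't fit"
    print(f"{ws_gb} GB working set: big pool {t_big:.2f} s, shared HBM {hbm_msg}")
```

Capacity wins when the problem doesn't fit in HBM; bandwidth wins when it does.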

By: Hubert (Mon, 09 Jan 2023 11:29:40 +0000)
In reply to Paul Berry.

I think it is intended for HPC-oriented workloads, replacing the (winning) combination of EPYC+MI250X found in Frontier with integrated MI300As (for El Capitan and others), with the added bonus of HBM for the CPU. Performance and efficiency should benefit, maybe to the level of 2 DP ExaFlop/s in a 30 MW (or better) envelope on HPL, as well as the top spot on HPCG. Watching how Grace+Hopper systems perform relative to MI300As (and vice versa, e.g., on the Top500) will be riveting, I think.
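A quick back-of-the-envelope check on that target, using only the numbers in the comment (the derived efficiency is an implication, not a published figure):

```python
# Implied power efficiency of a hypothetical 2 DP-ExaFlop/s HPL run
# in a 30 MW envelope (both numbers taken from the comment above).
hpl_flops = 2e18     # 2 DP ExaFlop/s sustained on HPL
power_watts = 30e6   # 30 MW system power envelope

gflops_per_watt = hpl_flops / power_watts / 1e9
print(f"Implied efficiency: {gflops_per_watt:.1f} GFlops/W")  # ~66.7 GFlops/W
```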

By: Paul Berry (Fri, 06 Jan 2023 17:46:43 +0000)
900 watts per part! Is it time for evaporative cooling?

By: Paul Berry (Fri, 06 Jan 2023 17:45:15 +0000)
They keep moving the pieces closer together; is that enough? Now the CPU and the GPU are on the same package and share a memory controller, so there are no memory copies to move data between them, though there will likely be cache invalidates. However, they are distinct dies, so you're still passing program control from processor to coprocessor. I would think it more efficient to take 64 GPU compute units and attach them to the Ryzen cores. Can they be made an execution unit of the CPU? Obviously that's a far more invasive change, but FPUs did it 30 years ago.

There's a constant struggle between adding more vector/matrix compute capability to the GPU and finding code segments with a long enough vector length to amortize the latency of handing the problem off to the coprocessor. CPUs keep getting better vector performance too, so it's always hard to figure out which codes actually benefit from using a GPU. The MI300 sure is a heck of a lot of on-paper flops, but what percentage of codes will make good use of them? How many will just want a fast CPU core?
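That handoff-latency point can be made concrete with a minimal break-even sketch. All of the constants below are illustrative assumptions, not MI300 measurements: offload only pays off once the per-element speedup outweighs the fixed cost of handing the problem to the coprocessor.

```python
# Toy break-even model: CPU execution vs. GPU offload for a vector of n elements.
# All constants are hypothetical placeholders, not measured MI300 figures.
T_OFFLOAD = 10e-6   # fixed handoff cost per offload (10 microseconds, assumed)
CPU_RATE = 1e9      # CPU throughput, elements/second (assumed)
GPU_RATE = 20e9     # GPU throughput, elements/second (assumed)

def cpu_time(n):
    return n / CPU_RATE

def gpu_time(n):
    return T_OFFLOAD + n / GPU_RATE

# Break-even vector length: solve T_OFFLOAD + n/GPU_RATE = n/CPU_RATE for n.
n_break_even = T_OFFLOAD / (1 / CPU_RATE - 1 / GPU_RATE)
print(f"Offload wins above ~{n_break_even:,.0f} elements")

for n in (1_000, 10_000, 100_000):
    winner = "GPU" if gpu_time(n) < cpu_time(n) else "CPU"
    print(f"n={n:>7,}: CPU {cpu_time(n)*1e6:7.1f} us, "
          f"GPU {gpu_time(n)*1e6:7.1f} us -> {winner}")
```

With these made-up numbers the crossover sits around ten thousand elements; anything shorter is better kept on the CPU core, which is exactly the tension the comment describes.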

By: HuMo (Fri, 06 Jan 2023 08:40:45 +0000)
This looks to me to be as close to a 100% perfect design as it gets.

By: Xpea (Fri, 06 Jan 2023 03:15:03 +0000)
AMD is late. Nobody wants dedicated inference accelerators anymore. Customers want a single AI accelerator that can do training and inference within the same APIs/libraries/frameworks. IMHO this A70 will see little market adoption unless it's dirt cheap.
