Comments on: Gelsinger: With Gaudi 3 and Xeon 6, AI Workloads Will Come Our Way
https://www.nextplatform.com/2024/04/10/gelsinger-with-gaudi-3-and-xeon-6-ai-workloads-will-come-our-way/

By: Michael Alan Bruzzone https://www.nextplatform.com/2024/04/10/gelsinger-with-gaudi-3-and-xeon-6-ai-workloads-will-come-our-way/#comment-223044 Thu, 11 Apr 2024 17:27:37 +0000

Gaudi 3 price?

Xeon Phi 7120 P/X = $4125
Xeon Phi 5110 D/P = $2725
Xeon Phi 3120 A/P = $1625

Average Weighted Price of the three, on 2,203,062 units of production = $2,779

Stampede TACC card sample = $400 (what a deal)
Shanghai Jiao Tong University sample = $400 (now export restricted?)

Gaudi system-on-substrate: if priced at $16,147, approximately the Nvidia x00 and AMD x00 gross per unit, the key component cost works out to $3,608; if priced at $11,516, on the Nvidia accelerator ‘net’ take, the component cost drops to $2,573. So Intel can sell them at roughly 4x component cost, a price point AMD and Nvidia won’t be able to reach IF Intel does its own manufacturing.
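As a quick back-of-the-envelope check on those figures (a sketch in Python; the prices and component costs are simply the ones quoted above, and the multiple is nothing more than their quotient):

```python
# Check the markup multiples implied by the figures quoted above.
# Prices and component costs come from the comment; the multiple is their ratio.

scenarios = {
    "gross per unit ($16,147 price)": (16_147, 3_608),
    "'net' take ($11,516 price)":     (11_516, 2_573),
}

for label, (price, component_cost) in scenarios.items():
    multiple = price / component_cost
    print(f"{label}: price / component cost = {multiple:.2f}x")

# Both scenarios land at roughly 4.5x, consistent with the ~4x markup cited above.
```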

Mike Bruzzone, Camp Marketing

By: Calamity Jim https://www.nextplatform.com/2024/04/10/gelsinger-with-gaudi-3-and-xeon-6-ai-workloads-will-come-our-way/#comment-223027 Thu, 11 Apr 2024 10:44:22 +0000

Gelsinger’s narrative focus on an “Open Ecosystem” is quite interesting IMHO. If Altera had been kept in-house, maybe they could have developed an open-bitstream format (say OBF, or UBF) with which to upload gateware into FPGAs from arbitrary vendors that adopt it as a standard (something to possibly suggest to Sandra Rivera now) … and I can’t wait to see how the Xeon 6 Granite Rapids performs in the ring (preliminary benchmarks on “suplex” would be great!).

One thing that may have been missing from the Intel Vision event in Arizona, though, and that Gelsinger may want to clarify at the May 14-15 EMEA event in London, as hinted at by Tim on Tuesday (“provided that customers trust there will be enough architectural similarities between Gaudi 3 and […] Falcon Shores”), is how easy (or hard) one may expect the transitions from Gaudi 3 (AI/ML) to Falcon Shores, and from Ponte Vecchio (HPC) to Falcon Shores, to be. Binary compatibility would be great, but is probably too much to hope for. I guess the focus on higher-level frameworks (PyTorch, Hugging Face) is currently the proposed approach for dealing with this in the AI/ML space(?).
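For what it’s worth, that framework-level portability story usually comes down to keeping model code device-agnostic, roughly along these lines (a sketch, not Intel’s official migration path; the “hpu” device name assumes Habana’s PyTorch bridge, habana_frameworks.torch, is installed):

```python
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    """Pick whichever accelerator backend this PyTorch install can see."""
    try:
        # Assumption: Habana's PyTorch bridge registers an "hpu" device for Gaudi parts.
        import habana_frameworks.torch  # noqa: F401
        return torch.device("hpu")
    except ImportError:
        pass
    if torch.cuda.is_available():        # covers CUDA and ROCm builds alike
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = nn.Linear(1024, 1024).to(device)  # the model code itself stays device-agnostic
x = torch.randn(8, 1024, device=device)
print(device, model(x).shape)
```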

Also, it’s amazing that training AI models (e.g. the 1.8-trillion-parameter model cited in the article) requires as much power as running Frontier, or Sierra (#1 and #10 on the Top500)! Gotta hope that the results are worth the expense …
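A rough sanity check on that comparison (a sketch; the accelerator count, per-device power, and overhead factor are illustrative assumptions, not figures from the article, while the Frontier and Sierra numbers are the roughly 22.7 MW and 7.4 MW reported on the Top500 list):

```python
# Illustrative training-cluster assumptions (not from the article): tens of
# thousands of accelerators at several hundred watts each, plus ~50% overhead
# for hosts, networking, and cooling.
accelerators = 25_000        # assumed device count
watts_per_device = 700       # assumed board power per accelerator
overhead_factor = 1.5        # assumed facility overhead

cluster_mw = accelerators * watts_per_device * overhead_factor / 1e6

# Top500-reported power draw for the two systems mentioned above.
frontier_mw = 22.7
sierra_mw = 7.4

print(f"assumed training cluster: ~{cluster_mw:.1f} MW")
print(f"Frontier: ~{frontier_mw} MW, Sierra: ~{sierra_mw} MW")
```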

By: Eric Olson https://www.nextplatform.com/2024/04/10/gelsinger-with-gaudi-3-and-xeon-6-ai-workloads-will-come-our-way/#comment-223015 Thu, 11 Apr 2024 06:52:50 +0000

According to your article, Pat said “Industry is quickly moving away from proprietary [Nvidia] CUDA models.”

From what I can tell, CUDA 12.1 is the default backend for PyTorch, with ROCm 5.7, CUDA 11.8, and CPU as the other options. Since PyTorch is so commonly used with CUDA underneath, is the main point that PyTorch itself is nonproprietary?
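For anyone curious which backend their own PyTorch build was compiled against, a minimal sketch (these version attributes are standard PyTorch; the exact strings printed will vary by install):

```python
import torch

# torch.version.cuda is a string like "12.1" on CUDA builds and None otherwise;
# torch.version.hip is set on ROCm builds; CPU-only wheels report neither.
print("PyTorch:", torch.__version__)
print("CUDA toolkit compiled against:", torch.version.cuda)
print("ROCm/HIP compiled against:", getattr(torch.version, "hip", None))
print("Accelerator usable right now:", torch.cuda.is_available())
```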
