Comments on: What Happens When LLMs Design AI Accelerators?
https://www.nextplatform.com/2023/09/25/what-happens-when-llms-design-ai-accelerators/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: HuMo https://www.nextplatform.com/2023/09/25/what-happens-when-llms-design-ai-accelerators/#comment-214411 Sun, 01 Oct 2023 01:18:51 +0000
In reply to Hubert.

Point taken! But do think of the astonishing potential for savings in healthcare costs if this remarkably capable AI tech could enable your non-expert nosy neighbors to successfully prompt-engineer an only slightly knobby 3-D printed neurosurgery of intricate nature for your invasive in-laws … an opportunity not to be missed! A most tantalizing prospect! 8^b

By: Hubert https://www.nextplatform.com/2023/09/25/what-happens-when-llms-design-ai-accelerators/#comment-214152 Tue, 26 Sep 2023 17:02:54 +0000

Thanks for bringing this Georgia Tech paper to the TNP readership! I’m not 100% sure why one would want non-expert folks, not well-versed in hardware, to design AI accelerators using imprecise human languages parsed stochastically by LLMs. Would we want this in other demanding fields such as dentistry, neurosurgery, or rocket science? Or would we prefer highly trained experts to be in charge there, assisted as needed by the most precise and accurate tech?

I find it unfortunate that the GTech authors chose to refer to LLMs with locutions more typical of sales pitches than of scientific and engineering investigative discourse. This gives the reader the impression that the study was carried out in a biased, naive, or uncritical manner, overshadowing the potential seriousness and adequacy of the methods, and diluting the potential impact of the results. In this reviewer’s opinion, expressions such as “remarkable capabilities”, “intricate nature”, “amazing capacity”, “tantalizing prospect”, “astonishing potential”, and others (e.g. “vital insights”) should be appropriately sedated to better match the target audience of intelligent professionals.

The paper should also clearly state the conclusion that follows from its investigation, namely that LLMs cannot currently be used by non-experts to design AI accelerators, but that, in the future, they may possibly be used by experts to assist in various aspects of accelerator design. Expertise is needed to “expertly” split the design into consistent functional subsystems that an LLM may then perform additional work on. Expertise is also needed to prompt-engineer the LLM so it can perform its targeted support role. Expertise is further needed to develop and provide the HLS examples required for model training. At present, the authors’ proposed pipeline for LLM-assisted EDA effectively inverts the assistance relationship by requiring experts to assist the LLM, in the eventual hope that LLMs would later assist non-experts in doing a job that is likely not suited to them.
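To make the point concrete, the expert-in-the-loop dependency described above can be sketched roughly as follows. This is a hypothetical illustration only, not the paper's actual pipeline: every function name (`split_into_subsystems`, `build_subsystem_prompt`) and every input string is invented for the sketch.

```python
# Illustrative sketch of the expert-dependent steps described above.
# All names and inputs are hypothetical; this is not the GPT4AIGChip
# pipeline itself, just the shape of the dependency being criticized.

def split_into_subsystems(design_spec: str) -> list[str]:
    """Step 1 (expert): partition the accelerator design into
    consistent functional subsystems. Here, trivially one per line."""
    return [s.strip() for s in design_spec.splitlines() if s.strip()]

def build_subsystem_prompt(subsystem: str, hls_examples: list[str]) -> str:
    """Steps 2-3 (expert): prompt-engineer the request and supply
    curated HLS examples for the model to imitate."""
    examples = "\n".join(f"Example:\n{e}" for e in hls_examples)
    return f"{examples}\nGenerate HLS code for this subsystem:\n{subsystem}"

# Every input below comes from a human expert, which is the point:
# the expert assists the LLM, not the other way around.
spec = "systolic MAC array\non-chip weight buffer"
expert_hls = ["void mac(/* expert-written HLS example */) {}"]
prompts = [build_subsystem_prompt(s, expert_hls)
           for s in split_into_subsystems(spec)]
```

Only after all three expert contributions are in place does the LLM have anything useful to generate from, which is what makes the "non-experts design accelerators" framing hard to sustain.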

By: Nicole Hemsoth Prickett https://www.nextplatform.com/2023/09/25/what-happens-when-llms-design-ai-accelerators/#comment-214144 Tue, 26 Sep 2023 13:36:25 +0000
In reply to HuMo.

I could not force myself to type GPT4AIGChip. I tried.

By: HuMo https://www.nextplatform.com/2023/09/25/what-happens-when-llms-design-ai-accelerators/#comment-214115 Mon, 25 Sep 2023 23:39:42 +0000

Well, you know what I’m going to say here … but I’ll say it anyway, unanimously (^q8). Like an ugly child, AI won’t live up to my expectations of it (esp. Super-Intelligence) until it can autonomously (zero-shot) design, prototype, verify, and mass-produce a superluminal Alcubierre metric tensor warp drive, relying on Casimir vacuum dark energy, that disproves the Chronology Protection Conjecture (CPC), and so challenges the related 3-letter censorship Agency (CPA; from S. Hawking!)!

When this happens, then yes, I’ll agree that we haven’t lost our collective m&m marbles over ML, and that AI actually has more smarties in its bag than we people have fig Newtons! “Time will tell,” as concluded in this TNP article, particularly because, once future AI figured out time travel for us, I’ve been able to come back here with great precision and edit this comment to accurately reflect the related consequences (or not? ahem!). Note also that time-travel is a known contributing factor for dyslexia … or orthograph-ic wave dispersion (8p^) … exircese cuati!on

I might be the complete cucurbita party-pooping curmudgeon of the gourd family at this temporal juncture, but my opinion will surely change, in the past, with future advances in AI-mediated time travel! (or not?) ^8p 8d^ 8^p

P.S. The GIT study looks interesting, except maybe for the unpronounceable 11-letter Klingon mixed-acronym mouthful: GPT4AIGChip.
