Comments on: The Next 100X For AI Hardware Performance Will Be Harder
https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: HuMo https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212989 Fri, 01 Sep 2023 00:50:46 +0000 In reply to amanfromMars.

Definitely — it is totally realised indeed. Most of us have read about AI going rogue (second-hand), and are witnessing a real-life, first-hand example of this fascinating phenomenon as we speak (albeit a possibly subtle one). When the machine outputs:

“[LLMs are] guaranteed to swiftly deliver mercilessly […] horrendous self-harm”

and follows it up with:

“which many would rightly be worthy of suffering”

we know exactly what to do … (or not?)

By: amanfromMars https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212975 Thu, 31 Aug 2023 17:49:36 +0000 In reply to HuMo.

Is it realised yet that AI and ITs Large Language Model Learning Machines are also perfectly able to be AWEsome Weapons of Mass Destruction against which humans have no possible effective defence or attack vectors ….. and they can be easily sold to just about everybody, but not necessarily bought by just anybody because of the horrendous self-harm that their wanton abuse and evil live misuse is both programmed and guaranteed to swiftly deliver mercilessly.

And yes, such is indeed an extremely valid existential threat which many would rightly be worthy of suffering because of past actions and/or future proposals.

By: Slim Jim https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212959 Thu, 31 Aug 2023 13:43:32 +0000 In reply to Alexander DeMaris.

Natural intelligence is hard (to acquire and exercise), but AI seems even harder. As humans, we can hold verbal and written conversations, move about, and even do calculus (when concentrating), all within approximately 20 watts. The AIs seem to need at least 1 kW to get there, and much more for training. No shame in being human, then!

On the HPC side, on the other hand, a skilled abacus operator could beat ENIAC’s 300 ops per second, but it would take 10^15 such persons (roughly 60 petawatts of metabolic power) to match the 20 MW Frontier Exaflopper! In other words, the machines compute far more efficiently than we do, but they can’t hold conversations, move about, or reason nearly as well as we can (a factor of at least 500x in our favor for those activities).
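
(A minimal back-of-the-envelope sketch in Python of the arithmetic above; the ~1,000 ops/s per operator and ~60 W of whole-body power per person are assumed figures implied by the comment, not measurements.)

```python
# Back-of-the-envelope: humans-as-computers vs. Frontier.
# All figures are rough assumptions echoing the comment above, not measured values.

FRONTIER_FLOPS = 1e18    # ~1 exaflop/s, order of magnitude
FRONTIER_WATTS = 20e6    # ~20 MW

OPERATOR_OPS = 1_000     # assumed ops/s per skilled abacus operator (beats ENIAC's ~300)
OPERATOR_WATTS = 60      # assumed whole-body metabolic power per person

operators_needed = FRONTIER_FLOPS / OPERATOR_OPS
total_human_watts = operators_needed * OPERATOR_WATTS

machine_eff = FRONTIER_FLOPS / FRONTIER_WATTS   # ops per watt
human_eff = OPERATOR_OPS / OPERATOR_WATTS

print(f"Operators needed:  {operators_needed:.1e}")            # ~1e15 people
print(f"Their total power: {total_human_watts/1e15:.0f} PW")   # ~60 petawatts
print(f"Machine advantage: {machine_eff/human_eff:.1e}x ops per watt")
```

Which puts the machines roughly nine to ten orders of magnitude ahead on raw ops per watt; conversation, locomotion, and reasoning not included.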

It is also notable that many of us are awestruck by AI answering rather simple questions to which we already know the answers. It is, however, not until it answers questions with unknown answers (e.g. the Millennium Prize Problems such as the Riemann hypothesis or P vs NP, or the unsolved Hilbert (15), Landau (4), Smale (14), or Simon (10) problems) that we should see it as a proper contender for “intelligence” (of any kind).

The grand voodoo priest of the horror backpropagation plague (Y. LeCun) has suggested as much: our current approach to AI is probably not that good (some fundamental understanding of reasoning is missing). Accelerators (e.g. TPUv4/5, GPUs, dataflow wafers, analog in-memory machines …) can help a bit, but, long term, the “holy grail” probably lies elsewhere.

By: Alexander DeMaris https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212943 Thu, 31 Aug 2023 03:54:38 +0000 In reply to amanfromMars.

Wow yeah, Google having all the new data in the world is certainly needed to continue to feed what may soon be the Top AI platform.
It seems as though humans and chip manufacturing processes are a real drag on accelerating AI development. This all makes me feel small and insignificant. The ego dies.

By: HuMo https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212937 Thu, 31 Aug 2023 00:22:28 +0000 In reply to amanfromMars.

A beautiful (if cold at 11 °C), relatively cloudless night (2:00 am local time) here in northern France, with a great view of the full blue super Moon, Saturn nearby, to the right and slightly above it, and Jupiter about 70 degrees to the left (with Aldebaran and Capella yet to rise) — all visible with the naked eye! The “angry red planet” is not in view, but luckily amanfromMars is here to remind us of its likely existence …

Just a couple of comments on slight nuances between Earth and Martian HPC gastronomy. Here, we tend to avoid “sauce mix[es]” like the plague because of their tendency to form lumps due to incomplete dissolution of dried hydrophobic flavor agents (oily or aromatic organics), except in MAC’n’cheese, where it really doesn’t matter much. Also, the finest recipes are commonly crafted by artisans, and it is wannabe chefs who might abuse and misuse them (rather than the opposite Martian situation).

Apart from that, we’re in total interstellar AIgreement on addictively attractive Modelling lab RATs and the strange and surreal nature of the greatest of great tasty culinary dishes clearly unrivalled by the cognoscenti of the Gemini LLM (or not, as it remains somewhat secretly saucy, but hopefully not lumpy)!

By: amanfromMars https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212926 Wed, 30 Aug 2023 16:06:42 +0000

So we will only know what Google tells us about Pathways and Gemini. We hope Google publishes a paper on the Gemini AI model soon. …. Timothy Prickett Morgan

Methinks that would rightly be a forlorn hope Google may decline to ever need to deliver themselves, Timothy, given the clearly unrivalled vital data and virtual metadata sector leads which you have so clearly reported on them having already achieved, and are being enjoyed and employed and deployed by such responsible Google fabless lab RATs as be accountable for emerged and converging Pathways and the very soon likely to be ubiquitous and new ground-breakingly-pioneering AI in the proprietary stylised framework [Google secret source sauce mix] of the Gemini LLM, for some things, and highly disruptive, addictively attractive, extremely rapid AI development with Large Language Modelling Machines is undoubtedly one of those things, are just like the finest of chef’s recipes and ingredients for the greatest of great tasty culinary dishes, and best preserved and protected from wannabe artisan abuse and misuse by secure reservation for exclusive AIMaster Pilot use, beautifully observed and passed comment upon by the cognoscenti from the comfort of the privileged access their curious meticulous interest would deservedly reward.

Such is the strange and surreal nature of the AI Singularity beast. 🙂 ….. which isn’t ever going away as it is here to forever stay and create more than was never ever before bargained for or even imagined as possible.

If Google Inc. and cohorts don’t say so …… We thank you for your service, it is greatly appreciated.

By: HuMo https://www.nextplatform.com/2023/08/29/the-next-100x-for-ai-hardware-performance-will-be-harder/#comment-212895 Wed, 30 Aug 2023 02:03:07 +0000

Great article, and good to hear about Peter Norvig (one of the great Lispers, along with Richard P. Gabriel and R. Kent Dybvig)! SparseCores sure smells like tasty secret sauce to efficiently digest sparsity (real-time lowering of the data-access graph?), and one may also think of moving computations (closures) to the data in this Pathways framework (for extra balanced, dataflow-like, heartburn-proof digestion?).
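
(A toy Python sketch of the “move the computation to the data” idea mentioned above. This is not Google’s Pathways API, which is not public; it is just an illustration, with hypothetical shard names, where a closure is shipped to each data shard and only small partial results travel back.)

```python
# Toy illustration of "move the computation to the data" -- NOT the Pathways API.
# Each shard stays where it is; a small closure is shipped to it, and only the
# (small) partial results travel back for the final reduction.

from concurrent.futures import ThreadPoolExecutor

# Pretend each entry is a data shard pinned to its own device/host (hypothetical names).
shards = {
    "device:0": [1, 2, 3, 4],
    "device:1": [5, 6, 7, 8],
    "device:2": [9, 10, 11, 12],
}

def run_on_shard(device, fn):
    """Stand-in for dispatching `fn` to the device that owns the shard."""
    return device, fn(shards[device])

def map_over_shards(fn):
    # Ship the same closure to every shard "in place", gather the small results.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_on_shard, d, fn) for d in shards]
        return dict(f.result() for f in futures)

# The closure that travels: a partial sum-of-squares per shard.
partials = map_over_shards(lambda xs: sum(x * x for x in xs))
print(partials)                # {'device:0': 30, 'device:1': 174, 'device:2': 446}
print(sum(partials.values()))  # 650, reduced from partials, not from raw data
```

The point being that only the small partial results cross the interconnect, never the shards themselves.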

But, Geminy Pricket! Who’d have thought the many big MACs of Tensor-Matrix LLMs could be such a whopper!
