CIOs with major AI ambitions will see value in the GPU maker’s new offerings: the new Blackwell hardware architecture and a software package that optimizes inference for dozens of popular AI models.
GPU powerhouse Nvidia has bet its future on AI, and a handful of recent announcements focus on pushing the technology’s capabilities forward while making it available to more organizations.
During its GPU Technology Conference in mid-March, Nvidia previewed Blackwell, a powerful new GPU designed to run real-time generative AI on trillion-parameter large language models (LLMs), and Nvidia Inference Microservices (NIM), a software package to optimize inference for dozens of popular AI models.
Nvidia founder and CEO Jensen Huang also made several other announcements, including a new humanoid robot project, during his two-hour keynote speech.
For CIOs deploying a simple AI chatbot or an AI that summarizes Zoom meetings, for example, Blackwell and NIM may not be groundbreaking developments, because lower-powered GPUs, as well as CPUs, are already available to run small AI workloads. However, CIOs looking for the computing power needed to train AIs for specific uses, or to run huge AI projects, will likely see value in Blackwell.
And if the Blackwell specs on paper hold up in reality, the new GPU gives Nvidia AI-focused performance that its competitors can’t match, says Alvin Nguyen, a senior analyst covering enterprise architecture at Forrester Research.
“They basically have a comprehensive solution from the chip all the way to data centers at this point,” he says.
Unmatched power for AI
For CIOs with AI aspirations, the Blackwell announcement signals the ability to experiment with superchips or dedicated servers, Nguyen adds. Blackwell will let enterprises with major AI needs deploy so-called superpods, another name for AI supercomputers, and it will let enterprises with very deep pockets set up AI factories made up of integrated compute resources, storage, networking, workstations, software, and other pieces.
The case for Blackwell is clear, adds Shane Rau, research VP for semiconductors at IDC. As AI models get larger, they’ll require more performance for training and inferencing, the process that a trained AI uses to draw conclusions from new data, he says.
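To make that inference step concrete, the following is a minimal sketch, assuming the Hugging Face transformers library and an illustrative off-the-shelf sentiment model, of a trained model drawing conclusions from data it has never seen:

    # A minimal sketch of inference: an already-trained model draws conclusions from new data.
    # Assumes the Hugging Face transformers library; the checkpoint name is illustrative.
    from transformers import pipeline

    # Load an off-the-shelf, pre-trained sentiment model (no training happens here).
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    # Inference: apply the trained model to text it has never seen before.
    print(classifier("Our GPU cluster finished the nightly training run ahead of schedule."))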
As LLM AIs trained in 2023 are deployed, “CIOs will learn what works and what doesn’t, and so a retrain and redeployment cycle will begin,” Rau says. “Thus, the need for Blackwell should be strong.”
If organizations aren’t training their own LLMs, the AI case for Blackwell is highly dependent on their industry verticals and internal workflows, Rau adds. “The more application-specific the workload they have and the fewer resources they can bring to bear, the longer they’ll have to wait for AI solution stack and AI model standardization,” he says.
NIM, Nvidia’s software package for optimizing inference on dozens of popular AI models, should also gain traction in the market, because many companies won’t be able to train AIs for their own purposes, Rau says.
“Not everyone has the resources to train and deploy AI models at scale, nor do folks want to buy general models when all they need is a model specific to their identified workloads,” he says. “So, pre-trained models and run-time models made off-the-shelf for IT folks to buy and maybe tune a little bit, will be necessary for AI to scale across enterprises and across the internet.”
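As a rough, hedged sketch of what consuming such an off-the-shelf model could look like, the example below calls a locally hosted inference microservice through an OpenAI-compatible client; the endpoint URL, model name, and credential are placeholders rather than confirmed details of NIM:

    # A hedged sketch of calling a locally hosted inference microservice, assuming it
    # exposes an OpenAI-compatible API. The base URL, model name, and key are placeholders,
    # not confirmed details of Nvidia's NIM offering.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed local microservice endpoint
        api_key="placeholder-key",            # placeholder credential
    )

    response = client.chat.completions.create(
        model="example-llm",  # hypothetical model name served by the microservice
        messages=[{"role": "user", "content": "Summarize yesterday's support tickets."}],
    )
    print(response.choices[0].message.content)

The point of the pattern is that the model arrives pre-trained and pre-packaged; the IT team’s work is limited to pointing applications at the service and, at most, light tuning.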
Beyond AI, the Blackwell GPU will have uses for CIOs with other heavy computational needs, says Javier Muniz, CTO at LCC Attorney, a legal website.
“GPUs like Blackwell could revolutionize the fields of data analytics, 3D modeling, cryptography, and even advanced web rendering — areas where processing speed and power are crucial,” he says. “In terms of benefits for CIOs, many have vast data sets that need to be processed and analyzed. GPUs can drastically reduce the time taken for these computations.”
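As a small illustration of that point, the sketch below uses the CuPy library, one of several GPU-accelerated options and an assumption here rather than anything Muniz names, to run analytics-style computations on a GPU; the array sizes are arbitrary:

    # A minimal sketch of GPU-accelerated analytics, assuming the CuPy library and an
    # available Nvidia GPU. Array sizes are arbitrary and chosen only for illustration.
    import cupy as cp

    # Generate a large synthetic dataset directly in GPU memory.
    data = cp.random.rand(10_000, 1_000)

    # Typical analytics-style operations run on the GPU rather than the CPU.
    column_means = data.mean(axis=0)
    covariance = cp.cov(data, rowvar=False)

    # Copy a small summary back to host memory for inspection.
    print(cp.asnumpy(column_means[:5]))
    print(covariance.shape)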
Does AI scale?
Nvidia’s Huang touted the benefits of GPU-powered accelerated computing, for AI and other purposes, during his keynote, saying that general-purpose computing has “run out of steam.” But Nvidia’s many announcements during the conference didn’t address a handful of ongoing challenges on the hardware side of AI.
The GPU market is still recovering from supply shortages driven by several factors, including high demand from cryptocurrency miners and AI projects. Nvidia’s lead in the high-end AI GPU market should concern companies focused on AI projects, adds Subutai Ahmad, CEO of Numenta, a provider of a CPU-based AI scaling platform that potentially competes with Nvidia GPUs.
“The dominance of Nvidia combined with the shortages of GPUs and GPU parts mean that CIOs must look for alternatives,” Ahmad says. “They can’t be single sourced and then left high and dry on their AI initiatives.”
Meanwhile, in his keynote, Huang talked about the need to scale computing power in order to drive down the cost of computing while remaining sustainable. But AI still has a scaling problem, says Forrester’s Nguyen: so far, the costs and power needs of AI don’t diminish incrementally as enterprises add users or workloads.
Companies can now throw more GPUs at an AI workload, but that’s not viable over the long run, Nguyen says.
“The problem is that once you get that first 100 users and you add another 100, is it getting cheaper, and the requirements getting smaller?” he asks. “The answer is, not yet.”
While Nvidia and other hardware providers highlight their growing capabilities, only hyperscalers can currently afford AI factories and the highest-performing LLMs, Nguyen adds. “You can have effective basic performance, but you still have that long-term scalability issue,” he says.