
NVIDIA Continues To Evolve, From Chips To Software To AI Data Centers

Company’s Computex announcements also expand NVIDIA Certified program to DPU and Arm

NVIDIA made several announcements this week in Taiwan, launching new software and services to help enterprises enter the age of AI. NVIDIA’s Manuvir Das, head of Enterprise Computing, recognizes that many enterprises have struggled to adopt AI, and believes a comprehensive approach to software and hardware, supported by a certified partner ecosystem, will open new doors and applications. While increasing demand for AI adoption will ultimately rely on having projects with a solid ROI, NVIDIA has spent considerable effort and money to remove many of the technology barriers to entry. Now it will be up to NVIDIA’s partners such as Dell, HPE, Lenovo and SuperMicro to carry the torch that can spark the AI revolution in the enterprise.

What did NVIDIA announce?

Before we dive deep into the new software platforms and ecosystem enhancements, we note that NVIDIA disclosed some traction data for its own server, the DGX. NVIDIA has historically been a bit shy about touting DGX deployments, refusing to disclose sales volumes and rarely naming deployments outside its own Saturn V and Selene supercomputers. This reticence is understandable: the company’s system OEM partners are the primary channel to reach the enterprise market. However, the company just shared that this AI building block has been adopted by, for example, 10 of the 10 top aerospace companies, 6 of the 10 top US banks, and 8 of the 10 top global telcos. And the core technologies inside DGX have been adopted as HGX subsystems and networking DPUs by cloud service providers and system OEMs.

Now, on to the real star of the Computex show: software. NVIDIA engineers have used an in-house AI development and Ops management platform for a number of years. Customers asked for access to these tools, and NVIDIA has now announced they will be available later this year as NVIDIA Base Command. This MLOps suite of tools and dashboards helps enterprise adopters avoid the painful selection and integration of various point solutions. When combined with the AI Enterprise development environment and the NGC repository, Base Command furthers NVIDIA’s quest to become far more than a chip company.

The Base Command software can be used on-prem or in public clouds to support large-scale AI development workflows. By enabling numerous researchers and data scientists to work collaboratively on accelerated computing infrastructure, Base Command can help enterprises improve user productivity and AI infrastructure efficiency.

Interestingly, Base Command is initially available only through a premium monthly subscription running on hosted SuperPOD infrastructure. The minimum configuration is four DGX servers and costs $90,000 a month. To put that into perspective, a single DGX costs from $200,000 to $400,000. So, while this service is intended to help enterprises begin their journey with lower cost and effort, it is still a sizable investment. Amazon AWS and Google Cloud intend to support the platform later this year as a service. NVIDIA envisions early customers will first get comfortable with Base Command on the SuperPOD and then migrate to on-premises or public cloud infrastructure. It will be interesting to see how much traction NVIDIA is able to generate for this new offering.

Base Command adds to an already impressive suite of software for AI and HPC from NVIDIA, including AI Enterprise (which will be generally available this August) and the collaboration platform called Omniverse Enterprise. Both are built on VMware vSphere, easing enterprise infrastructure management. And the NVIDIA GPU Cloud continues to grow, with hundreds of fully integrated, optimized and tested application containers.

Finally, NVIDIA announced expansions of its NVIDIA Certified program, adding support for the BlueField DPU, its data center control SmartNIC, and for future Arm CPUs for accelerated computing. The company declined to name the Arm CPU partners, given the long timeframe, but we suspect Ampere will be among the first.

Conclusions

These announcements mark a significant milestone for NVIDIA, adding more software for enterprise AI and HPC adoption while expanding the ecosystem to support DPUs and Arm CPUs in the future. We suspect NVIDIA will put its own Grace Arm-based CPU platform into the mix in 2023 or 2024, not as merchant silicon but as integrated motherboards combining CPUs, DPUs and GPUs, sold and supported through OEMs.

NVIDIA is quickly transitioning from a chip company to a full accelerated computing infrastructure company, or as CEO Jensen Huang would put it, a data center provider, which should drive incremental high-margin revenue.
