
Nvidia spans AI from $59 Jetson Nano robots to massive datacenters on a chip

Nvidia CEO Jensen Huang delivered the GPU Technology Conference (GTC) keynote address this morning to spotlight the company’s platforms that span all of computing, from a $59 Jetson Nano robot brain to massive data processing units that are part of its datacenter-on-a-chip strategy.

Nvidia’s big digital event is expected to draw 30,000 attendees for more than 1,000 sessions timed to accommodate a worldwide audience. The event also coincides with the digital Arm DevSummit, which starts tomorrow with a keynote chat between Huang and Arm CEO Simon Segars. (Nvidia recently agreed to buy Arm for $40 billion.) The whole effort is aimed at winning over the hearts and minds of more than 15 million developers worldwide.

Huang said Nvidia has shipped more than a billion graphics processing units (GPUs) to date and its CUDA software development kit has had 6 million downloads in 2020. He also said Nvidia has 80 new SDKs available today and 1,800 GPU-accelerated applications. And he called the company’s new Ampere GPUs the “fastest ramp in our history.”

DPUs and DOCA


Above: COVID-19 simulation in Omniverse.

Image Credit: Nvidia

Among the event’s announcements is a new chip aimed at making it easier to run cloud-based datacenters. Head of enterprise computing Manuvir Das introduced the chip, called the Nvidia BlueField-2, in a press briefing. Nvidia described it as a data processing unit (DPU) akin to the GPU that gave the company its start in computing.

DPUs are a new kind of chip that combines Nvidia’s chip technology with the networking, security, and storage technology the company gained through its $7.5 billion acquisition of Mellanox in 2019. The Nvidia BlueField-2 provides accelerated datacenter infrastructure services in which central processing units (CPUs), GPUs, and DPUs work together to deliver a computing unit that is AI-enabled, programmable, and secure, Das said.

“This is really about the future of enterprise computing, how we see servers and datacenters being built going forward for all workloads, not just AI-accelerated workloads,” Das said. “We really saw a move toward software-defined datacenters, where more of the infrastructure that was previously built as fixed-function hardware devices has been converted into software that deploys on every application server.”

Traditional servers have separate CPUs and acceleration engines for various tasks. With the DPU-accelerated server, Nvidia combines them into a more seamless group of services the company refers to as a datacenter-on-a-chip. The BlueField-2 DPU has eight 64-bit Arm Cortex-A72 cores, plus dedicated accelerators for security, networking, and storage processing.

Above: The Nvidia BlueField-2 DPU.

Image Credit: Nvidia

The BlueField-2X adds an Nvidia Ampere GPU and could be used for things like anomaly detection and automated responses, real-time traffic analysis that doesn’t slow that traffic, malicious activity identification, dynamic security, and online analytics of uploaded videos.

“Nvidia is now introducing a new concept that we refer to as the data processing unit, or the DPU, which goes along with the CPU and the GPU,” Das said. “This lets us have the best of breed servers going forward. It’s really taking all of that software-defined infrastructure and putting it on a chip that is in the same server. We believe that the DPU belongs in every server going forward, regardless of the application workload running there.”

Nvidia said a single BlueField-2 DPU can deliver the same datacenter services that might otherwise consume up to 125 CPU cores, freeing those cores to run a wide range of other enterprise applications. The BlueField-2 can handle 0.7 trillion operations per second (TOPS), while the BlueField-2X with its Ampere GPU can do 60 TOPS. By 2022, Nvidia estimates the BlueField-3X will hit 75 TOPS, and by 2023 the BlueField-4 is expected to hit 400 TOPS, roughly 600 times the BlueField-2.
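As a quick back-of-the-envelope check of those roadmap figures (a sketch that assumes the comparison is simply the ratio of raw TOPS between generations, ignoring architectural differences):

```python
# Rough sanity check of the quoted BlueField roadmap figures.
# Assumes the "roughly 600 times" claim is a simple ratio of raw TOPS.
bluefield_tops = {
    "BlueField-2": 0.7,   # shipping DPU
    "BlueField-2X": 60,   # adds an Ampere GPU
    "BlueField-3X": 75,   # estimated for 2022
    "BlueField-4": 400,   # estimated for 2023
}

baseline = bluefield_tops["BlueField-2"]
for name, tops in bluefield_tops.items():
    print(f"{name}: {tops} TOPS ({tops / baseline:.0f}x BlueField-2)")
# BlueField-4 works out to about 570x the BlueField-2, in line with the
# "roughly 600 times" characterization above.
```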

Server manufacturers that are adopting the DPUs include Asus, Atos, Dell Technologies, Fujitsu, Gigabyte, H3C, Inspur, Lenovo, Quanta/QCT, and Supermicro. Software partners include VMware, Red Hat, Canonical, and Check Point Software Technologies.

EGX AI platform

Above: Nvidia EGX AI Fleet Command.

Image Credit: Nvidia

Nvidia said its EGX AI platform — which will use the BlueField-2 DPU and the Nvidia Ampere GPU on a single computing card — is getting a refresh. The platform has already seen widespread adoption by tech companies for use in enterprises and edge datacenters.

Nvidia is positioning the EGX AI platform as a building block for accelerated datacenters. Systems based on the Nvidia EGX AI platform are available from server manufacturers — including Dell Technologies, Inspur, Lenovo, and Supermicro — with support from software infrastructure providers such as Canonical, Cloudera, Red Hat, SUSE, and VMware, as well as hundreds of startups.

“AI in the past couple of years has moved from being exclusively in the cloud now to the edge,” said edge computing VP Deepu Talla in a press event. “There’s an enormous amount of processing that needs to be done at the point of action. We have to bring datacenter capabilities to the point of action, and Nvidia EGX AI is the answer.”

Talla said manufacturing, health care, retail, logistics, agriculture, telco, public safety, and broadcast media will benefit from the EGX AI platform, as it makes it possible for organizations of all sizes to quickly and efficiently deploy AI at scale.

Rather than having 10,000 servers in one location, Nvidia believes future enterprise datacenters will have one or more servers across 10,000 different locations, including inside office buildings, factories, warehouses, cell towers, schools, stores, and banks. These edge datacenters will help support the internet of things (IoT).

To simplify and secure the deployment and management of AI applications and models on these servers at scale, Nvidia announced an early access program for a new service called Nvidia Fleet Command. The hybrid cloud platform combines the security and real-time processing capabilities of edge computing with the remote management and ease of use of software-as-a-service.

Among the first companies provided early access to Fleet Command is Kion Group, a supply chain company using the tech in its retail distribution centers. Northwestern Memorial Hospital in Illinois is also using Fleet Command for its IoT sensor platform.

Nvidia Jetson Nano mini AI computer

Above: Nvidia Jetson Nano 2GB is $59.

Image Credit: Nvidia

Nvidia also showed the latest version of its Jetson AI at the Edge robotics platform, which now starts at $59 and is targeted at students, educators, and robotics hobbyists.

The $59 entry point is the Jetson Nano 2GB Developer Kit, which comes with free online training and certification. It is designed for teaching and learning AI through hands-on projects in areas such as robotics and the intelligent internet of things. It will be available at the end of the month through Nvidia’s distribution channels.
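For a sense of the hands-on projects the kit is aimed at, the sketch below shows a minimal camera-based object detection loop in the style of Nvidia’s “Hello AI World” tutorials. It assumes the open source jetson-inference Python bindings are installed on the Nano and a camera is attached; the model name, camera URI, and helper calls follow that project and may vary by release.

```python
# Minimal object detection loop on a Jetson Nano 2GB (sketch).
# Assumes the jetson-inference / jetson-utils Python bindings and a camera.
import jetson.inference
import jetson.utils

# Load a pretrained SSD-MobileNet-v2 detector (weights download on first use).
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Open the camera ("csi://0" for a ribbon camera, "/dev/video0" for USB)
# and a display window for the annotated output.
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()            # grab the next frame from the camera
    detections = net.Detect(img)      # run inference and overlay bounding boxes
    display.Render(img)               # show the annotated frame
    display.SetStatus("{:d} objects | {:.0f} FPS".format(
        len(detections), net.GetNetworkFPS()))
```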

In March 2019, Nvidia announced a $99 version of the kit, using earlier chips. More than 700,000 developers are now using that kit, Talla said, calling it “the ultimate robotics and AI starter kit.”

Nvidia RTX A6000 and Nvidia A40

Above: Nvidia RTX A6000

Image Credit: Nvidia

Nvidia also announced that its Ampere-based RTX A6000 workstation GPU will replace the Turing-based Quadro family, and the company has a new Nvidia A40 that is a passively cooled version of the same chip. Both GPUs will be widely available in early 2021, and the RTX A6000 will also be available from channel partners in mid-December.

Professional visualization VP Bob Petty said in a press briefing that the workstations will enable professionals to get more work done by creating immersive projects with photorealistic images and videos — whether those wind up in movies, games, or virtual reality experiences. The workstations can be used at engineers’ homes, or professionals can log into datacenters from their homes and use them over the cloud.

“With the pandemic and economic uncertainty, it really drives up the need for even more efficiency in how professionals work,” Petty said. “They need even more automation in what they are doing, and they need to spend less time getting products to market faster.”

Architecture firm Kohn Pedersen Fox Associates has been using the Nvidia RTX A6000 to triple resolution and accelerate real-time visualization for its complex building models. Special effects company Digital Domain has been using real-time ray tracing and machine learning to create digital humans for films. And Groupe Renault is using the chips to design cars.

Lastly, Nvidia professional virtual reality director David Weinstein said the company will enable CloudXR on Amazon Web Services, letting virtual reality (VR), augmented reality (AR), and mixed reality experiences, collectively known as extended reality (XR), be streamed to headsets via the cloud. This means professionals can engage with cloud-based immersive XR experiences and won’t be tethered to an expensive workstation.

Weinstein noted that more than 10 million VR headsets have been sold worldwide, and he said the acceleration in recent months has been dramatic. Car dealerships are just one example of an application for this technology, with a dealer having one real car on hand for people to check out and many versions available for viewing on VR headsets. Data streamed from the cloud could allow people to see what the car would look like with different options, Weinstein said.

Nvidia has CloudXR partners in companies such as electric car maker Lucid Motors, the Gettys Group, and Theia Interactive. With the CloudXR SDK and Nvidia Quadro Virtual Workstation software, partners can engage in remote XR. CloudXR on AWS will be available early next year, with a private beta coming within months.

“You can stream the same rich graphics from a datacenter down the hall or across a campus or even from the cloud,” Weinstein said.
