News

NVIDIA’s New AI-Powered Avatars Are Pretty Incredible

NVIDIA’s CEO wants everyone to have a digital twin.

During this week's NVIDIA GTC Conference keynote speech, company CEO Jensen Huang introduced 65 new and updated SDKs that will impact everything from self-driving vehicles and cybersecurity to cloud computing, robotics, and interactive conversational AI avatars.

First up is NVIDIA Omniverse Avatar, a platform for creating real-time conversational AI avatars that can see, speak, converse on a wide range of subjects, and understand your intent as you speak to them. NVIDIA sees its Omniverse Avatar platform assisting a variety of industries, such as food service, banking, and retail, to name a few.

“The dawn of intelligent virtual assistants has arrived,” said Huang, adding, “Omniverse Avatar combines NVIDIA’s foundational graphics, simulation and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far reaching.”

The company also took time during the conference to unveil NVIDIA Omniverse Replicator, a powerful synthetic-data-generation engine that can produce physically simulated virtual worlds perfect for training purposes. This technology, for example, could be used by major companies to safely train employees who work in potentially dangerous conditions.

In an official press release, Rev Lebaredian, vice president of Simulation Technology and Omniverse Engineering at NVIDIA, said, “Omniverse Replicator allows us to create diverse, massive, accurate datasets to build high-quality, high-performing and safe datasets, which is essential for AI. While we have built two domain-specific data-generation engines ourselves, we can imagine many companies building their own with Omniverse Replicator.”

Image Credit: NVIDIA

Part of Huang’s keynote focused on how these new technologies will help transform multi-billion-dollar industries around the globe through Project Maxine, a GPU-accelerated SDK with state-of-the-art AI features that developers can use to build lifelike video and audio effects, along with powerful AR experiences for work, entertainment, education, and social situations.

He also showed how Maxine can be combined with computer vision and AI technology, such as NVIDIA’s Riva speech AI, to create real-time, multi-language conversational avatars. To help drive his point home, Huang showcased how different divisions of NVIDIA were using the technology to build out experiences.

NVIDIA’s Metropolis team, for example, used Maxine to create Tokkio, a talking kiosk that greets you in any language and helps with your food order. NVIDIA’s DRIVE engineers created Concierge, an AI assistant designed for self-driving vehicles. They also used Maxine to help create a virtual toy version of Huang that uses speech synthesis and AI to engage in “real” conversation. In the video provided below, we see an actual human chatting with Huang’s AI-powered avatar in real time. The pair talk about everything from climate change and space exploration to proteins. Again, this isn’t Huang talking; it’s a virtual avatar powered by artificial intelligence.

Not only did NVIDIA’s CEO talk about the company’s new tools, he also discussed the importance of having a digital twin and how virtual worlds will be important for brands in all industries as we grow closer to a legitimate metaverse.

To view Huang’s keynote, click here. NVIDIA’s GTC Conference began on Monday and runs through November 11th.

Feature Image Credit: NVIDIA

About the Scout

Bobby Carlton

Hello, my name is Bobby Carlton. When I'm not exploring the world of immersive technology, I'm writing rock songs about lost love. I'd also like to mention that I can do 25 push-ups in a row.
