
NVIDIA’s big news: Cloud XR goes to AWS, Omniverse open beta is starting, and more!

Today at GTC 2020, NVIDIA released three very interesting pieces of news that we in the XR sector should care about:

  • New enterprise NVIDIA RTX A6000 and NVIDIA A40 graphics cards have been released;
  • Cloud XR servers are now available on AWS;
  • Omniverse, the collaboration tool for artists, enters open beta.

Let me briefly describe them all, and tell you why they are so important!

New enterprise NVIDIA RTX A6000 and NVIDIA A40 graphics cards

NVIDIA RTX A6000 (Image by NVIDIA)

We all know that the new Ampere architecture gives NVIDIA GPUs much more horsepower, especially for AI and ray tracing. The new RTX 30 cards released to consumers have been a blast (apart from a problematic launch), and now it was time for NVIDIA to upgrade its enterprise offering as well.

For this reason, NVIDIA has released two new graphics cards:

  • NVIDIA RTX A6000, dedicated to prosumers and enterprises that want to render very complex scenes on their workstations;
  • NVIDIA A40, dedicated to data centers that want to use it for remote rendering and AI computations.

All the initial partners that have tested these cards have given positive feedback, also because these cards are much more powerful than their predecessors. KPF, for instance, stated that the new RTX A6000 let them easily double or triple the resolution of their skyscraper renderings. I think that these cards could also significantly boost the performance of enterprise VR experiences.

NVIDIA A40 (Image by NVIDIA)

These are the key advantages of the new cards, according to NVIDIA:

The NVIDIA RTX A6000 and NVIDIA A40 deliver enhanced performance with groundbreaking technology, including:
● Second-Generation RT Cores: Delivers up to 2x the throughput of the previous generation, plus concurrent ray tracing, shading and compute.
● Third-Generation Tensor Cores: Provides up to 5x the throughput of the previous generation, up to 10X with sparsity, with support for new TF32 and BF16 data formats.
● New CUDA Cores: Delivers up to 2x the FP32 throughput of the previous generation for significant increases in graphics and compute.
● 48GB of GPU Memory: The largest memory available in a single GPU, expandable to 96GB using NVLink to connect two GPUs.
● Virtualization: With the addition of NVIDIA virtual GPU software such as the Quadro Virtual Workstation we can support graphics workloads and powerful virtual workstation instances at scale for remote users, enabling larger workflows for high-end design, AI, and compute workloads.
● PCIe Gen 4: Provides twice the bandwidth of the previous generation, accelerating data transfers to the GPU for data-intensive workloads like data science, hybrid rendering and video streaming in PCIe Gen 4-enabled servers and workstations such as the Lenovo ThinkStation P620.

If you are a fan of specifications, here are the ones of the RTX A6000:

  • GPU Memory: 48 GB GDDR6 with error-correcting code (ECC)
  • Display Ports: 4x DisplayPort 1.4*
  • Max Power Consumption: 300 W
  • Graphics Bus: PCI Express Gen 4 x16
  • Form Factor: 4.4” (H) x 10.5” (L) dual slot
  • Thermal: Active
  • NVLink: 2-way low profile (2-slot and 3-slot bridges), connects 2 RTX A6000
  • vGPU Software Support: NVIDIA GRID®, NVIDIA Quadro® Virtual Data Center Workstation, NVIDIA Virtual Compute Server
  • vGPU Profiles Supported: 1 GB, 2 GB, 3 GB, 4 GB, 6 GB, 8 GB, 12 GB, 16 GB, 24 GB, 48 GB
  • VR Ready: Yes

And here are the ones of the A40:

  • GPU Memory: 48 GB GDDR6 with error-correcting code (ECC)
  • GPU Memory Bandwidth: 696 GB/s
  • Interconnect: NVIDIA NVLink 112.5 GB/s (bidirectional); PCIe Gen 4 16 GB/s
  • NVLink: 2-way low profile (2-slot)
  • Display Ports: 3x DisplayPort 1.4*
  • Max Power Consumption: 300 W
  • Form Factor: 4.4″ (H) x 10.5″ (L) dual slot
  • Thermal: Passive
  • vGPU Software Support: NVIDIA GRID®, NVIDIA Quadro® Virtual Data Center Workstation, NVIDIA Virtual Compute Server
  • vGPU Profiles Supported: 1 GB, 2 GB, 3 GB, 4 GB, 6 GB, 8 GB, 12 GB, 16 GB, 24 GB, 48 GB
  • NVENC | NVDEC: 1x | 2x (includes AV1 decode)
  • Secure and Measured Boot with Hardware Root of Trust: CEC 1712
  • NEBS Ready: Level 3
  • Power Connector: 8-pin CPU

Systems with these cards pre-installed are coming soon via OEMs and NVIDIA’s partners:

  • A wide range of NVIDIA RTX A6000-based workstations are expected from the world’s leading systems manufacturers, including BOXX, Dell, HP and Lenovo.
  • A wide range of NVIDIA A40-based servers are expected from the world’s leading systems manufacturers, including Cisco, Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo.

Regarding the availability of these cards, let me quote NVIDIA again:

The NVIDIA RTX A6000 will be available from channel partners including PNY, Leadtek, Ingram Micro, Ryoyo and on nvidia.com starting in mid December. The A6000 and A40 will be available from OEM workstation and server vendors worldwide starting early next year. Check with OEM vendors for details on availability. Support for NVIDIA virtual GPU software, including Quadro Virtual Workstation, will be available early next year.

If you want to know more, you can refer to the official pages of the A6000 and A40.

Cloud XR lands on AWS

This is, in my opinion, massive news, not for the short-term present but for the mid- and long-term future: NVIDIA has partnered with AWS and is going to release AWS virtual machines with Cloud XR installed and configured out of the box.

Cloud XR is NVIDIA’s solution for AR and VR cloud rendering. It is supposed to work over 5G and/or Wi-Fi 6, and it lets you use a simple headset to show high-quality VR content that has been rendered in the cloud. This means that you can have a basic standalone headset and show very graphically intensive content (like Half-Life: Alyx), because everything gets rendered in the cloud. Imagine it like Virtual Desktop, but with the rendering coming not from your PC, but from an external server via 5G.

Cloud XR is already available to selected developers that apply on NVIDIA’s portal, but if you want to install it in the cloud or on your on-premises server, you have to do everything yourself (it is pretty simple, though). Now, thanks to this move, you don’t have to worry about anything at all: you can find on AWS a machine already configured with Cloud XR, already with the right GPU, drivers, CPU, etc… It will be available on Amazon EC2 P3 and G4 instances, which feature NVIDIA V100 and T4 GPUs. You just have to select a Cloud XR instance in your AWS panel, turn it on, and pay for its usage to have cloud rendering for you and your collaborators. This is a huge step toward making cloud rendering easier and more popular. But don’t expect it to be cheap… machines for remote rendering are relatively expensive.
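To give an idea of how simple the setup could become, here is a minimal sketch of launching such an instance programmatically with boto3 (the official AWS SDK for Python). The AMI ID is a placeholder, and the exact name of the CloudXR machine image is an assumption on my part, since NVIDIA hasn’t published those details yet; the instance families are the P3 and G4 ones mentioned above.

```python
# Hypothetical sketch: requesting a CloudXR-ready EC2 instance.
# The AMI ID used below is a PLACEHOLDER, not a real CloudXR image.

def cloudxr_instance_request(ami_id, instance_type="g4dn.xlarge",
                             key_name="my-keypair"):
    """Build the keyword arguments for boto3's ec2.run_instances() call."""
    # CloudXR on AWS is announced for EC2 P3 (V100) and G4 (T4) instances
    if not instance_type.startswith(("g4", "p3")):
        raise ValueError("CloudXR is announced for G4 and P3 instance families")
    return {
        "ImageId": ami_id,              # placeholder CloudXR AMI
        "InstanceType": instance_type,  # g4dn instances carry NVIDIA T4 GPUs
        "KeyName": key_name,
        "MinCount": 1,
        "MaxCount": 1,
    }

params = cloudxr_instance_request("ami-0123456789abcdef0")
# To actually launch it, you would run:
#   boto3.client("ec2").run_instances(**params)
print(params["InstanceType"])
```

After that, you would connect your headset’s CloudXR client to the instance’s public IP, and the billing would simply follow EC2 per-hour pricing for the chosen GPU instance.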

I’ve asked NVIDIA about Azure support, and they answered that for now they are focusing on AWS, but they plan to collaborate with all major vendors. I think one of the reasons is that Azure already has its own remote rendering solution.

Cloud XR visual (Image by NVIDIA)

The most skeptical of you may wonder: “What about latency?” If Virtual Desktop already adds latency at home, what can you expect from a server somewhere in the cloud? The official answer NVIDIA gave us is that “In the US, the closest AWS server selected by the system was already enough to have good performances for VR Cloud Rendering”. My personal take is that I can hardly believe it, unless you are streaming to a remote PC VR setup (so no Wi-Fi in the middle) connected via fiber to a network close to the AWS servers. I think the key is what you define as “acceptable” latency. To have effective cloud XR rendering, we need edge servers very close to where the VR experiences are enjoyed, very performant networks, 5G, and Wi-Fi 6.
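To make this reasoning concrete, here is a back-of-the-envelope motion-to-photon budget. All the numbers are my own illustrative assumptions, not measured CloudXR figures, but they show why the distance to the server dominates:

```python
# Rough motion-to-photon estimate for cloud VR rendering (milliseconds).
# Every number here is an ASSUMED, illustrative value.

def motion_to_photon_ms(network_rtt, encode, decode, render, display=11.1):
    """Sum the main latency contributors; 11.1 ms is one frame at 90 Hz."""
    return network_rtt + encode + decode + render + display

# A nearby edge server reached over wired fiber:
edge = motion_to_photon_ms(network_rtt=5, encode=5, decode=5, render=11)
# A distant AWS region reached over home Wi-Fi:
far = motion_to_photon_ms(network_rtt=40, encode=5, decode=5, render=11)

COMFORT_BUDGET_MS = 20  # a commonly cited target for comfortable VR
print(edge, far)
```

Even the optimistic edge-server case lands well above the 20 ms comfort target, which is why techniques like pose prediction and reprojection on the headset are needed to hide the remaining latency, and why the distant-region case is simply not workable today.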

Of course this will take time, but it is cool that NVIDIA is experimenting with it, that some partners are already using it, and that setting up a Cloud XR system is now easier than ever. In the long run, when all the pieces of the puzzle come together, this will be huge.

NVIDIA CloudXR on AWS will be generally available early next year, with a private beta available in the coming months.

For more info, you can refer to NVIDIA’s official post on the matter.

NVIDIA Omniverse is coming in open beta

An architecture project made with Omniverse (Image by NVIDIA)

This is another piece of huge news for all the teams collaborating remotely: NVIDIA Omniverse is entering open beta. You may wonder what it is and why we should care… and honestly, I was in your exact situation when NVIDIA, during the presentation, pitched Omniverse to us as “the Google Docs for artists”. But then I took some time to analyze the matter, and when I understood, I was in awe.

Basically, it is like Google Docs for artists. You can see it in action through the video that I’ve rip…ehm, borrowed directly from the NVIDIA website and embedded here below. In the video, there are three different artists working on the same project: one is using Unreal Engine to work on the 3D scene and its interactions, another is working on materials and textures in what looks like Substance Painter, and the third is making 3D models in 3D Studio Max. They have to work on the same project, and the traditional workflow is that everyone works on a different part and sends the files he/she has created to the other collaborators, maybe with the usual names FINAL, FINAL2, FINAL_FINAL, etc… for every modification. Then one integrator assembles everything into the same project. Or they can all use Git, but it is mostly made for source code, so it works pretty badly with visual assets.

Using Omniverse, everything works like black magic. Look at the video: they are collaborating on the same project, together, remotely, and all the changes made by one are immediately reflected both in the common project (right) and in the applications of everyone else working on their own part of the experience (left):

How is this possible? Basically, Omniverse features various modules that make it work:

  • Omniverse Connect: a series of plugins that connect Omniverse to various applications used for development: Unity, Unreal Engine, Photoshop, 3D Studio, Sketchup, and soon also Blender. You install the plugin and it integrates Omniverse into your application. The plugin converts what you’re working on into the USD format (created by Pixar), which can not only describe your scene in a common format, but also preserve some proprietary data of the application you’re working in. Every time you make an update, the plugin streams the new scene in USD format, and every time an update arrives from the outside, you receive the modifications in USD format, and the elements you are working on get updated from the remote side. This way, the synchronization works among all the users;
Omniverse Connect connects the application you’re using with the Nucleus. You can also write your plugins to connect your personal application to it (Image by NVIDIA)
  • Nucleus: it works as the glue for all the plugins described above. It lets people subscribe to a project, lets them push updates, decides which updates must be sent to whom, and keeps everyone’s view of the project coherent. It is the core of Omniverse, the part that makes things work: it has an internal database that stores all the assets of the project, who is modifying them, and who has to be notified of their modifications. One super cool thing to keep in mind is that modifications are sent as tiny differential updates: if, for instance, I’m working on a big house and I’ve just changed the material of a chair, Nucleus won’t send the USD of the whole house to everyone, but will just send the instruction to change the chair that I’ve touched. This way, working together is more efficient;
  • Omniverse Kit: a toolkit for building native Omniverse applications and microservices. It is built on a base framework that provides a wide variety of functionality through a set of lightweight extensions. These stand-alone extensions are plugins authored in Python or C++;
  • RTX and simulation engines: with Omniverse, you can work together on projects that exploit the Omniverse engine, which lets you render beautiful experiences through RTX graphics, or run AI and physics simulations (via the PhysX engine) on the USD scenes you are creating together.
Physics simulation with Omniverse
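To visualize the differential-update idea described above, here is a toy sketch in pure Python. This is not the real Omniverse API: the class name, the flat-dict scene representation, and the callback mechanism are all my own simplifications for illustration.

```python
# A toy sketch of Nucleus-style differential updates (NOT the real
# Omniverse API). The scene is a flat dict mapping asset paths to
# attributes; only the entries that changed are broadcast to subscribers.

def diff(old_scene, new_scene):
    """Return only the entries that changed between two scene snapshots."""
    return {path: attrs for path, attrs in new_scene.items()
            if old_scene.get(path) != attrs}

class ToyNucleus:
    def __init__(self):
        self.scene = {}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def push(self, new_scene):
        update = diff(self.scene, new_scene)  # compute the delta
        self.scene = dict(new_scene)
        for notify in self.subscribers:       # everyone gets only the delta
            notify(update)
        return update

nucleus = ToyNucleus()
received = []
nucleus.subscribe(received.append)

house = {"/house/walls": {"material": "brick"},
         "/house/chair": {"material": "wood"}}
nucleus.push(house)

# An artist changes only the chair's material...
house["/house/chair"] = {"material": "leather"}
update = nucleus.push(house)
# ...so only the chair entry travels over the wire, not the whole house.
print(update)
```

The real system does this on USD layers with much finer-grained deltas, but the principle is the same: the smaller the update, the more collaborators can work on the same scene without saturating the network.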

Omniverse can work on-premises or in the cloud. The list of beta users includes Pimax, ILMxLAB, Autodesk, and other premium brands.

I personally find it amazing: having a way for multiple people to work on the same scene together is a massive boost for the creative workflow. Git always creates problems with artists (I know it well), and Omniverse could be the right answer. What scares me is the price: this is a premium enterprise solution, and we know how expensive enterprise solutions can be, so I’m afraid small studios won’t be able to exploit this innovation. Anyway, let’s see.

You can sign up for the Omniverse open beta program at nvidia.com/omniverse. It will be available for download this fall.


And that’s it! Which piece of news excites you the most? Let me know in the comments! (And don’t forget to subscribe to my newsletter)

(Header image by NVIDIA)


Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.
