AMD Launches Open Source Ray Traced VR Audio Tech ‘TrueAudio Next’

AMD has announced TrueAudio Next, a “scalable” physics-based audio rendering engine for generating environmentally accurate, GPU-accelerated audio for virtual reality.

AMD has announced a key new addition to the open source technology arsenal represented by GPUOpen, this time in the field of immersive VR audio. TrueAudio Next, AMD claims, provides “real-time dynamic physics-based audio acoustics rendering”, meaning any soundscape can now be modelled physically, taking reflection and occlusion into account.

With GPUOpen and LiquidVR, AMD continues to pitch its tent in the open source camp, a reaction to its main rival NVIDIA, whose GameWorks VR (now known as VRWorks) initiative focuses largely on proprietary technologies locked to the company’s GPU hardware and drivers – i.e. things that will only work if you develop for and buy its graphics cards.

“We are excited about the potential of TrueAudio Next,” says Sasa Marinkovic, Head of VR and Software Marketing at AMD. “It enables developers to integrate realistic audio into their VR content in order to achieve their artistic vision, without compromise. Combining this with AMD’s commitment to work with the development community to create rich, immersive content, the next wave of VR content can deliver truly immersive audio – that will sound and feel real.”

AMD’s GPUOpen and LiquidVR are equivalent to NVIDIA’s GameWorks and VRWorks in that both provide frameworks upon which to build VR games. The difference with the former is that, as a developer, if you want to poke around in the source to work out why something works a certain way, you can download it straight from GitHub.

See Also: Nvidia’s VRWorks Audio Brings Physically Based 3D GPU Accelerated Sound

AMD’s TrueAudio Next is built atop Radeon Rays (formerly AMD FireRays), the company’s highly efficient, GPU-accelerated ray tracing software. Traditionally used for graphics rendering, ray tracing also has important uses in the world of audio, modelling the physical properties of a virtual environment and the sound that resonates within it. This means more accurate, realistic sound for virtual reality applications and games and, in theory, a better chance of achieving psychological immersion.
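To make the idea concrete, here is a minimal sketch of acoustic ray tracing, written in Python purely for illustration – it is not the Radeon Rays or TrueAudio Next API, and every name and number in it is an assumption. Each ray loses a fraction of its energy at every surface bounce and arrives at the listener with a delay proportional to the distance travelled; summing the arrivals approximates the kind of impulse-response data a convolution reverb consumes.

```python
import random

SPEED_OF_SOUND = 343.0  # metres per second

def trace_audio_rays(num_rays=1000, max_bounces=4,
                     absorption=0.3, mean_free_path=5.0):
    """Return (delay_seconds, energy) arrivals at the listener.

    absorption: fraction of energy a surface soaks up per bounce.
    mean_free_path: average distance (m) a ray travels between bounces.
    Both values are illustrative assumptions, not measured data.
    """
    arrivals = []
    for _ in range(num_rays):
        energy = 1.0 / num_rays          # each ray carries an equal share
        distance = 0.0
        for _ in range(max_bounces):
            # Distance to the next surface, drawn from an exponential
            # distribution around the room's mean free path.
            distance += random.expovariate(1.0 / mean_free_path)
            energy *= 1.0 - absorption   # the surface absorbs some energy
            arrivals.append((distance / SPEED_OF_SOUND, energy))
    return arrivals

arrivals = trace_audio_rays()
latest = max(delay for delay, _ in arrivals)
total = sum(energy for _, energy in arrivals)
print(f"{len(arrivals)} arrivals over {latest:.3f} s, total energy {total:.3f}")
```

Binning those arrivals by delay gives a crude energy decay curve; the point of GPU acceleration, which TrueAudio Next applies against real scene geometry and materials, is to make this kind of work feasible in real time as sources and the listener move.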

  • Raphael

    I disagree.

    • Jad

      No one cares.

      • Raphael

        I disagree.

        • Jad

          No one cares.

          • Raphael

            I disagree.

          • I care.

          • Mageoftheyear

            Aren’t you a little short for a Stormtrooper? Oh wait… *looks in mirror*

  • Jack H

    Is the density of rays required related to the maximum audible frequency?

    • Max Hayes

      It wouldn’t be. The density of rays serves to simulate all the different reflections a sound goes through before reaching our ears. How far each ray has to travel, the size of the openings it has to travel through, and the physical characteristics of the material it bounces off of will change the frequency content of each “ray” of sound. All these rays would then be summed at the listener for a realistic environment effect (see the sketch after the comments).

  • Robin S

    Yes! Let’s play fake games to get “real” psychological immersion! Let’s make killing as real as we can, you know, for the rest of us who can’t kill IRL. And you could even get laid!

    • Francesco Caroli

      Please, if you don’t know what you’re saying, don’t comment.

      • Robin S

        Just because you don’t like what I’m saying…

        • Francesco

          No, it’s simply that there are people, like me, who NEED these things (not to kill people, but a full dive game).

    • War, huh, what’s it good for? Bloody great games, for starters. Lighten up.

  • I’m actually excited about this from a musical perspective. Listening to (and seeing?) live concerts could be a really great experience if the acoustics of the venue and crowd were captured and then recreated by the physics model. One could also do a lot of interesting things to music recorded in the studio as well if a true feeling of depth were taken into account.

  • Francesco Caroli

    I’m still waiting for the FullDive Tech…

  • Nashoba Darkwolf

    Another reaction to Nvidia that will get implemented in 5 games if we are lucky. Where is the coffee so I can fight this urge to yawn. I remember back in the day when ATI actually made some amazing advancements in GPU tech. Anyone remember the revolution of Unified Shader Pipelines? AMD spearheaded that with the GPU in the Xbox 360 and changed the landscape for GPUs. Now… well, it’s not hard to see them playing copycat. Sure, we have catchphrases like Mantle, but they’ve woefully under-delivered on every promise they made.
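A toy sketch of the per-ray summation Max Hayes describes above, written in Python purely for illustration – the delays, band names, and gain values are invented assumptions, not output from any real acoustics model. Each ray arrives with its own delay and per-frequency-band gain (distance, openings, and materials having already taken their toll), and the listener’s response is simply their sum:

```python
# Hypothetical arrivals at the listener. Each tuple is
# (delay in seconds, remaining gain per frequency band); the values
# are hand-picked stand-ins for what a distance/material model
# would actually compute.
rays = [
    (0.010, {"low": 0.90, "mid": 0.80, "high": 0.60}),  # near-direct path
    (0.034, {"low": 0.70, "mid": 0.40, "high": 0.10}),  # soft, absorbent bounce
    (0.051, {"low": 0.60, "mid": 0.50, "high": 0.40}),  # hard, reflective bounce
]

# Sum each band across all arriving rays to get the listener's
# aggregate response. Highs are absorbed fastest, which is why
# distant or occluded sounds read as muffled.
bands = ("low", "mid", "high")
response = {band: sum(gains[band] for _, gains in rays) for band in bands}
print(response)  # e.g. {'low': 2.2, 'mid': 1.7, 'high': 1.1}
```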