OSSIC debuted their latest OSSIC X headphone prototype at CES this year with one of the best immersive audio demos that I've heard yet. OSSIC CEO Jason Riggs told me that their headphones do a dynamic calibration of your ears in order to render near-field audio that is customized to your anatomy, and they had a new interactive audio sandbox environment where you could do a live mix of audio objects in a 360-degree environment at different heights and depths. OSSIC was also a participant in Abbey Road Studios' Red incubator looking at the future of music production, and Riggs makes the bold prediction that the future of music is going to be both immersive and interactive.

LISTEN TO THE VOICES OF VR PODCAST

We do a deep dive into immersive audio on today's podcast, where Riggs explains their audio rendering pipeline in detail and how their dynamic calibration of ear anatomy enables their integrated hardware to replicate near-field audio objects better than any software-only solution. When audio objects are within 1 meter, they use a dynamic, distance-dependent head-related transfer function (HRTF) to calculate the interaural time differences (ITD) and interaural level differences (ILD) that are unique to your ear anatomy. Their dynamic calibration also helps to localize sounds above 1-2 kHz, where interaural level and spectral cues determine whether a sound is in front of, above, or behind you.
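
To make the ITD side of this concrete, here is a minimal sketch using the classic Woodworth spherical-head approximation. This is a textbook model, not OSSIC's actual pipeline; the head radius below is exactly the kind of parameter that a per-user calibration would personalize rather than assume.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
HEAD_RADIUS = 0.0875    # m, an average adult head; per-user calibration would refine this

def itd_woodworth(azimuth_rad: float, head_radius: float = HEAD_RADIUS) -> float:
    """Interaural time difference for a far-field source, using Woodworth's
    spherical-head approximation: ITD = (a / c) * (theta + sin(theta)).

    azimuth_rad is the source angle off the median plane (-pi/2 .. pi/2).
    """
    return (head_radius / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

# A source 45 degrees off-center reaches the far ear about 0.38 ms later --
# one of the main cues the brain uses to localize sound below ~1.5 kHz.
print(f"{itd_woodworth(math.radians(45.0)) * 1e3:.2f} ms")
```

Above roughly 1.5 kHz the ITD cue gives way to level differences from head shadowing, and that shadowing depends on individual anatomy, which is what a per-listener calibration captures.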

SEE ALSO
Up Close With Sennheiser's $1,700 VR Microphone

Riggs says that they've been collaborating with Abbey Road Studios in order to figure out the future of music, which he believes is going to be both immersive and interactive. There are two ends of the audio production spectrum, ranging from pure live capture to pure audio production, which happens to mirror the difference between passive 360-video capture and interactive, real-time CGI games. Right now the music industry sits solidly in static, channel-based audio, but the future tools of audio production are going to look more like a real-time game engine than the existing fixed-perspective, flat-world audio mixing boards, says Riggs.

OSSIC has started with the production pipeline for the passive, pure live capture end of the spectrum. They've been using higher-order ambisonic microphones like the 32-element em32 Eigenmike microphone array from mh acoustics, which captures a lot more spatial resolution than a standard 4-channel, first-order ambisonic microphone. Both approaches capture a spherical shell of sound at a location, with all of its direct and reflected sound, that can transport you to another place.
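
For reference, here is a minimal sketch of how a mono source gets encoded into first-order ambisonics (traditional B-format). Higher order just means more spherical-harmonic channels: (order + 1)² of them, so first order has 4 channels while a fourth-order capture from a 32-capsule array like the em32 has 25, which is where the extra angular resolution comes from. The function below is illustrative, not any product's API.

```python
import numpy as np

def encode_first_order(signal: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Pan a mono signal into traditional first-order B-format (W, X, Y, Z).

    Angles are in radians; azimuth 0 is straight ahead and positive
    azimuth is to the left (FuMa convention).
    """
    w = signal / np.sqrt(2.0)                          # omnidirectional component, -3 dB
    x = signal * np.cos(azimuth) * np.cos(elevation)   # front/back figure-8
    y = signal * np.sin(azimuth) * np.cos(elevation)   # left/right figure-8
    z = signal * np.sin(elevation)                     # up/down figure-8
    return np.stack([w, x, y, z])

# Example: one second of noise at 48 kHz arriving from 30 degrees left, ear level.
bformat = encode_first_order(np.random.randn(48000), np.radians(30.0), 0.0)
```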

But Riggs says that there's a limited amount of depth information that can be captured and transmitted with this type of passive, non-volumetric ambisonic recording. The other end of the spectrum is pure audio production, which can achieve volumetric audio that is real-time and interactive by placing audio objects in a simulated 3D space. OSSIC produced an interactive audio demo in Unity that can render audio objects in the near field, at distances of less than 1 meter.
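
To see why the near field is special, consider a toy object-based renderer that computes a separate inverse-distance gain per ear. In the far field the two ear distances are nearly equal, but inside a meter they diverge quickly, which is why a distance-dependent HRTF matters there. This is a hypothetical illustration, not OSSIC's renderer.

```python
import math
from dataclasses import dataclass

HEAD_RADIUS = 0.0875  # meters; ears approximated at +/-x on the head

@dataclass
class AudioObject:
    x: float  # meters, +x to the listener's right
    y: float  # meters, +y straight ahead
    z: float  # meters, +z up

def per_ear_gains(obj: AudioObject) -> tuple[float, float]:
    """Inverse-distance gain at each ear, clamped so a source touching the head
    doesn't produce infinite gain."""
    d_left = math.dist((obj.x, obj.y, obj.z), (-HEAD_RADIUS, 0.0, 0.0))
    d_right = math.dist((obj.x, obj.y, obj.z), (HEAD_RADIUS, 0.0, 0.0))
    return 1.0 / max(d_left, 0.05), 1.0 / max(d_right, 0.05)

# A whisper 25 cm off the right ear: the two ears see a large level difference...
print(per_ear_gains(AudioObject(0.25, 0.05, 0.0)))
# ...while at 2 m to the right the difference nearly vanishes.
print(per_ear_gains(AudioObject(2.0, 0.05, 0.0)))
```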

The future of interactive music faces a challenge similar to the tension between 360 videos and interactive game environments: it's difficult to balance the user's agency with the process of creating authored compositions. One way to incorporate interactivity into a music experience is to let the user live mix an existing authored composition with audio objects in a 3D space; another is an audio-reactive game like AudioShield, which generates dynamic gameplay from the unique sound profile of each piece of music. Both of these engage the agency of the user, but neither actually provides any meaningful way for the user to impact how the music composition unfolds.

Finding that balance between authorship and interactivity is one of the biggest open questions about the future of music, and no one really knows what that will look like. The only thing that Riggs knows for sure is that real-time game engines like Unity or Unreal are going to be much better suited to facilitate this type of interaction than the existing production tools of channel-based music.

Multi-channel ambisonic formats are becoming more standardized on the 360-video platforms from Facebook and Google's YouTube, but the final output is still plain binaural stereo. Riggs says that he's been working behind the scenes to provide higher-fidelity outputs for integrated immersive hardware like the OSSIC X, since these platforms currently aren't using the best spatialization process to get the most out of the OSSIC headphones.
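
The core of what those platforms do is small: counter-rotate the ambisonic sound field against the listener's head movement, then collapse it to two channels. Below is a minimal first-order sketch of both steps, assuming FuMa B-format conventions; the "decode" here is the crude virtual-microphone fold-down, exactly the kind of generic stereo output Riggs wants to improve on with hardware-aware HRTF rendering.

```python
import numpy as np

def counter_rotate_yaw(w, x, y, z, head_yaw):
    """Rotate the B-format field opposite the head's yaw so that sources stay
    fixed in the world as the listener turns (the head-tracking step)."""
    xr = x * np.cos(head_yaw) + y * np.sin(head_yaw)
    yr = y * np.cos(head_yaw) - x * np.sin(head_yaw)
    return w, xr, yr, z

def crude_stereo_decode(w, x, y, z):
    """Virtual cardioid microphones aimed at +/-90 degrees (FuMa: +Y is left).
    No HRTF is involved, so there are no elevation or front/back cues --
    just a flat stereo fold-down."""
    left = 0.5 * (np.sqrt(2.0) * w + y)
    right = 0.5 * (np.sqrt(2.0) * w - y)
    return left, right
```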

As for formats on the pure production end of the spectrum, there is not yet an emerging standard for an open, object-based audio format. Riggs hopes that one will eventually come, along with plugins for OSSIC headphones and software that can dynamically change the reflective properties of a virtualized room, or dynamically modulate properties of the audio objects themselves.

As game engines eventually move to real-time, physics-based audio propagation models, where sound is constructed on the fly, Riggs says this will still need good spatialization from integrated hardware and software solutions; otherwise it'll just sound like good reverb without any localization cues.
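
A quick sketch of what a propagation model actually produces helps make Riggs's point. First-order image sources in a shoebox room yield one mirrored source per wall, each with an arrival delay and a gain: enough for a convincing reverb pattern, but it says nothing about which ear hears what, so those per-ear cues still have to come from an HRTF. This is a generic textbook model, not any particular engine's implementation.

```python
import math

def first_order_image_sources(src, room):
    """Mirror a source (x, y, z) across each wall of a shoebox room with one
    corner at the origin and dimensions (lx, ly, lz): 6 early reflections."""
    sx, sy, sz = src
    lx, ly, lz = room
    return [
        (-sx, sy, sz), (2 * lx - sx, sy, sz),   # left/right walls
        (sx, -sy, sz), (sx, 2 * ly - sy, sz),   # front/back walls
        (sx, sy, -sz), (sx, sy, 2 * lz - sz),   # floor/ceiling
    ]

def arrival(image, listener, c=343.0):
    """Delay (seconds) and inverse-distance gain of one reflection path."""
    d = math.dist(image, listener)
    return d / c, 1.0 / max(d, 0.1)

# Example: a 5 x 4 x 3 m room, source near one corner, listener mid-room.
for img in first_order_image_sources((1.0, 1.0, 1.5), (5.0, 4.0, 3.0)):
    delay, gain = arrival(img, (2.5, 2.0, 1.5))
    print(f"reflection from {img}: {delay * 1e3:.1f} ms, gain {gain:.2f}")
```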

SEE ALSO
Nvidia's VRWorks Audio Brings Physically Based 3D GPU Accelerated Sound

At this point, audio is still taking a backseat to the visuals, with a limited budget of 2-3% of CPU capacity, and Riggs hopes that a series of audio demos in 2017 will show the power of properly spatialized audio. OSSIC's interactive sound demo at CES was the most impressive example of audio spatialization that I've heard so far, and they're shaping up to be a real leader in immersive audio. Riggs said that they've gotten a lot of feedback from game studios saying they don't want to use a customized OSSIC audio production solution; they want to keep their existing production pipelines and have OSSIC be compatible with those. So VR developers should be getting more information on how to best integrate with the OSSIC hardware in 2017, as the OSSIC X headphones start shipping in the spring of this year.



  • Raphael

    This product has been heavily promoted since kickstarter… I have always wanted headphones with 3d positional sound rather than the fake enhanced spatial effect “surround” headphones I’ve tried in the past. To put it another way… No headphones I’ve heard thus far are able to “project” sound in front of the listener. If the OSSIC really works then I will certainly want it. I like the look of them as well. Always preferred large headphones that cover the ears.

    A hilarious, totally random thought just popped into my largely vacuous head…

    Wouldn’t it be funny if nausea sensitive gamers started complaining these headphones gave them nausea due to fast moving 3d sounds?

    • OgreTactics

      “complaining these headphones gave them nausea due to fast moving 3d sounds?” is this even a thing?

      • Raphael

        It’s not yet a thing but it wouldn’t surprise me… People always find new ways to feel dizzy.

    • Daniele Kirylo

      Nausea in VR is caused by your brain not being able to correlate the moving scene in front of your eyes with the absence of the pressure/balance changes that your inner ear would feel in real life.

    • Robert Dyet

      People with an inner-ear imbalance do suffer dreadfully from nausea; my mother suffered for 18 years with Ménière's.

      Ménière’s disease is a condition of the inner ear affecting balance and hearing. It is thought to be caused by unusually large amounts of a fluid called endolymph creating a build up of pressure in the inner ear.

      This pressure can cause the inner ear to send abnormal messages to the brain, resulting in dizziness, vomiting, and dulled hearing. The exact cause of Ménière's disease is not known, but there may be links to circulation problems, viral infections, allergies, the immune system, migraines, or genetic factors.

      I am a Kickstarter backer of this product, which I await anxiously.

  • Albert Hartman

    Got a demo of their headsets. It was ok, but I’m not sure how popular another custom set of headphones will be. On the other hand, I saw a great software-only demo by 3dsoundlabs at CES that worked nearly as well. I think software-only solutions utilizing existing customer-owned headphones will prove the most attractive.

    • Raphael

      Well… there aren’t any software solutions that can project sound in a 360 sphere. I don’t know if the OSSIC can do that or if it’s no different from all of the other “surround” headphones, but I do know that none of the software solutions are able to project sound in anything like a 360 sphere.

      • Tim

        Then I wonder if they are creating the HRTF based on each user’s ears? That would be some reason to buy hardware for this!

        • Raphael

          Apparently the headphones calibrate to each user, from what I understand.

        • danielsdesk

          This is basically what they are doing; there are hardware sensors in the headphones that take measurements of your ears, and then they have software that works with that information. So what they are doing can actually be customized per user.

      • Pui Ho Lam

        Dolby Atmos can render sound from any distance and angle for any set of surround speakers/transducers. The thing with OSSIC is that they have dedicated hardware in the headphone, with multiple drivers packed inside, to produce a more accurate surround sound experience.

        Also, they will have a special rendering engine that does not mix audio down to 7.1 and then use an HRTF to produce a virtual 7.1 system (going through two virtualization layers), but instead renders the virtual surround sound directly.

        • Raphael

          There is only one question… can it project sound in front of the listener using headphones? None of the existing systems available to consumers can do that thus far. Virtual 7.1 is pretty much another gimmick at this point.

          • Pui Ho Lam

            I have listened to virtual 7.1 and sounds do come from different directions. The problem with virtual 7.1 is that you are virtualizing sound to come from a fixed set of speaker directions, not directly from where the sound source would be. A 7.1 setup is already a virtual sound system for 360 sound.

          • Raphael

            The virtual 7.1 I have heard cannot project sound in front of you when using headphones. So what system have you heard that can?

            I tried the Zalman surround and Roccat Kave headphones many years ago.

            Also Sound Blaster CMSS-3D, along with numerous software solutions for positional audio, all of which failed to produce 3D sound and were not able to project sound anywhere but to the sides or seemingly behind.

          • Pui Ho Lam

            I tested it on my Razer Tiamat 7.1 in stereo mode, but it would work a lot better with open-back headphones with a good soundstage.
            https://www.youtube.com/watch?v=2BxO9cd-sYA
            A good video to see different software solutions for yourself.
            Also, it entirely depends on the shape of your ear. If the structure of your ears is too far off from the average, then the effect won’t be as good.

          • Raphael

            Thanks for the information, Pui. Yes, I guess my ears fall outside the average design. My ear shape is unusual. Live long and prosper…

  • mbze430

    I’m really skeptical about sound and sound hardware, especially on things you can’t demo. I’ll have to wait for the release and crowd reviews.

  • user

    I still listen to music in mono sometimes. Most of the time I move around in my apartment, so stereo is useless anyway.
    3D audio… if I imagine a future in which I wear headphones all the time, I can see how musicians could have ideas about where certain sounds should come from. But when an instrument is to the right of me and I turn around, it turns with me. Isn’t that confusing? Shouldn’t it stay where it is?
    And you could do that, of course: analyze the room, and no matter where I am, the instruments stay in their places. But what if I leave the room? Why would I prefer them to stay where they are? They’d need to move with me all of a sudden. So the software would have to map the whole apartment, make a setup for all rooms, and switch between them. Or let the user choose to only set it up for one room and go back to stereo whenever I leave that one room.
    It would be nice to have a visualization of instruments somewhere… if I could dedicate a shelf on one side of the room, a table, and a sideboard on other sides of the room as places where little versions of instruments would appear. That would be nice. It would be a nice way to learn about instruments if I could maybe even interact with them and look up more information. I definitely think that everybody should have access to all these instruments (in a virtual form) from a very young age.
    When I start a song, the instruments should appear around the room (at the places that I dedicate to visualizations in apps) where the music producer intends them to be. And then I could choose to move them if I want to experiment with it.
    That said, I have no idea what OSSIC does, but I might listen to the podcast later.

    • kevmolio

      Confusing? I think you mean mind-blowing.