
Google launches Resonance Audio spatial audio SDK. How does it compare to Facebook’s solution?

News posted on 7th Nov 2017 4:37pm by Sarah Redohl

Image courtesy of Google

Yesterday, Google announced its new spatial audio SDK, Resonance Audio, for adding immersive audio to VR, AR and 360 video projects.

The new SDK works across iOS, Android, Windows, Mac and Linux and is already integrated with Unity and Unreal.

According to Google, Resonance Audio utilizes higher-order ambisonics to spatialize “hundreds of simultaneous 3D sound sources.”

Developers will be able to specify the source of each sound within a scene and place it at the appropriate distance, and the SDK will adjust each sound as viewers move through the space.

Meaning, if a lion were roaring in front of you in a VR experience and you walked closer, the roar would get louder; if you then walked behind the lion, the sound would automatically soften and shift to match your new distance and position.
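The core idea behind that behavior is distance-based attenuation. As a rough conceptual sketch (not the SDK's actual DSP, and with made-up parameter names), a simple inverse-distance rolloff looks like this:

```python
def distance_gain(distance_m, ref_distance_m=1.0, min_distance_m=0.1):
    """Inverse-distance rolloff: gain halves each time the distance
    doubles beyond the reference distance. Real spatial audio engines,
    Resonance Audio included, let you pick among several rolloff curves;
    this is just the simplest common one."""
    d = max(distance_m, min_distance_m)  # clamp to avoid blowing up near zero
    return min(1.0, ref_distance_m / d)  # never amplify above unity

# A listener walking toward a roaring lion, then stepping away again:
for d in (8.0, 4.0, 2.0, 1.0, 4.0):
    print(f"{d:4.1f} m -> gain {distance_gain(d):.3f}")
```

The real SDK also factors in direction, occlusion, and room acoustics on top of distance, which is where the CPU savings Google describes come in.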

And, supposedly, the new SDK can do all of that without putting a drain on viewers’ CPUs.

“There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments,” Google wrote in its announcement of the new SDK, adding that Resonance Audio uses highly optimized digital signal processing algorithms to deliver spatial audio “without compromising audio quality, even on mobile.”

“We’re also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback,” Google writes.

What sets it apart from Facebook and Oculus’ spatial audio encoding technology is that Resonance Audio offers third-order ambisonics, whereas the Facebook/Oculus tools offer second-order ambisonics.

We hope to share a post from an actual expert explaining that difference very soon, but the gist of it is that higher-order ambisonics offer increasingly better audio resolution. Each step up the ladder increases the usable listening area for the audio.
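One concrete way to see what “stepping up the ladder” costs and buys: full-sphere ambisonics of order N uses (N + 1)² channels, so each order adds more directional detail at the price of more channels to encode and decode.

```python
def ambisonic_channels(order):
    """Full-sphere ambisonics of order N carries (N + 1)**2 channels."""
    return (order + 1) ** 2

for order, label in [(1, "first"), (2, "second"), (3, "third")]:
    print(f"{label}-order: {ambisonic_channels(order)} channels")
```

So the jump from second-order (the Facebook/Oculus offering, 9 channels) to third-order (Resonance Audio, 16 channels) nearly doubles the directional information available.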

Explaining that in greater detail is beyond my knowledge and abilities, so all I’ll say is higher is better and we’ll save the real work for the experts!

To help illustrate the difference, though, Google has launched a new app for Rift and Vive called Audio Factory.

The SDKs are available now on GitHub, and developers can choose a platform, check out tutorials and learning resources, and ask questions within Resonance’s forums here.


About Sarah Redohl

Sarah Redohl is an award-winning new media journalist focusing on mobile and 360 experiences. Her work has been featured on the Travel Channel and National Public Radio, among others. She has also been recognized as one of Folio: Magazine’s 15 Under 30 young professionals driving media’s next-gen innovation.
