Yesterday, Google announced its new spatial audio SDK, Resonance Audio, for adding immersive audio to VR, AR and 360 video projects.
The new SDK works across iOS, Android, Windows, Mac and Linux and is already integrated with Unity and Unreal.
According to Google, Resonance Audio utilizes higher-order ambisonics to spatialize “hundreds of simultaneous 3D sound sources.”
Developers will be able to specify the source of each sound within the scene and place those sounds at the appropriate distance, and the SDK will adjust the sound as viewers move within the space.
In other words, if you stood in front of a roaring lion in a VR experience and walked closer, the roar would grow louder; if you walked behind the lion, the sound would automatically soften to match your new distance and position.
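The distance behavior described above is a standard technique in 3D audio engines: source gain falls off as the listener moves away. As a rough illustration only (the function and parameter names here are hypothetical, not the Resonance Audio API), an inverse-distance attenuation curve looks like this:

```python
def distance_gain(distance, ref_distance=1.0, rolloff=1.0):
    """Inverse-distance attenuation, a common model in 3D audio engines.

    Gain is 1.0 at the reference distance and falls off as the listener
    moves farther away. Illustrative sketch only; not the actual
    Resonance Audio implementation.
    """
    distance = max(distance, ref_distance)  # clamp inside the reference radius
    return ref_distance / (ref_distance + rolloff * (distance - ref_distance))

# A roar heard from 1 m away plays at full gain...
print(distance_gain(1.0))   # 1.0
# ...while at 5 m it is considerably quieter.
print(distance_gain(5.0))   # 0.2
```

Real engines layer directivity, occlusion, and reverb on top of a curve like this, but the core idea is the same: gain is a smooth function of listener-to-source distance, recomputed as the viewer moves.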
And, supposedly, the new SDK can do all of that without putting a drain on viewers’ CPUs.
“There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments,” Google wrote in its announcement of the new SDK, adding that Resonance Audio uses highly optimized digital signal processing algorithms to deliver spatial audio “without compromising audio quality, even on mobile.”
“We’re also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback,” Google writes.
What sets it apart from Facebook and Oculus’ spatial audio encoding technology is that Resonance Audio offers third-order ambisonics, whereas the former offers second-order ambisonics.
We hope to share a post from an actual expert truly explaining that difference very soon, but the gist of it is that higher-order ambisonics offer increasingly better audio resolution. Each step up the ladder increases the usable listening area for the audio.
Explaining that in greater detail is beyond my knowledge and abilities, so all I’ll say is higher is better and we’ll save the real work for the experts!
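One concrete thing that does change with each step up the ladder is channel count: a full-sphere ambisonic soundfield of order n uses (n + 1)² spherical-harmonic channels. A quick sketch:

```python
def ambisonic_channels(order):
    """Channels needed for a full-sphere ambisonic soundfield of a given order.

    An order-n soundfield uses (n + 1)**2 spherical-harmonic channels.
    """
    return (order + 1) ** 2

for order in (1, 2, 3):
    print(f"order {order}: {ambisonic_channels(order)} channels")
# order 1: 4 channels, order 2: 9, order 3 (Resonance Audio): 16
```

So third-order ambisonics carries 16 channels against second-order's 9, which is where the extra spatial resolution comes from.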
To help illustrate the difference, though, Google launched a new app for Rift and Vive called Audio Factory.
The SDKs are available now on GitHub, and developers can choose a platform, check out tutorials and learning resources, and ask questions in Resonance’s forums.