Google seems serious about improving the audio experience everywhere, irrespective of which device a user prefers. An immersive audio experience can do wonders for user perception. With this motive, Google announced a new SDK called Resonance Audio. This SDK will make flat scenes feel more vivid and alive by delivering high-fidelity 3D sound in VR, AR and 360-degree videos.
Resonance Audio simulates how our ears interact with sound waves in the real world, providing a rich, engaging audio experience that makes users feel as if they are actually present at that location.
It has been a challenge to enrich the existing audio experience for VR, AR and gaming without compromising performance. On mobile devices, only limited CPU resources are allocated for audio, which means fewer simultaneous high-fidelity 3D sound sources. This recently released SDK, however, uses digital signal processing algorithms based on higher-order Ambisonics to smartly spatialize multiple simultaneous 3D sound sources without affecting audio quality. It is integrated with the game-engine giants Unity and Unreal.
This SDK empowers developers to deliver an amazingly engrossing audio experience: cross-platform SDKs for various digital audio workstations (DAWs) and popular game and audio engines have been released with Resonance Audio for a seamless workflow. The SDKs are compatible with Android, iOS, Linux, Windows and macOS. Interestingly, sound designers will be able to efficiently monitor spatial audio using the DAW plugin for apps built with the Resonance Audio SDKs, saving time as well.
Difference in Sound at Different Locations
Google’s Resonance Audio allows developers to control the direction in which acoustic waves are emitted from sound sources. If you stand behind a barking dog, you will hear a fainter sound than when you stand in front of the same dog, where the sound is a bit louder, simply because of your position relative to the sound source.
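The front-versus-behind difference described above can be modelled with a source directivity pattern. As a minimal illustrative sketch (this is not the Resonance Audio API; the cardioid formula and the function name are our own assumptions), a cardioid pattern attenuates sound behind the source:

```python
import math

def cardioid_gain(angle_rad: float) -> float:
    """Gain of a cardioid directivity pattern.

    angle_rad is the angle between the source's facing direction and
    the direction to the listener: 0 means the listener is in front,
    pi means the listener is directly behind the source.
    """
    return 0.5 * (1.0 + math.cos(angle_rad))

# Listener in front of the barking dog: full loudness.
print(cardioid_gain(0.0))      # 1.0
# Listener behind the dog: the sound is strongly attenuated.
print(cardioid_gain(math.pi))  # 0.0 (up to floating-point error)
```

Real engines blend patterns like this with distance attenuation and room reflections, but the core idea is the same: gain depends on the listener's angle relative to where the source is pointing.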
Another exciting feature of the SDK is that, as a sound source gets nearer, it automatically renders near-field effects, giving the user an almost real-time sense of the distance to the sound source, no matter how close it gets. It also differs from Facebook and Oculus’ spatial audio encoding technology, which provides second-order ambisonics, in that it offers third-order ambisonics.
Developers who find this SDK inspiring and plan to build a rich sound experience into their next gaming app should check out the Resonance Audio documentation on the Google developer site, and can share what they build with Google using #ResonanceAudio.
Singsys is an established entity in the ever-expanding arena of mobile, web and e-commerce solution development. Headquartered in Singapore with an offshore arm in India, Singsys is a hub of dedicated developers who are passionate about leveraging industry trends and implementing them to the best effect in every project.
Nov 27th, 2017