Audio plays a vital role in the quest to provide fully immersive interaction; sound gives life to the environments around you, transporting your mind and adding a whole new layer of realism to the experience at hand.
Meta has realised this and is in the process of developing new AI models designed to make sound more realistic in mixed-reality and VR experiences. But what does this entail, and how exactly will it contribute to enhanced AR and VR experiences?
New Spatial Audio Tools
Meta’s new spatial audio tools are reportedly able to respond to different environments. Take a look at the video below to see acoustic synthesis for AR/VR experiences in action:
As you can see in the video above, Meta’s work centres on how sound behaves in physical spaces, and on matching the acoustics people expect to hear in different environments.
The three new artificial intelligence models are:
- Visual-Acoustic Matching
- Visually-Informed Deverberation
- VisualVoice
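For a rough sense of what the first of these models does: Visual-Acoustic Matching transforms an audio clip so it sounds as though it was recorded in the space shown in a target image. The classical signal-processing baseline for that task is convolving “dry” audio with a room impulse response (RIR). The sketch below illustrates only that baseline idea, not Meta’s actual model, which learns the transformation from visual and audio cues rather than a measured RIR; the file names here are hypothetical placeholders, and the audio is assumed to be mono at a shared sample rate.

```python
# Conceptual baseline for visual-acoustic matching: apply a room's
# echoes and reverberation to a "dry" clip by convolving it with
# that room's impulse response (RIR). Meta's model learns this kind
# of transformation from images and audio; this is only the classic
# signal-processing intuition behind it.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical input files: a dry speech clip and a measured RIR,
# both assumed mono and at the same sample rate.
sr_audio, dry = wavfile.read("dry_speech.wav")
sr_rir, rir = wavfile.read("target_room_rir.wav")
assert sr_audio == sr_rir, "resample so both signals share one sample rate"

dry = dry.astype(np.float64)
rir = rir.astype(np.float64)

# Convolution with the RIR imprints the room's acoustics onto the clip.
wet = fftconvolve(dry, rir)

# Normalise to avoid clipping, then write the "matched" result.
wet = wet / np.max(np.abs(wet))
wavfile.write("matched_speech.wav", sr_audio, (wet * 32767).astype(np.int16))
```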
As Meta puts it:
“Whether it’s mingling at a party in the metaverse or watching a home movie in your living room through augmented reality (AR) glasses, acoustics play a role in how these moments will be experienced […] We envision a future where people can put on AR glasses and relive a holographic memory that looks and sounds the exact way they experienced it from their vantage point, or feel immersed by not just the graphics but also the sounds as they play games in a virtual world.”
If you’re wondering about the particular use case for all of this, it could go a long way towards making the metaverse a more immersive place. Sound is such a vital part of our everyday lives that we often underestimate how much it shapes our experiences – if the Meta Research team aces the development of these new spatial audio tools, it could very well re-engineer the way we experience the digital world.