After Part 1, it should now be clear that virtual event formats are a great example of how to defy the coronavirus by using virtual reality. And from the viewpoint of my sound world, there are many other exciting applications in this field that show how virtual audio can offer even more possibilities than reality.
Listening zones have long been a topic in the automotive audio sector. I have already explored how, in the future, passengers could enjoy different listening experiences at the same time and in the same space, without creating a cacophony of sounds for everyone. So who would have thought that this principle would first find resonance in virtual reality?
At Laval Virtual 2020, the virtual event tool “virBELA” had already implemented a listening-zone variant. You could enter different areas where individual acoustic physics applied. Only those inside an area could hear what was being said there. Even if you were standing not far from the door – where in reality you would hear everything – you couldn’t understand a word; you could only see the VR avatars in that area. An interesting concept that is not so easy to implement in reality.
Having a private conversation at a public conference can be difficult enough; you need a quiet retreat on site. In SocialVR spaces, however, you can simply teleport into one of the listening zones and continue talking undisturbed – how convenient.
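The zone-gating behaviour described above can be sketched in a few lines. This is a hypothetical model, not virBELA’s actual implementation: audibility is decided purely by zone membership, ignoring distance entirely.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    name: str
    zone: str  # id of the listening zone the avatar currently occupies

def audible_to(speaker: Avatar, listeners: list) -> list:
    """Return the names of listeners who can hear the speaker.

    Illustrative rule modeled on the behaviour described above:
    audio is gated purely by zone membership, not by distance.
    """
    return [l.name for l in listeners if l.zone == speaker.zone]

# Usage: two people in the "lobby"; Carol stands just behind the door in "room_a".
alice = Avatar("Alice", "lobby")
bob = Avatar("Bob", "lobby")
carol = Avatar("Carol", "room_a")

print(audible_to(alice, [bob, carol]))  # → ['Bob']
```

Even standing “right next to the door”, Carol hears nothing – exactly the effect that would be hard to build in a real venue.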
I don’t know if it’s just me, but when I am sitting in a presentation, I often feel the need to discuss the new information directly with the person sitting next to me. In reality, however, one has to refrain from doing so in order not to disturb the speaker and everyone else present. Virtual audio and the concept of listening zones could also be a good solution here.
At VR events it would, therefore, be possible to “play” commentator in real-time. I have nothing against frontal lectures, and I don’t want to claim that my own lectures are exceptionally entertaining. But this way the whole thing would feel more interactive to me. After all, I love playing along with TV quiz shows, or sharing more or less qualified comments with my friends on Twitter during football matches.
Currently, however, it is still relatively difficult to find out which avatars can hear you in virtual space. If you whisper in the audience during a real lecture, chances are good that no one else can hear you. In VR, however, the whole room may hear it – oops! So it’s still better to just sit still and save your questions for the end. But with the right VR event software more would be possible.
I like to give workshops and lectures, but one thing is especially difficult for me as a sound designer: showcasing audio samples. Unfortunately, the topic of “immersive audio” in particular is only easy to understand once you have heard it. But the conditions at normal VR conferences – simple loudspeakers and a noisy atmosphere – are usually not optimal. Virtual audio would be a good solution!
At AES (Audio Engineering Society) or other audio events, one often has the luxury of having first-class reproduction systems in every lecture room. There are speaker arrays with 7.1.4 Dolby Atmos or dozens of headphones for the entire audience.
But with virtual event platforms, it is now very easy for me to make 3D audio tangible, because everyone has headphones at home. You would only need a way to send a stereo signal via the livestream, and listeners could enjoy binaural demos without having to be physically present.
In the next step, it would even be possible to work with a combination of VR headsets and head-tracking. In other words, one would not just play a generic stereo example; everyone could experience the feeling of “the sound revolves around me”. This is achieved, for example, by using multi-channel surround formats or object-based audio. In short: I have the feeling that virtual audio, in the form of 3D audio demos, could boom all the more through VR event platforms!
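The core of the head-tracked effect is simple: the renderer only needs the source angle relative to the listener’s current head orientation, so the sound stays put in the world while the head turns. The sketch below uses a deliberately crude constant-power pan as a stand-in for real binaural (HRTF) rendering; all names and angle conventions are my own assumptions.

```python
import math

def relative_azimuth(source_xy, listener_xy, listener_yaw_deg):
    """Source angle relative to the listener's facing direction.

    0 deg = straight ahead, positive = to the listener's left.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    world_angle = math.degrees(math.atan2(dy, dx))
    return (world_angle - listener_yaw_deg + 180) % 360 - 180

def stereo_gains(azimuth_deg):
    """Constant-power pan: a crude placeholder for true HRTF rendering."""
    pan = max(-1.0, min(1.0, -azimuth_deg / 90.0))  # -1 = full left, +1 = full right
    theta = (pan + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

# Listener faces +y (yaw 90 deg); the source at (0, 1) is dead ahead.
print(relative_azimuth((0, 1), (0, 0), 90))  # → 0.0
# The listener turns the head to face +x: the same source is now 90 deg to the left.
print(relative_azimuth((0, 1), (0, 0), 0))   # → 90.0
```

Feeding the updated azimuth into the panner each frame is what turns a static stereo demo into “the sound revolves around me”.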
Since LinkedIn is more of a professional social media platform, I’d rather show the silly things you can do on my homepage. I’m sure the list can be continued. So if anyone else has had similar experiences here, please share them with me!
Partly it’s the young technology’s fault, partly it’s your own. For example, when I once sat down on a virtual chair in a perfectly normal way, my arms shot up for no reason from the perspective of the people sitting next to me. Everything looked normal to me, but to everyone else it looked as if I was trying to dance “Thriller” while sitting down. In this case, it was probably because I was using the VR application with the Oculus Quest while other people were present via a desktop application.
Tracking is also still in its infancy, even though the technology is developing rapidly. Mostly, only the hand movements are tracked, and the computer interpolates what the arms are probably doing. So even if you move normally, your arms may flutter all over the place.
Probably it’s the sound nerd in me, but conversations in VR take place differently than in reality. In VR I rarely look at my conversation partner, but I listen all the more closely. I often turn my head away to test how well I can locate the sound source, and I tend to teleport back and forth to judge the sound from everywhere. How far can I move away and still understand what is being said?
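My “how far away can I still understand it” experiments roughly probe the distance attenuation that game audio engines apply. The curve below mirrors the clamped inverse-distance model commonly used in game audio (OpenAL-style); the parameter values are illustrative, not taken from any specific SocialVR platform.

```python
def inverse_distance_gain(distance, ref_distance=1.0, rolloff=1.0):
    """Clamped inverse-distance attenuation, as in common game audio engines.

    Gain is 1.0 at ref_distance and falls off with 1/distance beyond it.
    """
    distance = max(distance, ref_distance)  # clamp: no boost when very close
    return ref_distance / (ref_distance + rolloff * (distance - ref_distance))

# Every doubling of distance halves the gain (roughly -6 dB per doubling).
for d in (1, 2, 4, 8):
    print(d, inverse_distance_gain(d))  # → 1.0, 0.5, 0.25, 0.125
```

Teleporting around a virtual room is, in effect, sampling this curve by ear.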
The interesting thing is that it’s not just me. So you have to get used to the fact that you talk to people, and they listen to you, but do strange things in the meantime.
Spatial audio helps people turn away less, as I found out in my case study. In VR, however, both eye contact and lip movements are important, and in most cases eye-tracking or sound-driven mouth animation is still missing. That is why avatars in VR often wear sunglasses or have a headset in front of their mouth. This way, one does not slide into the “uncanny valley” – the acceptance gap where we find virtual conversation partners disturbing rather than pleasant.
The Vive Pro Eye already seems to work as a plug-and-play device, but the technology still has to establish itself with consumers.
Currently, however, there are already providers such as High Fidelity, Sansar, NEOS, and Sinespace that have implemented automatic animations. The avatar’s face – mouth and eyes – is animated in real time while talking: the mouth moves according to the waveform, and the eyes are automatically directed toward the other speaker.
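The waveform-driven mouth movement can be sketched as a simple envelope follower: take the RMS level of each short audio frame and map it to a mouth-open parameter. The thresholds below are made-up values, not taken from any of the platforms mentioned.

```python
import math

def mouth_openness(frame, floor=0.01, ceil=0.3):
    """Map a short audio frame (samples in [-1, 1]) to a mouth-open value in [0, 1].

    Sketch of waveform-driven lip animation: compute the RMS level of the
    frame and scale it between a noise floor and a ceiling (both invented here).
    """
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return min(1.0, max(0.0, (rms - floor) / (ceil - floor)))

silence = [0.0] * 256
speech = [0.25 * math.sin(2 * math.pi * 220 * n / 16000) for n in range(256)]

print(mouth_openness(silence))  # → 0.0 (mouth stays closed)
print(mouth_openness(speech))   # partially open
```

A real implementation would smooth this value over time and drive visemes rather than a single open/close parameter, but the principle – audio level in, jaw angle out – is the same.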
With some SocialVR providers this works so well that it can fool you, as a dear expert told me:
“It happened that I told people half my life story and was thrilled at what good listeners they were. The eye contact, the nodding, no interrupting – it was all there. Only to hear later: ‘Hey, I’m back. Sorry, I was away for a moment. Did you say something?’”
So you see – or hear: virtual events are incredibly complex from a sound point of view and offer many opportunities to make use of this booming terrain. I’m happy to help you unfold the full potential of virtual audio.

Missed Part 1? Check it out