
How Game Audio can influence Film Sound Design


    Written by: Luca Spadavecchia

    The PlayStation press conference held in June 2020 previewed gameplay and cinematic scenes from several announced game titles. And the impact was massive.

    Why am I bringing this up? For the answer, we need to take a step back. Earlier that year, Mark Cerny presented the technological progress of the PlayStation 5. On that occasion, the PS5 lead system architect revealed that the next console would have a custom hardware unit dedicated to audio: TEMPEST 3D AudioTech (at 38:30 in the video).

    As an audio enthusiast and sound designer, I had to process this information. That is why I decided to make this subject the main topic of my Master of Arts dissertation. Let me explain how this is relevant to you.

    What can a Sound Designer add to the New Consoles’ Game Audio?

    The creators of the new generation consoles, PS5 and Xbox Series X, announced that the new built-in audio hardware will allow them to develop more detailed sonic worlds. The consoles' engineers managed to solve most of the problems related to sound memory storage, which frees space for sound designers' creativity. This information made me think about comparing crucial audio workflows in film and game productions.

    The audio description of a space plays a pivotal role in the viewer’s immersion. The more accurately the sounds are placed in the space, the more easily the audience can pinpoint the position of their sources. But how can a sound designer achieve this?

    Object-based Mixing

    A sound designer aims to create soundscapes that allow the viewer to extend the screen boundaries. There are many techniques to achieve this goal, but the one I want to highlight is object-based mixing. It consists of encoding every sound source that describes the landscape as a separate object. Each source carries parameters registered as metadata, which indicate its coordinates so the object can be located accurately in the space.
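    To make this concrete, here is a minimal sketch of the idea in plain Python, with made-up class and field names (an illustration of the concept, not the code of any particular renderer): each source is stored as an object carrying positional metadata, and the renderer derives channel gains from those coordinates at playback time.

```python
from dataclasses import dataclass
import math

@dataclass
class AudioObject:
    """One sound source in an object-based mix: audio plus positional metadata."""
    name: str
    samples: list   # mono audio for this source
    x: float        # position metadata (metres, listener at the origin)
    y: float
    z: float

def stereo_gains(obj: AudioObject) -> tuple:
    """Derive left/right gains from the object's metadata at render time.

    Constant-power panning from the azimuth plus inverse-distance attenuation.
    A real renderer would also use elevation (z) and an HRTF or speaker layout.
    """
    azimuth = math.atan2(obj.x, obj.y)            # 0 = straight ahead, positive = right
    pan = (azimuth / math.pi + 1.0) / 2.0         # map -pi..pi to 0..1
    distance = max(1.0, math.sqrt(obj.x ** 2 + obj.y ** 2 + obj.z ** 2))
    attenuation = 1.0 / distance
    left = math.cos(pan * math.pi / 2.0) * attenuation
    right = math.sin(pan * math.pi / 2.0) * attenuation
    return left, right

# A spaceship passing overhead and slightly to the listener's right:
ship = AudioObject("spaceship", samples=[], x=2.0, y=5.0, z=8.0)
print(stereo_gains(ship))
```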

    The audio delivered in the aforementioned PS5 press conference was in binaural format, which also allows you to perceive the height of a soundscape. The object-based mixing technique therefore suits this format perfectly, since it allows the sound sources to be positioned accurately within the landscape. For the first time, PlayStation users could experience a taste of the TEMPEST 3D AudioTech.

    This made me wonder: if gamers are experiencing gameplay in binaural format, won’t they notice the difference when they listen to the audio formats delivered by most streaming services on the market? I do, and that’s why I started comparing game scenarios with film sequences to understand why some of the software and workflows used in game studios are not yet integrated into the film industry. Can binaural audio become a standard in film production?


    Film and game audio listening scenarios

    Object-based mixing is the technique used in video games to create real-time changes within the sonic world. The game industry relies on game engines (e.g. Unity and Unreal) that communicate with middleware (Wwise, FMOD) in order to assign a sound to every moving or stationary source in the landscape. While this is standard in game production workflows, it is not in film: even though some directors are starting to work with game engines, the middleware is still not used.
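    I won’t reproduce the actual Wwise or FMOD calls here, but the idea they implement is roughly the following: the engine registers an emitter for every sound-making object and mirrors its position into the audio layer each frame, while the middleware owns the mix. A hypothetical sketch, with all names invented:

```python
class AudioMiddleware:
    """Stand-in for the middleware layer (the role Wwise or FMOD plays)."""

    def __init__(self):
        self.emitters = {}                          # emitter id -> (x, y, z)

    def post_event(self, event: str, emitter_id: int):
        # The engine only says *what* should sound; the middleware decides *how*.
        print(f"play '{event}' on emitter {emitter_id}")

    def set_position(self, emitter_id: int, pos: tuple):
        self.emitters[emitter_id] = pos             # positional metadata, refreshed per frame


class GameObject:
    def __init__(self, object_id: int, pos: tuple):
        self.object_id = object_id
        self.pos = pos


def audio_tick(middleware: AudioMiddleware, objects: list):
    # Every frame the engine mirrors each object's transform into the audio layer,
    # so moving and stationary sources alike stay correctly placed in the soundscape.
    for obj in objects:
        middleware.set_position(obj.object_id, obj.pos)


mw = AudioMiddleware()
ship = GameObject(object_id=1, pos=(0.0, 5.0, 8.0))
mw.post_event("ship_engine_loop", ship.object_id)
audio_tick(mw, [ship])
```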

    I want to show you some similar scenarios in which, in my opinion, films were influenced by game mixing techniques. So, grab your headphones, and let’s listen to some examples.

    Ratchet and Clank: Rift Apart

    During the “Sony PlayStation 5 Reveal Event”, Ratchet and Clank: Rift Apart’s Marcus Smith affirms that the binaural audio supported by the new generation console marks a fundamental difference between listening through a television set and playing. He says that listening to this new gameplay is like going out into the middle of a forest: it brings you into these worlds in a way you have never been able to experience before.

    In the Ratchet and Clank: Rift Apart gameplay, the use of the object-based mix is evident. The voices in the mix are perceived according to the rotation angle of the main character’s head. This doesn’t just enhance the gamer’s immersion: thanks to this audio format we can also perceive the height of the spaceship passing above Ratchet. The sonic world is described as a three-dimensional space. How cool is that? The sound designer can let us hear exactly what the main character is hearing.
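    A rough sketch of that behaviour, assuming a simple 2D world and invented function names: the renderer works in listener-relative coordinates, so the same voice at the same world position is heard from a different direction depending on the rotation of the listener’s head.

```python
import math

def listener_relative_azimuth(source_xy, listener_xy, head_yaw_rad):
    """Angle of a source relative to where the listener's head points.

    0 = straight ahead, positive = to the listener's right.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    world_angle = math.atan2(dx, dy)
    rel = world_angle - head_yaw_rad
    return math.atan2(math.sin(rel), math.cos(rel))   # wrap to -pi..pi

# Same voice, same world position, two head orientations:
voice, ratchet = (3.0, 3.0), (0.0, 0.0)
print(math.degrees(listener_relative_azimuth(voice, ratchet, 0.0)))                 # ~45: ahead-right
print(math.degrees(listener_relative_azimuth(voice, ratchet, math.radians(90.0))))  # ~-45: now to the left
```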

    Gravity Film Sound Design compared to Star Wars Battlefront II Gameplay

    Cuarón’s Gravity presents similar mixing techniques. At minute 3:50, the viewer sees the astronaut moving from the right of the frame to the left, following his path towards the back. Audio-wise, the scene is mixed as if the audience were listening from where the camera is positioned: the astronaut’s dialogue is tracked with his body throughout his movement in space. The listener’s point of audition is located in the point of view, which in theory means this is the same mixing technique used in the aforementioned game.

    In Star Wars Battlefront II gameplay, the perception of the auditory space changes drastically between first-person mode and third-person mode, which moves the camera behind the player for a wider view. When the player chooses the first mode, the sound is muffled because the character is inside the spaceship. In the second mode, the spectator is still immersed in the sonic world but watches the action from a different point of view, and therefore hears the vastness of space from outside the spaceship.

    The video below shows the interior of the astronaut’s helmet; the spectator is therefore hearing and seeing what the character is experiencing. We are not only perceiving the auditory space: the character’s emotional state is rendered by a low-pass filter that conveys confusion. Gradually, the filter disappears as the camera moves the frame outside the helmet; here we perceive the scene as a second astronaut would, immersed in the three-dimensional space but further from the main character.
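    As a sketch of that mixing move (my own simplification, not the film’s actual signal chain): a one-pole low-pass filter whose cutoff is tied to how far “inside” the helmet the point of audition is, so the muffling gradually disappears as the frame pulls out.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000):
    """Very simple one-pole low-pass filter (about 6 dB/octave)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def helmet_cutoff(inside_amount, open_hz=18000.0, muffled_hz=800.0):
    """Map how far inside the helmet the point of audition is (0..1) to a cutoff."""
    return open_hz + inside_amount * (muffled_hz - open_hz)

ambience = [0.0, 1.0, 0.0, -1.0] * 4      # placeholder audio
print(helmet_cutoff(1.0))                  # 800 Hz: inside the helmet, muffled and confused
print(helmet_cutoff(0.0))                  # 18000 Hz: camera outside, filter effectively gone
muffled = one_pole_lowpass(ambience, helmet_cutoff(1.0))
```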

    Shadow of the Colossus

    In Shadow of the Colossus, the player follows the action from a third-person camera angle. When the character goes underwater, the player understands it thanks to the splash sound effects. But if the camera doesn’t go underwater, the gamer still hears the scene from outside the water. Once the camera reaches the character underwater, the player perceives the ambience as if they were immersed in the water. Hence, the point of audition is tied to the point of view.
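    The logic described here is essentially a mix state driven by the camera rather than by the character. A hypothetical sketch:

```python
def select_ambience(character_underwater: bool, camera_underwater: bool) -> str:
    """Point of audition follows the camera (the point of view), not the character.

    The splash effect tells the player the character went under, but the
    underwater ambience only engages once the camera itself is submerged.
    """
    if camera_underwater:
        return "underwater_ambience"          # muffled, immersed mix
    if character_underwater:
        return "surface_ambience_plus_splashes"
    return "surface_ambience"

print(select_ambience(character_underwater=True, camera_underwater=False))
print(select_ambience(character_underwater=True, camera_underwater=True))
```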

    Blade Runner 2049

    Blade Runner 2049 has similar action scenes. The one displayed below is a fight in which a character tries to drown his opponent. The way the sound mixer designed the audio follows the same concepts as Shadow of the Colossus.

    Conclusion

    The film industry is already taking advantage of game software, mostly to reach outstanding visual results. The Mandalorian, Spielberg’s Ready Player One, and Villeneuve’s Blade Runner 2049 used Unreal Engine to view VFX results in real time through the virtual camera plug-in directly on location. Why isn’t sound considered in this process yet?

    We have seen how the game experience is not so far from the film one. Now that game audio is taking a step further, I expect film audio to react and reach another level.

