This project sits at the intersection of engineering, perception, and product strategy. It explores how immersive headphones, rendering engines, and spatial-audio technologies shape the way we hear and listen – not in theory, but in real life.
My role was to provide the independent perspective companies increasingly need: the combination of technical expertise, critical listening, and strategic thinking that turns complex audio behaviour into experiences people instantly understand.
Here is how the client described my work:
“Martin is an expert in spatial audio who has a deep understanding of the technology and its impact on the listening experience. He brings a clear view of the use cases and how they will evolve in the future.
And to top it off, he is a pleasure to work with!” – Chad Lucien, VP & GM, Sensors and Audio BU at Ceva, Inc.
The learnings from this collaboration were then linked to a marketing-driven use case and translated into two in-depth articles.
Both texts bridge the gap between technical insight and real-world positioning and show how spatial audio behaves in everyday listening situations – and what that means for next-generation headphones. They were published here:
The market was evolving faster than its understanding
Spatial audio entered the audio world with force, but the understanding of what truly makes immersive headphones immersive lagged behind. Brands promoted features based on technology, not on perceived sound quality.
Users were promised a wider soundstage, more bass, or more realistic vocals – but no one defined clear criteria that made these experiences credible.
This became especially visible in categories ranging from in-ear monitors to audiophile headphones such as the Focal Clear MG, Focal Utopia, or classic Sennheiser models.
Even the best audiophile headphones struggled to explain why their sound signature delivered a different kind of realism compared with competing headphones or speakers.
Spatial audio demanded a new language – one that connects acoustics, perception, and engineering. That language didn’t exist yet.
Manufacturers were in a race to ship new devices with head-tracking, upmixing, adaptive EQ, and advanced noise control. But without an independent framework, teams couldn’t explain:
why one system externalises better
how sound interacts with movement
which parameters shape clarity, balance, and space
what actually makes an immersive experience consistent
This wasn’t just a technical gap. It was a strategic one. Teams needed a way to link DSP decisions to real-world perception, user expectations, and product differentiation.
In short: they needed acoustical consultants who could translate complex technology into human experience – not just more features.
Listeners weren’t interested in algorithms. They cared about what they actually hear when they listen. Do the voices anchor cleanly? Does the bass stay controlled? Does the soundstage collapse when they turn their head? Does the environment feel believable, or does everything drift?
The difference between “spatial audio” and truly immersive rendering often came down to one bit of tuning, one reflection, one head-tracking behaviour.
Without careful calibration, even premium open-back designs or high-end amp pairings couldn’t deliver the intended impact.
Users needed better products – and companies needed guidance to build them.
Inside product teams, everyone was very close to their own project. Engineers focused on DSP, designers on UX, marketing on positioning. What was missing was an external expert voice that could connect these worlds and evaluate them without bias.
Someone who understands buildings, rooms, acoustics, tuning, psychoacoustics, microphone behaviour, gaming use cases, music expectations, accessibility, and product strategy.
Someone with enough technical expertise to challenge assumptions – and enough storytelling skill to make the results understandable across disciplines.
That’s where my work began: not as a reviewer, not as a marketer, but as a consultant bridging perception, engineering, and business – helping teams build spatial-audio products that actually live up to their promise.
Comparative tests grounded in real listening – not marketing claims
The first step was to examine how different ecosystems – from audiophile headphones like the Focal Clear MG and Focal Utopia to consumer headphones and in-ear monitors – actually reproduce sound in real environments.
I compared multiple devices and rendering engines side by side and analysed how each system handled:
head-tracking stability
externalisation and soundstage formation
bass distribution and noise control
timbre shifts in vocals
clarity of voices under movement
spatial collapse at low volume
differences between open-back design and closed constructions
This allowed me to understand how immersive headphones behave under real human behaviour: walking, turning, looking down, or simply adjusting ear position. Unlike lab tests, these scenarios reveal how people actually listen – and where the experience breaks.
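The test dimensions above lend themselves to a simple per-device record. A minimal Python sketch of how such a session could be logged and screened for weak spots – the field names and the 1–5 scoring scale are my own illustrative choices, not the actual test tooling:

```python
from dataclasses import dataclass, asdict

@dataclass
class ListeningSession:
    """One comparative listening session for one device.

    Each perceptual dimension is scored 1 (broken) to 5 (excellent).
    The fields mirror the test dimensions listed above; the schema
    itself is hypothetical.
    """
    device: str
    head_tracking_stability: int
    externalisation: int
    bass_distribution: int
    vocal_timbre: int
    clarity_under_movement: int
    low_volume_spatial_stability: int

def weakest_dimensions(session: ListeningSession, threshold: int = 3) -> list[str]:
    """Return the dimensions scoring below the threshold -
    the places where the listening experience breaks."""
    scores = asdict(session)
    scores.pop("device")  # keep only the numeric dimensions
    return [name for name, score in scores.items() if score < threshold]

# Example: a device that externalises well but falls apart under movement.
open_back = ListeningSession("open-back prototype", 4, 5, 3, 4, 2, 2)
print(weakest_dimensions(open_back))
# → ['clarity_under_movement', 'low_volume_spatial_stability']
```

Scoring every device against the same fixed dimensions is what makes side-by-side comparisons meaningful: two systems can both sound "good" in isolation yet break in completely different places.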
Spatial audio is not just about algorithms. It’s about how those algorithms shape what listeners perceive. So I mapped every technical decision to a perceptual outcome:
How does early-reflection modelling change space?
How does an EQ curve affect localisation?
How does a specific tuning influence sound quality, balance, and performance?
How do noise artefacts mask directional cues?
How does the sound signature of Sennheiser or Sony models interact with spatial rendering?
This translation layer was crucial. It enabled engineering teams to understand the perceptual impact of their decisions – and gave product teams a shared language to explain value beyond pure technology.
To make the work sustainable, I developed a structured evaluation system – a matrix that connects engineering behaviour, perception, and product strategy. This framework includes:
localisation accuracy
externalisation depth
bass distribution
timbral changes
drift behaviour
interaction with music and gaming content
how well a system handles mismatched formats
where solutions should focus to achieve the biggest perceptual gains
Teams can use this model during development, tuning, QA, and even when selecting demo material or crafting user-facing explanations.
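To illustrate how such a matrix can steer tuning priorities, here is a hedged Python sketch: the criterion names, scores, and weights are invented for the example, and the real framework covers more dimensions than shown here.

```python
# Evaluation matrix: criterion -> (perceptual score 1-5, strategic weight).
# Scores and weights are illustrative placeholders, not the actual framework.
matrix = {
    "localisation_accuracy": (4, 1.0),
    "externalisation_depth": (2, 1.5),
    "bass_distribution":     (3, 0.8),
    "timbral_stability":     (4, 1.0),
    "drift_behaviour":       (2, 1.2),
    "format_robustness":     (3, 0.6),
}

def tuning_priorities(matrix: dict[str, tuple[int, float]]) -> list[str]:
    """Rank criteria by weighted headroom: (5 - score) * weight.

    A low score on a strategically important criterion yields the
    largest value - that is where the biggest perceptual gains sit."""
    headroom = {name: (5 - score) * weight
                for name, (score, weight) in matrix.items()}
    return sorted(headroom, key=headroom.get, reverse=True)

print(tuning_priorities(matrix))
# → ['externalisation_depth', 'drift_behaviour', 'bass_distribution', ...]
```

The point of the weighting is strategic: it lets product priorities, not raw scores alone, decide which audible weakness gets engineering attention first.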
My role went far beyond analysing waveforms. I helped teams decide which features truly matter for clients, what belongs in the budget, and which improvements make the biggest audible difference in users’ everyday lives. This included:
co-designing internal decision trees
advising on messaging strategy for business and OEM partners
identifying opportunities for differentiation
ensuring that perceptual benefits – not just features – drive roadmap priorities
It’s this blend of technical expertise, perceptual understanding, and strategic thinking that allowed the project to move from “good technology” to “great experience.”
A clear, independent evaluation of spatial-audio systems
The result was a comprehensive analysis that finally made it possible to compare spatial-audio implementations across headphones, in-ear monitors, audiophile models like the Focal Clear MG and Focal Utopia, and consumer devices.
This evaluation didn’t just list features – it explained how each system shapes the listener’s perception: how bass is distributed, how noise control affects localisation, how soundstage width changes with movement, and how the sound signature interacts with spatial rendering.
Teams gained a transparent view of what works, what needs fine-tuning, and what truly defines an immersive experience.
From the tests and perception studies, I built a reusable matrix that product, engineering, and UX teams can apply long-term. It describes the perceptual factors that define credible spatial audio, highlights tuning sweet spots, and exposes technical behaviours that matter most for sound quality and user experience.
This framework became a shared language across disciplines – a tool that guides decisions, reduces uncertainty, and aligns teams on what “better” concretely means.
The project led to highly practical recommendations:
tuning strategies for more stable externalisation
EQ adjustments for more balanced timbre
workflow optimisations for greater clarity
UX concepts that help users make sense of what they hear
suggestions for demo content that showcases strengths instead of weaknesses
roadmap pointers that prioritise perceptual value over technical novelty
This ensured that the final product didn’t just measure well – it felt right when people listened.
Beyond immediate improvements, the project helped define how the company should position its spatial-audio technology in a rapidly evolving world.
It became clearer where true differentiation is realistic, how to communicate advantages without overwhelming users, and how to build trust in a category where expectations are high and misunderstandings are common.
This wasn’t a one-off service. It turned into an ongoing collaboration where analysis, tuning, perception, and strategy moved hand in hand.
By combining professional standards with perceptual sensitivity, the work helped the team develop solutions that perform reliably – across music, gaming, film, communication, and everyday audio use.