Inspiration
At GDC, the Battlefield 6 audio team gave a talk. Battlefield 6 arguably has some of the best audio engineering and design in the game industry, so attending was an easy choice. One thing they showed stuck with us: how they debugged and visualized their audio inside their engine, making it as realistic and immersive as possible. Could we do that in Unity as well? That's what we set out to build.
What it does
Unity.wav is a runtime audio debugging and spatial-audio simulation tool built in Unity 6 (URP). It layers physics-based occlusion, Doppler displacement, reverb, and low-pass filtering on top of Unity's built-in audio engine, and exposes everything through an in-editor gizmo system and an in-game debug overlay.
How we built it
We built it on Unity's built-in audio pipeline (AudioSource, AudioLowPassFilter, and AudioReverbFilter) and extended it with a custom DebugEmitter component that runs a five-stage spatial audio pipeline every frame: listener resolution, distance, Doppler displacement, occlusion, and pitch. For occlusion, two physics rays per wall give entry and exit collision points, from which we compute wall thickness to drive continuous attenuation and low-pass filtering. For reverb, 10 outward probe rays feed Sabine's formula, and a second round of visibility rays from each wall hit to the listener derives physically grounded reflection delay, reverb delay, and diffusion values from the actual room geometry. The debug overlay is a runtime IMGUI table showing distance, occlusion factor, displacement, pitch, RT60 (from Sabine's formula), and LPF cutoff for every active emitter, paired with a scene-view gizmo system, a GL-rendered screen-space direction visualizer, and a minimap driven by a URP overlay camera stack.
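The Doppler stage of the pipeline comes down to the classic Doppler shift applied as a pitch multiplier. As a minimal sketch of the math (this is illustrative Python, not our Unity component; the function name and velocities are made up for the example):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_pitch(v_source_toward_listener, v_listener_toward_source=0.0):
    """Classic Doppler pitch multiplier: (c + v_listener) / (c - v_source).

    Positive velocities mean the source and listener are closing in on
    each other; velocities are assumed to stay well below c.
    """
    return ((SPEED_OF_SOUND + v_listener_toward_source)
            / (SPEED_OF_SOUND - v_source_toward_listener))

# An emitter approaching the listener at 30 m/s sounds ~10% higher pitched.
print(round(doppler_pitch(30.0), 3))  # 1.096
```

In a Unity setup, the per-frame displacement of the emitter relative to the listener gives the velocities, and a multiplier like this would typically be written to AudioSource.pitch each frame.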
Challenges we ran into
Our main issue was occlusion. Our first approach fired rays from inside colliders, which Unity's physics doesn't handle cleanly, so we switched to a two-ray system (forward + reverse) that gives proper entry and exit points per wall and produces a smooth corner fade rather than a binary on/off. Timing conflicts between Unity's script lifecycle and dynamic filter creation caused recurring "Only custom filters can be played" errors; we solved them by destroying any pre-existing filters in Awake and adding filters lazily only after playback starts. The rest of the spatial audio pipeline had its own challenges, but every solution was inspired by the Battlefield 6 team's GDC talk.
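The two-ray fix is what makes the smooth fade possible: once you have both entry and exit points, you know the wall's thickness, and thickness can drive a continuous occlusion factor instead of a hard cutoff. A rough sketch of that mapping in Python (the exponential falloff curve and its constant are illustrative tuning choices, not necessarily what we shipped):

```python
import math

def occlusion_from_thickness(entry_point, exit_point, falloff_per_meter=2.0):
    """Map wall thickness (distance between the forward ray's entry hit and
    the reverse ray's exit hit) to a continuous occlusion factor in [0, 1].

    0 = fully audible, 1 = fully occluded. The exponential falloff constant
    is a tuning parameter, not a physical one.
    """
    thickness = math.dist(entry_point, exit_point)
    return 1.0 - math.exp(-falloff_per_meter * thickness)

# A thin wall occludes only partially; a thick one almost fully.
print(round(occlusion_from_thickness((0, 0, 0), (0.1, 0, 0)), 2))  # 0.18
print(round(occlusion_from_thickness((0, 0, 0), (1.5, 0, 0)), 2))  # 0.95
```

Because the factor varies continuously with thickness, an emitter sliding past a wall edge fades in and out gradually, which is exactly the corner behavior the two-ray system was built to produce.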
Accomplishments that we're proud of
We are proud of the system we built. The acoustics implementation came together smoothly and, in the end, sounded genuinely impressive. Turning most of the ideas from the talk into a tool that helps Unity developers is mind-blowing, and we would love to keep diving into this kind of work in the future!
What we learned
We learned that realistic spatial audio is fundamentally a geometry problem: room size, wall thickness, and reflection path length are all the inputs you need for a physically plausible reverb tail, as long as you know how to feed them into Unity's filter parameters. We also learned the hard way that "physically based" doesn't mean "perceptually convincing": our first Sabine implementation was technically correct but produced reverb too quiet to hear. More broadly, we learned that audio debugging and engineering is a niche but fascinating field, and that seeing a system's internal state in real time (occlusion factor, RT60, LPF cutoff, all live) completely changes how you design and tune it.
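The Sabine estimate itself is nearly a one-liner; the hard part is mapping its output onto filter parameters that are actually audible. For reference, here is the formula with made-up room numbers (the dimensions and absorption coefficient below are hypothetical, not values from our scenes):

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time (RT60, in seconds) via Sabine's formula:

        RT60 = 0.161 * V / sum(S_i * alpha_i)

    surfaces: list of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 5 x 4 x 3 m room with fairly reflective surfaces (alpha = 0.1).
volume = 5 * 4 * 3                        # 60 m^3
surface_area = 2 * (5*4 + 5*3 + 4*3)      # 94 m^2 of walls, floor, ceiling
print(round(sabine_rt60(volume, [(surface_area, 0.1)]), 2))  # 1.03 s
```

In our pipeline the volume and surface terms come from the probe-ray hits rather than known room dimensions, and the resulting RT60 is what the debug overlay reports per emitter.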
What's next for Unity.wav
Next, we plan to replace Unity's built-in audio system with industry-standard audio middleware such as Wwise, fix the remaining noticeable bugs in the audio system, and improve the debugger's overall performance through optimization (e.g., suspending an emitter's processing when the player is out of range or not interacting with it).

