Inspiration

Auto EQ was inspired by a recurring problem in everyday music listening: audio quality varies significantly across songs, albums, and genres, yet most desktop music players either lack an equalizer or provide one that does not actually process the audio signal. Professional audio tools solve this problem well, but they are designed for producers, not listeners, and come with steep learning curves.

The idea behind Auto EQ was to bridge this gap. We wanted to build a system that delivers professional-grade audio enhancement while remaining simple, intuitive, and accessible. The guiding belief was that software should adapt to the music and the listener, rather than forcing users to manually tweak complex settings.


What it does

Auto EQ is a desktop audio player with a real-time 10-band parametric equalizer that directly processes the audio stream. Unlike visual-only equalizers, Auto EQ applies true digital signal processing to modify the sound in real time.

In addition to manual control, Auto EQ includes an intelligent “Auto Analyze” feature. This feature analyzes the frequency spectrum of a song and automatically generates a balanced EQ profile. Users can instantly hear improved clarity, bass balance, and overall tonal consistency without any prior audio engineering knowledge.


How we built it

Auto EQ was built using a hybrid architecture that separates the user interface from the audio processing engine.

The frontend is implemented using Flutter, providing a modern, responsive desktop interface with smooth animations, drag-and-drop playlists, and intuitive EQ controls. The backend is written in Python and handles all audio decoding, analysis, and real-time processing.

The audio engine applies cascaded biquad filters for each EQ band. Each band is modeled as a second-order IIR filter with the transfer function:

\[
H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{a_0 + a_1 z^{-1} + a_2 z^{-2}}
\]
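A standard way to realize such a band is the peaking-EQ form from the RBJ Audio EQ Cookbook. The sketch below is an assumption, not Auto EQ's actual coefficient code: it computes normalized biquad coefficients for one band and cascades several bands with SciPy's `lfilter` (the function names `peaking_biquad` and `apply_eq` are illustrative).

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """Normalized (b, a) coefficients for one peaking-EQ band
    (RBJ Audio EQ Cookbook form)."""
    a_lin = 10.0 ** (gain_db / 40.0)        # sqrt of the linear gain
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]               # normalize so a0 == 1

def apply_eq(samples, fs, bands):
    """Cascade one biquad per (center_hz, gain_db) band."""
    out = np.asarray(samples, dtype=float)
    for f0, gain_db in bands:
        b, a = peaking_biquad(fs, f0, gain_db)
        out = lfilter(b, a, out)
    return out
```

A 0 dB band reduces to the identity filter, which makes the formulas easy to sanity-check; a real-time engine would process short blocks with carried filter state rather than whole buffers.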

Multiple filters are chained together to achieve smooth and precise control over the full audible frequency range. The Auto Analyze feature uses a Short-Time Fourier Transform (STFT) to measure frequency-band energy and generate EQ gains, followed by smoothing to ensure natural-sounding results.
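The Auto Analyze pipeline described above can be sketched as follows. This is a minimal illustration, not Auto EQ's actual implementation: the band layout, octave-wide band masks, flat-spectrum target, clamp limit, and neighbor smoothing are all assumptions.

```python
import numpy as np

# Illustrative 10-band layout; the real band centers are an assumption here.
BANDS_HZ = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

def auto_analyze(samples, fs, frame=2048, hop=1024, max_db=6.0):
    """Sketch of Auto Analyze: STFT band energies -> smoothed EQ gains."""
    # Short-Time Fourier Transform: Hann-windowed frames -> magnitude spectra.
    window = np.hanning(frame)
    n_frames = max(1, (len(samples) - frame) // hop)
    spectra = np.stack([
        np.abs(np.fft.rfft(samples[i * hop : i * hop + frame] * window))
        for i in range(n_frames)
    ])
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    mean_mag = spectra.mean(axis=0)

    # Average log-energy in a one-octave window around each band center.
    band_db = []
    for f0 in BANDS_HZ:
        mask = (freqs >= f0 / np.sqrt(2)) & (freqs < f0 * np.sqrt(2))
        energy = np.mean(mean_mag[mask] ** 2) + 1e-12
        band_db.append(10.0 * np.log10(energy))
    band_db = np.array(band_db)

    # Push each band toward the overall mean, clamped for filter stability.
    gains = np.clip(band_db.mean() - band_db, -max_db, max_db)
    # Light smoothing across neighboring bands for natural-sounding results.
    return np.convolve(gains, [0.25, 0.5, 0.25], mode="same")
```

For roughly flat material (e.g. white noise) the suggested gains stay near zero, while an over-represented band receives a cut and an under-represented one a boost.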

The frontend and backend communicate locally through a REST-based interface, allowing real-time interaction without blocking the UI.


Challenges we ran into

One major challenge was achieving low-latency, glitch-free audio playback. Real-time audio processing is sensitive to delays, and even small inefficiencies can introduce audible artifacts. This required careful optimization of numerical operations and strict control over buffer sizes.
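One concrete source of such artifacts is discarding filter state at block boundaries, which produces audible clicks. A minimal sketch of the fix (assuming SciPy; the class name is illustrative) carries `lfilter`'s internal state across blocks so block-wise output matches one-shot filtering exactly:

```python
import numpy as np
from scipy.signal import lfilter, lfilter_zi

class StreamingBiquad:
    """Filter audio block-by-block while carrying internal state across
    block boundaries; resetting the state each block causes clicks."""
    def __init__(self, b, a):
        self.b = np.asarray(b, dtype=float)
        self.a = np.asarray(a, dtype=float)
        self.zi = lfilter_zi(self.b, self.a) * 0.0   # start from silence

    def process(self, block):
        out, self.zi = lfilter(self.b, self.a, block, zi=self.zi)
        return out
```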

Another challenge was filter stability. Early versions produced distortion due to incorrect coefficient calculations and abrupt parameter changes. These issues were resolved by clamping gain values, smoothing transitions, and thoroughly testing the filters across a wide range of inputs.
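The clamping-and-smoothing idea can be sketched as a one-pole ramp toward a clamped target gain, updated once per audio block; the class and parameter names below are hypothetical, not Auto EQ's actual code.

```python
class SmoothedGain:
    """One-pole smoothing toward a clamped target gain (in dB)."""
    def __init__(self, limit_db=12.0, coeff=0.2):
        self.limit_db = limit_db
        self.coeff = coeff          # fraction of remaining distance per block
        self.current = 0.0
        self.target = 0.0

    def set_target(self, gain_db):
        # Clamp first: extreme gains are a classic source of unstable
        # or distorting filter coefficients.
        self.target = max(-self.limit_db, min(self.limit_db, gain_db))

    def next_block(self):
        # Move a fixed fraction toward the target each audio block, so
        # filter coefficients change gradually instead of jumping.
        self.current += self.coeff * (self.target - self.current)
        return self.current
```

Recomputing coefficients from the ramped value each block trades a few multiplies for glitch-free parameter changes.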

Synchronizing rapid UI interactions with the backend was also challenging. Frequent slider updates could overwhelm the audio engine, so debouncing and state synchronization mechanisms were introduced.
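In Auto EQ the debounce lives on the Flutter side, but the same idea expressed in Python looks like this sketch (names are illustrative): a burst of slider updates is coalesced so only the final value within the wait window reaches the audio engine.

```python
import threading

class Debouncer:
    """Coalesce bursts of updates: only the last value submitted within
    `wait` seconds is forwarded (e.g. from a rapidly dragged EQ slider)."""
    def __init__(self, wait, callback):
        self.wait = wait
        self.callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def submit(self, value):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()          # drop the superseded update
            self._timer = threading.Timer(self.wait, self.callback,
                                          args=(value,))
            self._timer.start()
```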


Accomplishments that we are proud of

We are proud of delivering a fully functional desktop application within the hackathon timeframe. Auto EQ is not a mockup or a visual demo; it performs real-time audio processing using proper DSP techniques.

Key accomplishments include:

  • A true 10-band parametric equalizer with real-time audio processing
  • An automatic EQ generation system based on spectral analysis
  • A modern, responsive desktop UI
  • A clean, modular architecture that supports future expansion

What we learned

Through this project, we gained hands-on experience with digital signal processing concepts such as STFT, frequency-domain analysis, and biquad filter design. We also learned how to architect real-time systems that balance performance, correctness, and user experience.

Beyond technical skills, the project reinforced the importance of product thinking—designing features that are powerful but approachable, and focusing on how users actually interact with complex systems.


What's next for Auto EQ

Future plans for Auto EQ include cross-platform support, advanced audio effects, personalized listener profiles, and machine-learning–based EQ recommendations. Additional features such as hearing calibration, room correction, and streaming service integration are also planned.

The current architecture was intentionally designed to support these enhancements, making Auto EQ a strong foundation for continued development beyond the hackathon.
