Inspiration

We were inspired by the rising mental health challenges globally, especially among youth and professionals facing chronic stress. Suicide rates are climbing, and while therapy and medication help, we believe technology can offer more accessible, real-time support. Our project bridges neuroscience and music to create an environment where users can better understand and manage their mental state using EEG data.
What It Does

Our app captures real-time EEG signals from the user and analyzes the standard brainwave frequency bands: alpha, beta, delta, theta, and gamma. These bands correlate with different mental states such as stress, relaxation, and deep focus. Using a pre-labeled dataset, we identify specific stress types based on brainwave imbalances. Once a mental state is detected, our system selects and plays music, curated by genre and tempo (BPM), to help reduce stress and improve well-being.
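The band-based classification above can be sketched in a few lines. The frequency ranges below are the standard EEG conventions; the ratio thresholds and state labels are illustrative assumptions, not our calibrated values:

```python
# Standard EEG band definitions (Hz); ranges vary slightly by source.
BANDS = {
    "delta": (0.5, 4.0),    # deep sleep
    "theta": (4.0, 8.0),    # drowsiness, meditation
    "alpha": (8.0, 13.0),   # relaxed wakefulness
    "beta": (13.0, 30.0),   # active thinking, arousal
    "gamma": (30.0, 100.0), # high-level cognition
}

def classify_state(band_power: dict) -> str:
    """Map relative band powers to a coarse mental-state label.

    `band_power` maps band name -> power in any consistent unit.
    A high beta/alpha ratio is commonly read as arousal or stress,
    dominant alpha as relaxation, dominant theta+delta as drowsiness.
    The 1.5 and 0.5 thresholds here are placeholders.
    """
    total = sum(band_power.values()) or 1.0
    rel = {band: p / total for band, p in band_power.items()}
    if rel["beta"] > rel["alpha"] * 1.5:
        return "stressed"
    if rel["alpha"] >= max(rel["beta"], rel["theta"]):
        return "relaxed"
    if rel["theta"] + rel["delta"] > 0.5:
        return "drowsy"
    return "focused"
```

For example, a window with dominant alpha power (`{"delta": 1, "theta": 1, "alpha": 4, "beta": 2, "gamma": 0.5}`) classifies as "relaxed", while one with dominant beta classifies as "stressed".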
Features include:
- EEG device connection and signal-quality monitoring
- Real-time mental state classification
- Stress type detection (e.g., cognitive, emotional)
- Music therapy suggestions tailored to your brain activity
- History tracking of mental states over time
How We Built It

We used a combination of modern frontend and backend technologies:
- Frontend: Vite + React with Tailwind CSS for a responsive, clean UI
- Backend: FastAPI to handle EEG data processing and mental state classification
- Languages: Python (EEG parsing, FastAPI logic) and TypeScript (frontend logic)
- EEG data processing
- APIs: integrated the Spotify API for music control, and used the Gemini API for stress-type classification via contextual prompt engineering
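The state-to-music step can be sketched as a lookup that feeds the Spotify recommendations endpoint. The genre names and BPM ranges below are illustrative placeholders, not our curated dataset; `seed_genres`, `min_tempo`, and `max_tempo` are real parameters of Spotify's recommendations API:

```python
# Hypothetical mapping from detected mental state to a therapy target.
# Genres and BPM ranges are illustrative, not clinically validated.
THERAPY_MAP = {
    "stressed": {"genre": "ambient",  "bpm": (60, 80)},
    "drowsy":   {"genre": "pop",      "bpm": (110, 130)},
    "relaxed":  {"genre": "acoustic", "bpm": (80, 100)},
    "focused":  {"genre": "lo-fi",    "bpm": (70, 90)},
}

def build_spotify_query(state: str) -> dict:
    """Build query parameters for Spotify's /recommendations endpoint.

    Unknown states fall back to the "relaxed" target.
    """
    target = THERAPY_MAP.get(state, THERAPY_MAP["relaxed"])
    lo, hi = target["bpm"]
    return {
        "seed_genres": target["genre"],
        "min_tempo": lo,
        "max_tempo": hi,
    }
```

In the app, the returned dict would be sent (with an OAuth token) as query parameters to the Spotify Web API.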
Challenges We Ran Into

- EEG data inconsistency: most EEG datasets are stored in .edf format and vary widely in structure and quality. We had to normalize input and calibrate baseline brainwave levels.
- API key management: handling multiple third-party APIs (Spotify, Gemini) required secure key storage, rate limiting, and refresh-token logic.
- Real-time performance: ensuring a fast response between brainwave detection and music playback was crucial to the user experience.
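The baseline calibration mentioned above can be sketched as follows: record a short resting-state window per user, average its band powers, then express live readings relative to that personal baseline so device and user variability largely cancel out. Function and field names here are illustrative:

```python
def calibrate_baseline(samples: list[dict]) -> dict:
    """Average band powers over a resting-state recording.

    `samples` is a non-empty list of band-power dicts, one per window,
    all sharing the same band keys.
    """
    bands = samples[0].keys()
    return {b: sum(s[b] for s in samples) / len(samples) for b in bands}

def normalize(live: dict, baseline: dict, eps: float = 1e-9) -> dict:
    """Express a live band-power reading as a ratio to the baseline.

    A value near 1.0 means "same as this user's resting state";
    `eps` guards against division by zero for flat channels.
    """
    return {b: live[b] / (baseline[b] + eps) for b in live}
```

With this in place, the classifier thresholds operate on per-user ratios rather than raw microvolt-scale powers, which differ across headsets.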
Accomplishments We're Proud Of

- Successfully integrated Spotify to respond dynamically to brainwave data.
- Built a fully functioning EEG analysis pipeline from raw data to visual UI.
- Developed a visually clean and user-friendly website using modern tools.
- Identified and categorized stress types based on real brainwave patterns.
What We Learned

- Working with real EEG data is significantly messier than we expected; handling noise, device errors, and user variability was a major learning curve.
- Music therapy isn't just subjective: there are quantifiable correlations between BPM and stress relief across different stress types.
- Balancing backend performance with a real-time frontend is challenging, especially when streaming data and interacting with multiple APIs.
What's Next for Symph?
- Expand EEG hardware support (e.g., Muse, Emotiv) and add mobile app integration.
- Improve AI-based stress detection by training on more diverse EEG datasets.
- Allow users to personalize music preferences within therapeutic recommendations.
- Explore partnerships with mental health platforms and schools for broader access.