🎧 Co-sounds

🌱 Inspiration

Stress is something we can all relate to, and music is a universal way to relax.
Dr. Michael Frishkopf’s Mindful Listening Spaces at the Cameron Library aimed to bring students together through shared ambient soundscapes. However, participation remained low: students rarely interacted with the system, limiting its ability to adapt to collective preferences.

Our team was inspired to solve this by making interaction seamless, non-intrusive, and meaningful. We asked ourselves:

  • How can we get students to participate effortlessly?
  • Can we identify users without forcing sign-ups?
  • How can the system stay ethical and preserve privacy?

Co-sounds is our answer: a blend of AI, sound, and interaction design that lets students co-create adaptive, mindful soundscapes together.


🎶 What It Does

Co-sounds transforms passive listening into a collaborative, responsive experience.
Students simply tap their phones on an NFC tag to:

  • Submit quick preferences or votes on the current soundscape
  • Provide feedback on relaxation and focus levels
  • Seamlessly contribute to a collective mood model

The system uses this data to generate adaptive soundscapes that reflect both individual and group preferences, helping students relax and connect in shared spaces.
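
To make that flow concrete, here is a rough TypeScript sketch of what a single tap could submit and how votes might be folded into the collective mood model. The field names and the simple running average are illustrative assumptions, not our exact schema.

```typescript
// Illustrative only: field names and the averaging scheme are assumptions.

interface TapVote {
  spaceId: string;      // which listening space the NFC tag belongs to
  soundscapeId: string; // the soundscape playing when the tap happened
  relaxation: number;   // self-reported relaxation, e.g. 1-5
  focus: number;        // self-reported focus, e.g. 1-5
}

interface CollectiveMood {
  relaxation: number; // running group average
  focus: number;
  votes: number;      // how many taps have been folded in
}

// Fold one vote into the running group averages (incremental mean),
// so the collective mood updates on every tap without re-reading history.
function updateCollectiveMood(mood: CollectiveMood, vote: TapVote): CollectiveMood {
  const n = mood.votes + 1;
  return {
    relaxation: mood.relaxation + (vote.relaxation - mood.relaxation) / n,
    focus: mood.focus + (vote.focus - mood.focus) / n,
    votes: n,
  };
}
```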


🏗️ How We Built It

Architecture

Co-sounds consists of three integrated components:

1. 🌐 Web Application

  • React-based responsive interface
  • Real-time voting and feedback system
  • NFC tag support for tap-based interaction (see the sketch after this list)
  • Supabase authentication and data storage
  • Music preference surveys and user settings
  • Vote confirmation animations and progress indicators
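
As one way the tap flow above could be wired up, the sketch below reads the space ID encoded in the NFC tag's URL and posts a vote to the backend. The URL format, the `space` query parameter, and the `/api/votes` endpoint are assumptions for illustration, not our exact routes.

```typescript
// Illustrative only: the tag URL format and the /api/votes endpoint are assumptions.
// The NFC tag encodes a URL such as https://co-sounds.example/tap?space=cameron-1,
// so a tap opens the web app with the listening space already identified.

const params = new URLSearchParams(window.location.search);
const spaceId = params.get("space");

async function submitVote(relaxation: number, focus: number): Promise<void> {
  if (!spaceId) return; // page was opened without a tag tap

  const res = await fetch("/api/votes", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ spaceId, relaxation, focus }),
  });

  if (res.ok) {
    showConfirmation(); // trigger the vote confirmation animation
  }
}

function showConfirmation(): void {
  // Placeholder for the confirmation animation / progress indicator.
  console.log("Vote recorded, thanks for contributing!");
}
```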

2. 🖥️ Backend Server

  • Express.js REST API
  • Secure integration with Supabase
  • JWT authentication and API key protection (sketched below)
  • Real-time session management for collective soundscapes
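
A minimal sketch of what a protected vote endpoint could look like, assuming a `votes` table in Supabase. The header name, table and column names, and environment variables are illustrative assumptions rather than our exact configuration.

```typescript
// Illustrative only: table/column names, header names, and env vars are assumptions.
import express from "express";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);
const app = express();
app.use(express.json());

// Simple API key gate in front of the voting endpoints.
app.use((req, res, next) => {
  if (req.headers["x-api-key"] !== process.env.CO_SOUNDS_API_KEY) {
    return res.status(403).json({ error: "Bad API key" });
  }
  next();
});

// Verify the Supabase JWT sent by the web app, then store the vote.
app.post("/api/votes", async (req, res) => {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!token) return res.status(401).json({ error: "Missing token" });

  const { data, error } = await supabase.auth.getUser(token);
  if (error || !data.user) return res.status(401).json({ error: "Invalid token" });

  const { spaceId, relaxation, focus } = req.body;
  const { error: insertError } = await supabase
    .from("votes")
    .insert({ user_id: data.user.id, space_id: spaceId, relaxation, focus });

  if (insertError) return res.status(500).json({ error: insertError.message });
  return res.status(201).json({ ok: true });
});

app.listen(3000);
```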

3. 🧠 Machine Learning Model

  • Built with a ridge regression classifier (a linear model with L2 regularization)
  • Trained on the ESC-50 dataset (Environmental Sound Classification)
  • Generates audio feature embeddings used to match user preferences to songs (see the sketch after this list)
  • Produces both individual and collective recommendation vectors
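
The matching step can be pictured roughly as follows. This TypeScript sketch only illustrates scoring soundscape embeddings against an individual preference vector and averaging users into a collective vector; it stands in for, rather than reproduces, the ridge-regression pipeline itself, and all names and shapes are assumptions.

```typescript
// Illustrative only: embedding shapes and names are assumptions,
// not the actual ESC-50 / ridge-regression pipeline.

type Vec = number[];

function dot(a: Vec, b: Vec): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

// Cosine similarity between a preference vector and a soundscape embedding.
function cosine(a: Vec, b: Vec): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)) || 1);
}

// Average individual preference vectors into a collective recommendation vector.
function collectiveVector(userVectors: Vec[]): Vec {
  const dims = userVectors[0].length;
  return Array.from({ length: dims }, (_, d) =>
    userVectors.reduce((sum, v) => sum + v[d], 0) / userVectors.length
  );
}

// Rank candidate soundscapes by similarity to a preference vector
// (works for both an individual vector and the collective one).
function rankSoundscapes(pref: Vec, catalog: { id: string; embedding: Vec }[]) {
  return catalog
    .map((s) => ({ id: s.id, score: cosine(pref, s.embedding) }))
    .sort((a, b) => b.score - a.score);
}
```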

⚙️ Challenges We Ran Into

  • Designing an interaction flow that was low-effort but engaging
  • Balancing anonymity with persistent user identification
  • Training a sound classification model from raw audio using mathematical feature extraction and regression techniques
  • Integrating physical NFC inputs with digital web services
  • Ensuring reliable real-time feedback loops between frontend, backend, and ML model

🏅 Accomplishments That We're Proud Of

  • Successfully built a working prototype that connects NFC inputs to an adaptive ML pipeline
  • Developed a linear ridge regression model that classifies soundscapes using ESC-50 data
  • Created a learning algorithm that evolves based on user feedback and collective trends

💡 What We Learned

  • The power of user-centered design in encouraging participation
  • How to bridge physical interactions (NFC) with cloud-based AI systems
  • The importance of ethical data collection and minimizing intrusiveness
  • How small design choices (like frictionless taps) can dramatically increase engagement

🚀 What's Next for Co-sounds

  • Deploying Co-sounds in the Cameron Library Mindful Listening Space for pilot testing
  • Expanding the ML system to learn from emotion recognition
  • Building a mobile app companion for personalized profiles and real-time analytics
  • Introducing new sound categories and generative audio synthesis for richer ambient experiences
