🎧 Co-sounds
🌱 Inspiration
Stress is something we can all relate to, and music is a universal way to relax.
Dr. Michael Frishkopf's Mindful Listening Spaces at the Cameron Library aimed to bring students together through shared ambient soundscapes. However, participation remained low: students rarely interacted with the system, limiting its ability to adapt to collective preferences.
Our team was inspired to solve this by making interaction seamless, non-intrusive, and meaningful. We asked ourselves:
- How can we get students to participate effortlessly?
- Can we identify users without forcing sign-ups?
- How can the system stay ethical and preserve privacy?
Co-sounds is our answer: a blend of AI, sound, and interaction design that lets students co-create adaptive, mindful soundscapes together.
🎶 What It Does
Co-sounds transforms passive listening into a collaborative, responsive experience.
Students simply tap their phones on an NFC tag to:
- Submit quick preferences or votes on the current soundscape
- Provide feedback on relaxation and focus levels
- Seamlessly contribute to a collective mood model
The system uses this data to generate adaptive soundscapes that reflect both individual and group preferences, helping students relax and connect in shared spaces.
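As a toy illustration of how a collective mood model can work, each tap can be treated as a small preference vector, the vectors averaged into a group mood, and the closest soundscape embedding selected. The dimensions, weights, and soundscape names below are invented for the example, not Co-sounds' production logic:

```python
import numpy as np

# Hypothetical sketch: each NFC-tap vote becomes a small preference vector,
# e.g. [calm, focus, energy] on a 0-1 scale.
votes = np.array([
    [0.9, 0.7, 0.1],   # student A: wants calm, focused ambience
    [0.6, 0.9, 0.2],   # student B
    [0.8, 0.5, 0.3],   # student C
])

# Collective mood = average of the individual preference vectors.
collective = votes.mean(axis=0)

# Candidate soundscapes described by embeddings in the same space.
soundscapes = {
    "rainfall":   np.array([0.95, 0.6, 0.1]),
    "cafe_hum":   np.array([0.5, 0.4, 0.6]),
    "forest_air": np.array([0.85, 0.8, 0.2]),
}

def cosine(a, b):
    # Cosine similarity between a soundscape embedding and the group mood.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(soundscapes, key=lambda name: cosine(soundscapes[name], collective))
print(f"collective mood: {collective.round(2)} -> play '{best}'")
```

The real system presumably repeats this as new taps arrive, so the mix can drift with the room's mood over time.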
🏗️ How We Built It
Architecture
Co-sounds consists of three integrated components:
1. 🌐 Web Application
- React-based responsive interface
- Real-time voting and feedback system
- NFC tag support for tap-based interaction
- Supabase authentication and data storage
- Music preference surveys and user settings
- Vote confirmation animations and progress indicators
2. 🖥️ Backend Server
- Express.js REST API
- Secure integration with Supabase
- JWT authentication and API key protection
- Real-time session management for collective soundscapes
3. 🧠 Machine Learning Model
- Built with a Linear Ridge Regression classifier
- Trained on the ESC-50 dataset (Environmental Sound Classification)
- Generates audio feature embeddings used to match user preferences to songs
- Produces both individual and collective recommendation vectors
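A rough sketch of that pipeline is below. The specific features (MFCCs plus spectral contrast), hyperparameters, and helper names are our assumptions for illustration; the team's actual model may differ, but the shape is the same: librosa features averaged over time, fed to a scikit-learn ridge classifier trained on ESC-50 labels.

```python
import numpy as np
import librosa
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def embed(path: str) -> np.ndarray:
    """Turn a raw audio clip into a fixed-length feature embedding."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)          # timbre
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)    # spectral texture
    # Average over time so every clip maps to a same-sized vector.
    return np.concatenate([mfcc.mean(axis=1), contrast.mean(axis=1)])

def train(clips: list[tuple[str, str]]):
    """Fit a ridge classifier on (audio path, ESC-50 category) pairs."""
    X = np.stack([embed(path) for path, _ in clips])
    y = [label for _, label in clips]
    model = make_pipeline(StandardScaler(), RidgeClassifier(alpha=1.0))
    model.fit(X, y)
    return model
```

The same `embed` step can produce the per-song embeddings that user preferences are matched against.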
⚙️ Challenges We Ran Into
- Designing an interaction flow that was low-effort but engaging
- Balancing anonymity with persistent user identification
- Training a sound classification model from raw audio using hand-crafted feature extraction (librosa) and regression techniques
- Integrating physical NFC inputs with digital web services
- Ensuring reliable real-time feedback loops between frontend, backend, and ML model (a rough sketch of the loop's entry point follows this list)
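To make that feedback loop concrete, here is a minimal sketch of a vote-recording endpoint. FastAPI and Supabase come from the stack listed at the end of this page, but the route, payload fields, table name, and shared-secret check are illustrative assumptions, not the project's actual API (which is described above as an Express.js service with JWT authentication).

```python
# Illustrative sketch only: route, fields, table, and header check are assumptions.
import os

from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel
from supabase import create_client

app = FastAPI()
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

class Vote(BaseModel):
    session_id: str      # current collective-soundscape session
    tag_id: str          # NFC tag that was tapped
    relaxation: int      # self-reported relaxation level
    focus: int           # self-reported focus level

@app.post("/votes")
def submit_vote(vote: Vote, x_api_key: str = Header(...)):
    # Simple shared-secret check standing in for the real JWT / API-key layer.
    if x_api_key != os.environ["CO_SOUNDS_API_KEY"]:
        raise HTTPException(status_code=401, detail="invalid API key")
    # Persist the vote; the ML service reads this table to update the session mix.
    supabase.table("votes").insert(vote.model_dump()).execute()
    return {"status": "recorded"}
```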
🏆 Accomplishments That We're Proud Of
- Successfully built a working prototype that connects NFC inputs to an adaptive ML pipeline
- Developed a linear ridge regression model that classifies soundscapes using ESC-50 data
- Created a learning algorithm that evolves based on user feedback and collective trends
💡 What We Learned
- The power of user-centered design in encouraging participation
- How to bridge physical interactions (NFC) with cloud-based AI systems
- The importance of ethical data collection and minimizing intrusiveness
- How small design choices (like frictionless taps) can dramatically increase engagement
🚀 What's Next for Co-sounds
- Deploying Co-sounds in the Cameron Library Mindful Listening Space for pilot testing
- Expanding the ML system to learn from emotion recognition
- Building a mobile app companion for personalized profiles and real-time analytics
- Introducing new sound categories and generative audio synthesis for richer ambient experiences
Built With
- express.js
- fastapi
- librosa
- nfc
- numpy
- python
- react
- scikit-learn
- spotify
- supabase