Inspiration
It is a cold, dark, and windy winter night, and you find yourself on an arduous journey: trekking down seemingly endless roads, carrying a massive bag, and bundled up in heavy winter kit. In an attempt to relieve the boredom, you decide to listen to music. However, your phone is buried in a pocket of one of your inner layers, and you realize you'll need to remove your glove to use the touchscreen—making it impossible to search for new music without freezing your fingers. Frustrating, right?
What it does
Introducing Auracle—a revolutionary wearable system that lets you control and even generate music using just your voice and gestures, all without taking your hands out of your pockets.
How we built it
The Glasses: Equipped with a voice recording module, the glasses capture your request—whether it’s “play something calming” or “generate an upbeat lofi track.”
Speech-to-Music AI: The audio prompt is converted to text (via Hugging Face’s model) and passed into a Generative AI Music Model, which crafts a completely unique track based on your mood and request.
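The hand-off between the two models above can be sketched as a small routing step: once the speech-to-text model returns a transcript, the app turns it into a prompt for the generative music model. This is a minimal illustrative sketch, not Auracle's actual code—the keyword table and function name are our assumptions.

```python
# Sketch of the request-handling step. We assume the transcript arrives as
# plain text from a Hugging Face speech-recognition model; the keyword-to-style
# table below is illustrative, not Auracle's actual mapping.

MOOD_KEYWORDS = {
    "calming": "calm ambient",
    "upbeat": "upbeat",
    "lofi": "lofi hip hop",
    "energetic": "high-energy electronic",
}

def build_music_prompt(transcript: str) -> str:
    """Turn a spoken request into a text prompt for a generative music model."""
    words = transcript.lower()
    styles = [style for key, style in MOOD_KEYWORDS.items() if key in words]
    if not styles:
        styles = ["relaxing background"]  # fallback mood when nothing matches
    return "Instrumental track, " + ", ".join(styles)

print(build_music_prompt("generate an upbeat lofi track"))
# → Instrumental track, upbeat, lofi hip hop
```

The resulting prompt string would then be fed to the music-generation model in place of a raw transcript, which keeps the generator's input consistent even when users phrase requests differently.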
The Gloves: Featuring an accelerometer-gyroscope sensor, the gloves let you interact with the music effortlessly:
Tilt Right: Regenerate the music.
Tilt Left: Stop the track.
Tilt Up: Increase volume.
Tilt Down: Decrease volume.
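The four tilt gestures above amount to thresholding the glove's pitch and roll angles, which can be derived from the accelerometer's gravity vector. The sketch below is a hypothetical version of that mapping—the threshold value and function name are our assumptions, not Auracle's firmware.

```python
import math

# Hypothetical gesture mapping for the glove: a raw accelerometer sample
# (in g) is reduced to pitch/roll angles, then thresholded into one of the
# four tilt gestures. The 30-degree threshold is an assumed tuning value.

TILT_THRESHOLD_DEG = 30.0

def accel_to_gesture(ax: float, ay: float, az: float):
    """Map one accelerometer sample to a gesture name, or None if level."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    if roll > TILT_THRESHOLD_DEG:
        return "regenerate"   # tilt right: regenerate the music
    if roll < -TILT_THRESHOLD_DEG:
        return "stop"         # tilt left: stop the track
    if pitch > TILT_THRESHOLD_DEG:
        return "volume_up"    # tilt up: increase volume
    if pitch < -TILT_THRESHOLD_DEG:
        return "volume_down"  # tilt down: decrease volume
    return None               # hand roughly level: no action
```

Checking roll before pitch gives left/right tilts priority when a sample is ambiguous; in practice a debounce (requiring the same gesture over several consecutive samples) would prevent accidental triggers.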
The Experience: The generated music plays through the built-in speaker in the glasses, ensuring a seamless, immersive experience—no phone needed.
Online User Interface/Full Stack App: The user interface is built with Streamlit, while the backend is handled by Firebase. User data—including login information and listening mood—is stored via the interface, and users also submit their voice messages through the website, where they are processed by the application.
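The per-user data described above could be shaped roughly as follows. This is a sketch under assumptions: the field names are ours (the actual schema isn't shown), and in the real app this dict would be written with a Firebase client rather than returned in memory.

```python
from datetime import datetime, timezone

# Illustrative shape of the per-user record the app keeps in Firebase.
# Field names are assumptions; in production this dict would be written
# via a Firebase client library instead of being returned in memory.

def build_user_record(user_id: str, email: str, mood: str) -> dict:
    """Assemble the login/mood record stored for one user."""
    if not user_id or not email:
        raise ValueError("user_id and email are required")
    return {
        "user_id": user_id,
        "email": email,
        "listening_mood": mood,  # the most recent mood requested by voice
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }
```

Keeping the record-building logic in one pure function like this also makes the storage layer easy to swap or mock while the hardware side of the project is still being simulated.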
Challenges we ran into
Like any ambitious project, Auracle faced obstacles:
1) ESP32 Module Unavailability
We initially planned to use an ESP32 module to transfer data between our recorder and speaker. However, due to hardware constraints, we had to pivot to an ESP32-CAM module, which turned out to be defective. Despite reaching out to tech support, the issue remained unresolved, leading us to simulate sensor data instead.
2) Speaker but No Recorder
While we acquired a speaker module, we couldn’t get a compatible voice recorder in time. As a result, we're currently simulating voice input via a laptop microphone.
3) Stretch Sensor Shortages
Our initial design included stretch sensors for finer gesture detection, but due to stock shortages, we adapted by relying on gyroscope-based gestures instead. Despite these setbacks, Auracle is functional—and more importantly, it lays the groundwork for an entirely new way of interacting with music.
Accomplishments that we're proud of
We're incredibly proud that both the full-stack application and the machine learning model work well. However, we would like to improve the application's ability to communicate internally and transfer data between its components.
What we learned
We learned adaptability and creativity through this challenge. We experienced a lot of setbacks and were forced to overcome every single one of them.
What's next for Auracle
Our next step is to properly integrate all of the different programs so they can communicate with one another reliably.
