Entertainment Category
Inspiration
MelodyMind was inspired by the universal language of music and its profound ability to connect with human emotions. We sought to create an interactive experience that helps users explore their feelings through personalized music recommendations, allowing them to find solace, motivation, or joy in melodies tailored to their emotional state.
What it does
Our platform detects user emotions based on the qualities of speech and tone. It provides tailored music recommendations that align with their emotional landscape. Using Hume AI, the application is able to capture up to 48 emotions for each word spoken, ranking the top emotions by their percentages. This data is then used to create a more personalized listening experience.
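The ranking step described above can be sketched roughly as follows. The `{ name, score }` shape is an assumption based on Hume's expression-measurement output, and `topEmotions` is a hypothetical helper name, not our exact code:

```javascript
// Sketch: rank Hume-style emotion scores for one spoken word.
// Input shape ({ name, score } pairs) is an assumption; field
// names in the real Hume response may differ.
function topEmotions(emotionScores, n = 4) {
  return [...emotionScores]                      // copy so we don't mutate the input
    .sort((a, b) => b.score - a.score)           // highest score first
    .slice(0, n)                                 // keep the top n
    .map(({ name, score }) => ({
      name,
      percent: Math.round(score * 100),          // express as a percentage for display
    }));
}

// Example with three of the 48 emotions:
const sample = [
  { name: "Joy", score: 0.62 },
  { name: "Calmness", score: 0.21 },
  { name: "Excitement", score: 0.55 },
];
console.log(topEmotions(sample, 2)); // → Joy (62%), then Excitement (55%)
```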
How we built it
We developed MelodyMind by integrating Hume AI to analyze vocal input and identify emotional cues. The captured data is stored in a structured format, maintaining a record of each message along with the top four emotions and their confidence scores. As the conversation progresses, we keep a running tally of the emotion percentages, ultimately capturing the top four emotions at the end. We then map these emotions to corresponding colors for visual representation. Finally, we use the Spotify API to suggest tracks that resonate with the user's identified mood, allowing users to connect their emotions to music.
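A minimal sketch of that aggregation and color-mapping step, assuming per-message emotion scores in a `{ name, score }` shape; the emotion-to-color table here is illustrative, not our actual palette:

```javascript
// Illustrative emotion-to-color table (not the real palette).
const EMOTION_COLORS = {
  Joy: "#FFD166",
  Sadness: "#118AB2",
  Anger: "#EF476F",
  Calmness: "#06D6A0",
};

// Sketch: accumulate scores per emotion across the whole conversation,
// then return the top four emotions with their display colors.
function aggregateEmotions(messages) {
  // messages: array of { emotions: [{ name, score }, ...] }
  const totals = {};
  for (const msg of messages) {
    for (const { name, score } of msg.emotions) {
      totals[name] = (totals[name] ?? 0) + score;
    }
  }
  return Object.entries(totals)
    .sort(([, a], [, b]) => b - a)   // highest running total first
    .slice(0, 4)                     // keep the top four
    .map(([name, total]) => ({
      name,
      total,
      color: EMOTION_COLORS[name] ?? "#CCCCCC", // fallback for unmapped emotions
    }));
}
```

Keeping running totals per emotion (rather than re-scanning all messages at the end) means the top-four summary is cheap to compute whenever the conversation ends.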
Challenges we ran into
One of the most significant challenges we faced was integrating Hume AI, as it returns 48 emotions for each phrase or word spoken. We had to develop a robust algorithm to parse this data, store the emotions along with their confidence scores, and accurately calculate and retrieve the top four emotions. Another hurdle was learning Next.js while integrating the Spotify API to make targeted requests based on user emotions. On the frontend, we ran into difficulties integrating Three.js to enhance the application's visual appeal and interactivity, which added complexity to our development process.
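As a rough illustration of those targeted Spotify requests: the `seed_genres`, `target_valence`, and `target_energy` query parameters come from Spotify's Web API recommendations endpoint, but the emotion-to-feature table and the `buildRecommendationsUrl` helper below are our illustrative assumptions, not the exact production code:

```javascript
// Illustrative mapping from a dominant emotion to Spotify tuning
// parameters (valence ≈ musical positivity, energy ≈ intensity).
const EMOTION_FEATURES = {
  Joy:      { seed_genres: "pop",      target_valence: 0.9, target_energy: 0.8 },
  Sadness:  { seed_genres: "acoustic", target_valence: 0.2, target_energy: 0.3 },
  Anger:    { seed_genres: "rock",     target_valence: 0.3, target_energy: 0.9 },
  Calmness: { seed_genres: "ambient",  target_valence: 0.6, target_energy: 0.2 },
};

// Sketch: build a recommendations request URL for a detected emotion.
// The resulting URL would be fetched with a Spotify OAuth bearer token.
function buildRecommendationsUrl(emotion, limit = 10) {
  const features = EMOTION_FEATURES[emotion] ?? EMOTION_FEATURES.Calmness;
  const params = new URLSearchParams({ limit: String(limit), ...features });
  return `https://api.spotify.com/v1/recommendations?${params}`;
}
```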
Accomplishments that we're proud of
We successfully created a prototype that can accurately identify emotions from vocal tones using Hume AI, which has proven effective in ranking emotions by their respective percentages. The user interface is intuitive, well designed, and responsive. We're excited to demonstrate the smooth functionality of our project.
What we learned
Throughout the development process, we learned the importance of user feedback in refining our algorithms and user interface. Collaborating as a team helped us identify strengths and weaknesses in our approach, leading to innovative solutions. We also gained insights into the complexities of emotion recognition and the significance of creating a secure and user-friendly experience.
What's next for MelodyMind
Moving forward, we plan to enhance emotion detection by incorporating more diverse datasets to improve accuracy. We aim to expand our music library and collaborate with artists to provide exclusive content. Additionally, we want to explore integrating MelodyMind with wearable technology to offer real-time emotional feedback and music suggestions, creating an even more immersive experience for our users. We would also like to deepen our Spotify integration, adding the ability to save songs to a user's account and an embedded in-browser player.
Built With
- api
- css
- html
- hume
- javascript
- next.js
- react.js
- spotify
