Inspiration
Spotify's great API seemed like a fun opportunity to build something our team could use in our daily lives. After a little research, we decided we'd like to use facial recognition to detect our mood and play music that matches it.
What it does
Currently our app is in two parts. The front end has the user authenticate with their own Spotify account, then asks for a picture to process for emotions.
The second half searches Spotify playlists for ones that fit different moods and starts playing a match for you.
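The mood-matching half can be sketched roughly like this. This is a minimal illustration, not our actual code: the emotion labels, the emotion-to-keyword mapping, and the playlist data below are all placeholder assumptions standing in for the ParallelDots response and Spotify search results.

```python
# Sketch: pick the dominant emotion from a face-analysis result
# (a ParallelDots-style dict of confidence scores) and find a
# playlist whose name suggests a matching mood.
# All names, mappings, and data here are illustrative assumptions.

# Hypothetical mapping from detected emotions to search keywords
EMOTION_TO_MOOD = {
    "happy": "happy hits",
    "sad": "rainy day",
    "angry": "rage",
    "neutral": "chill",
}

def dominant_emotion(scores):
    """Return the emotion with the highest confidence score."""
    return max(scores, key=scores.get)

def pick_playlist(emotion, playlists):
    """Return the first playlist whose name contains the mood keyword."""
    keyword = EMOTION_TO_MOOD.get(emotion, "chill")
    for playlist in playlists:
        if keyword in playlist["name"].lower():
            return playlist
    return None

# Fake data standing in for real Spotify search results
scores = {"happy": 0.7, "sad": 0.1, "angry": 0.1, "neutral": 0.1}
playlists = [
    {"name": "Rainy Day Jazz", "uri": "spotify:playlist:fake1"},
    {"name": "Happy Hits!", "uri": "spotify:playlist:fake2"},
]
mood = dominant_emotion(scores)         # "happy"
match = pick_playlist(mood, playlists)  # the "Happy Hits!" entry
```

In the real app the scores would come from the emotion API and the playlists from a Spotify search, but the selection logic is this simple.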
How we built it
We started from the ground up, initially with a hello world on Android (our first Android app ever). Simultaneously, team members got the Spotify and ParallelDots APIs working for the music and the emotion recognition. Slowly we worked toward a middle ground where everything could run together.
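On the Spotify side, the first step is the OAuth authorization request that sends the user to Spotify's login page. A minimal sketch of building that URL, assuming the Authorization Code flow (the client ID and redirect URI below are placeholders, and the scope list is just an example):

```python
from urllib.parse import urlencode

# Spotify's authorization endpoint (Authorization Code flow).
AUTH_URL = "https://accounts.spotify.com/authorize"

def build_auth_url(client_id, redirect_uri, scopes):
    """Build the URL the app opens so the user can log in to Spotify."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return f"{AUTH_URL}?{urlencode(params)}"

# Placeholder values -- real ones come from the Spotify developer dashboard
url = build_auth_url(
    "YOUR_CLIENT_ID",
    "spotimood://callback",
    ["user-modify-playback-state", "playlist-read-private"],
)
```

After the user approves, Spotify redirects back with a code that the app exchanges for an access token used on all later playlist and playback calls.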
Challenges we ran into
Unfortunately we didn't have time to get everything working harmoniously, but we do have the emotion processing built into the Android app, as well as the front end for Spotify authentication. We didn't realize what a task it is to get Python code running in an Android environment, and we attempted to convert everything to Java with limited success in the time remaining.
Accomplishments that we're proud of
We gained a ton of experience dealing with Android Studio, and had fun playing with the different APIs.
What we learned
Android Studio
Python is hard to run on Android
Constant communication and some solid planning up front go a long way
What's next for SpotiMood
We're going to finish the Spotify code for processing playlists in Java and get these parts all working together.