Inspiration

We wanted to create an application that helps nonverbal children and children with autism communicate their emotions through music. It acts as a form of music therapy and is our way of showing support and solidarity for children who want to express themselves but cannot.

What it does

Emomi is an application that helps nonverbal children express their emotions through music. We chose the piano to represent specific emotions, and a blob-like friend shows up to help children pick the one they're feeling. Every time a child clicks on a note/emotion, their input is stored under the hood so that parents can track their child's emotions over time. There's also a speech-to-text feature that lets the child verbalize their emotion in short sentences, which are visualized in a way that's easy for parents to follow.
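To make the tracking idea concrete, here's a minimal Flask sketch of how note/emotion clicks could be logged and read back. The route names, JSON fields, and SQLite schema are hypothetical illustrations, not Emomi's actual code:

```python
# Hypothetical sketch of logging note/emotion clicks for parent tracking.
import sqlite3
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "emotions.db"  # placeholder local store


def init_db():
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS emotion_events "
            "(note TEXT, emotion TEXT, logged_at TEXT)"
        )


@app.route("/log-emotion", methods=["POST"])
def log_emotion():
    # The piano UI would POST e.g. {"note": "C4", "emotion": "happy"} on each click.
    data = request.get_json()
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "INSERT INTO emotion_events VALUES (?, ?, ?)",
            (data["note"], data["emotion"], datetime.now(timezone.utc).isoformat()),
        )
    return jsonify(status="ok")


@app.route("/history")
def history():
    # Parents fetch the stored events to review their child's emotions over time.
    with sqlite3.connect(DB_PATH) as conn:
        rows = conn.execute(
            "SELECT note, emotion, logged_at FROM emotion_events"
        ).fetchall()
    return jsonify(events=rows)


if __name__ == "__main__":
    init_db()
    app.run(debug=True)
```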

How we built it

We built the UI with vanilla JS, HTML, and CSS, used Flask and Python for the skeleton of the app, and used Keras and TensorFlow for the speech-to-text and sentiment analysis features.

Challenges we ran into

We had never used TensorFlow or Keras before, and we used them to build both a sentiment analysis model and a speech-to-text model for our app. It took a lot of time and trial and error.
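As a rough picture of the kind of model we were wrestling with, here's a minimal Keras sentiment classifier over short sentences. The label set, vocabulary size, and toy training data are placeholders, not our actual model:

```python
# Hypothetical Keras sentiment classifier for short transcribed sentences.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

EMOTIONS = ["happy", "sad", "angry", "calm"]  # illustrative label set

# Toy data; a real model needs a properly labeled corpus.
texts = np.array([
    "i feel happy today",
    "i am sad",
    "that makes me angry",
    "i feel calm",
])
labels = np.array([0, 1, 2, 3])

# Turn raw strings into fixed-length integer sequences.
vectorizer = layers.TextVectorization(max_tokens=5000, output_sequence_length=16)
vectorizer.adapt(texts)

model = keras.Sequential([
    vectorizer,
    layers.Embedding(input_dim=5000, output_dim=32),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(texts, labels, epochs=10, verbose=0)

# Predict the emotion of a new sentence.
pred = model.predict(np.array(["i am so happy"]))
print(EMOTIONS[int(pred.argmax())])
```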

Accomplishments that we're proud of

We're proud of making the music and the illustrations ourselves.

What we learned

How to convert speech to text, and how to make programming enjoyable while having a positive impact on others.
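To show the shape of that speech-to-text step, here's a small stand-in using the off-the-shelf SpeechRecognition package rather than the Keras/TensorFlow model we actually trained:

```python
# Stand-in for the speech-to-text step using the SpeechRecognition package
# (pip install SpeechRecognition pyaudio); Emomi's own pipeline used a
# Keras/TensorFlow model instead.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Say how you feel...")
    audio = recognizer.listen(source)  # capture one utterance

try:
    # Google's free web API transcribes the utterance to text.
    sentence = recognizer.recognize_google(audio)
    print("Heard:", sentence)  # this text would feed the sentiment model
except sr.UnknownValueError:
    print("Could not understand the audio.")
```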

What's next for emomi

A better UI and mobile responsiveness.

Built With

css, flask, html, javascript, keras, python, tensorflow