Inspiration
Not everybody gets access to mental healthcare, and more often than not the stigma around it is to blame. Without anybody to open up to, many people turn to writing in a journal: a log of their everyday experiences, feelings, and thoughts. For HackSC 2020, we asked ourselves, "What if a mental healthcare provider had access to data extracted from these entries?" It would make therapy far more effective by giving therapists a window into a person's truest, most genuine thoughts. This idea gave birth to sentiment.io.
What it does
sentiment.io simplifies two tasks for anybody who wishes to open up about their mental health. First, it creates a chronologically ordered, easily readable log for the user, a "virtual journal" they can use seamlessly. Second, it allows a therapist to monitor the user's general emotional wellbeing through data derived from those logs. sentiment.io uses natural language processing to classify the mood of each entry as positive, negative, or neutral, and plots the user's happiness index over time. The therapist can view this plot to understand the range of feelings the user experiences day to day. The logs themselves are highly personal, so we never share them with the therapist; only the aggregate scores are visible.
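As a rough sketch of how the happiness index can be computed from classified entries (the type and field names below are illustrative, not our exact code):

```swift
import Foundation

// Map each classified log entry to a numeric mood score, then average
// per day to produce the happiness-index points that get plotted.
// (Illustrative sketch: names and types are hypothetical.)
enum Mood: Double {
    case negative = -1.0
    case neutral = 0.0
    case positive = 1.0
}

struct LogEntry {
    let date: Date
    let mood: Mood
}

func happinessIndex(for entries: [LogEntry]) -> [(day: Date, score: Double)] {
    let calendar = Calendar.current
    // Group entries by calendar day, then average the mood scores per day.
    let byDay = Dictionary(grouping: entries) { calendar.startOfDay(for: $0.date) }
    return byDay.map { day, dayEntries in
        let total = dayEntries.reduce(0.0) { $0 + $1.mood.rawValue }
        return (day: day, score: total / Double(dayEntries.count))
    }
    .sorted { $0.day < $1.day }
}
```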
To make entering a log as frictionless as possible, the user can type it out directly. Alternatively, if they are more comfortable writing by hand, they can take a picture of a written log and we convert the handwriting into digital text for storage. We implement this with the Google Cloud Vision API, which transcribes written text with fairly high accuracy.
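Under the hood, the OCR step looks roughly like this (a sketch using the Firebase ML Kit Vision API as it existed at the time; the function name is ours and error handling is trimmed):

```swift
import UIKit
import FirebaseMLVision

// Recognize the text in a photo of a handwritten journal page and hand the
// transcription back as a plain string, ready to be stored as a log.
func transcribe(_ image: UIImage, completion: @escaping (String?) -> Void) {
    // The cloud recognizer is backed by Google Cloud Vision and handles
    // handwriting far better than the on-device model.
    let recognizer = Vision.vision().cloudTextRecognizer()
    let visionImage = VisionImage(image: image)
    recognizer.process(visionImage) { result, error in
        guard error == nil, let result = result else {
            completion(nil)
            return
        }
        completion(result.text)
    }
}
```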
How we built it
We used Swift and Xcode to build an iOS app. Through CocoaPods we pulled in five dependencies: iOS-Charts to graph the output of our NLP model, a keyboard manager, Firebase Authentication, Firebase Firestore, and Firebase ML. We leaned heavily on object-oriented design to keep the code cleanly factored.
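For context, the Podfile looked roughly like this (a sketch: the target name is made up, and since the post doesn't name the keyboard-manager pod, IQKeyboardManagerSwift below is an assumption):

```ruby
platform :ios, '13.0'
use_frameworks!

target 'SentimentIO' do           # hypothetical target name
  pod 'Charts'                    # iOS-Charts, for the happiness-index plot
  pod 'IQKeyboardManagerSwift'    # keyboard manager (assumed pod)
  pod 'Firebase/Auth'             # user and therapist login
  pod 'Firebase/Firestore'        # log storage
  pod 'Firebase/MLVision'         # text extraction from photos
end
```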
We use Firebase Authentication to securely register and log in users and therapists. After logging in, the user lands on a screen where they can either view older logs or add a new one. Logs are stored in Firebase Firestore and retrieved and displayed in chronological order. When adding a new log, the user can type it in or take a picture of a written one; in the latter case, Firebase's ML Kit and the Cloud Vision API detect and extract the text from the image and save it as the log. Finally, the user can view a plot of their average emotion across previous posts to see how their feelings vary over time. This is where our self-trained natural language processing model comes in: it parses each log and assigns a score between -1 and 1 for how negative or positive it is. A therapist who knows the user's username can view this plot, and only this plot.
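The chronological journal view boils down to a single ordered Firestore query (a sketch; the collection path and field names are illustrative, not our exact schema):

```swift
import FirebaseFirestore

// Fetch a user's logs oldest-first for the "virtual journal" screen.
func fetchLogs(for userID: String, completion: @escaping ([String]) -> Void) {
    let db = Firestore.firestore()
    db.collection("users").document(userID).collection("logs")
        .order(by: "timestamp") // chronological order
        .getDocuments { snapshot, error in
            guard error == nil, let documents = snapshot?.documents else {
                completion([])
                return
            }
            completion(documents.compactMap { $0.data()["text"] as? String })
        }
}
```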
Challenges we ran into
One of the major challenges was learning how to set up a proper environment to start the project. Since this was our first time writing machine learning code, it took longer than expected. Understanding which software to use for our NLP model was a hurdle in itself. We trained the model on only a few thousand fairly basic, unsophisticated examples; despite these odds, we reached 77% accuracy.
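For illustration, here is roughly what training such a model looks like with Apple's Create ML (an assumption: the post doesn't say which toolkit we used, and the file path and column names below are hypothetical):

```swift
import CreateML
import Foundation

// Train a three-way (positive/negative/neutral) text classifier from a CSV
// of labeled sentences, then measure accuracy on a held-out split.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "sentiment_train.csv"))
let (training, testing) = data.randomSplit(by: 0.8, seed: 5)

let classifier = try MLTextClassifier(trainingData: training,
                                      textColumn: "text",
                                      labelColumn: "label")

let metrics = classifier.evaluation(on: testing,
                                    textColumn: "text",
                                    labelColumn: "label")
print("held-out accuracy:", 1.0 - metrics.classificationError)

// Export the model for bundling into the iOS app.
try classifier.write(to: URL(fileURLWithPath: "SentimentClassifier.mlmodel"))
```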
Accomplishments
Seeing our project bloom into what it is now was by far the most rewarding part of all. We are proud to have dipped our toes into ML and AI, not just through Google Cloud but also by designing our very own ML model without any prior experience in the field. It shows how determination and grit can sometimes trump experience!
What's next for sentiment.io
While we learned and implemented a lot in the 36-hour span of the hackathon, there are many possible improvements for sentiment.io. The most prominent would be a tone analyzer, such as IBM's Watson Tone Analyzer, which can identify the specific emotion a user is experiencing with reasonably high accuracy. Another would be encrypting the data the user provides and ensuring a therapist cannot misuse it (for example, by blocking screenshots). We could also add sign language detection, either with Google's Cloud Vision video analysis or with our own object recognition model, so that users with disabilities can enter their logs in whatever way feels most natural to them.
Built With
- cocoapods
- firebase
- firestore
- google-cloud-vision
- natural-language-processing
- swift
- xcode