In the above video, the user of the web app did not converse with the caller because of ambient noise; however, the web app does support bi-directional conversation.
Why we qualify for the Twilio, Cohere, and Domain.com sponsor awards
We qualify for the Twilio sponsor award because we got our phone number from Twilio and used TwiML extensively, for example the Stream verb to extract the caller's audio. We also used Twilio to speak to the caller from the browser. We qualify for the Cohere award because we used Cohere for sentiment analysis of the text transcribed from the Twilio call. We qualify for the Domain.com sponsor award because we registered the listenlink.tech domain; unfortunately, we were unable to get the domain verified and mapped to Google Cloud.
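As a sketch of the TwiML side: the voice webhook can answer with a `<Start><Stream>` verb that forks the raw call audio to a websocket. The websocket URL below is a placeholder, not our production endpoint.

```python
# Build the TwiML that tells Twilio to fork call audio to a websocket.
# The wss:// URL is a placeholder for the backend's stream endpoint.
import xml.etree.ElementTree as ET

def stream_twiml(ws_url: str) -> str:
    response = ET.Element("Response")
    start = ET.SubElement(response, "Start")
    ET.SubElement(start, "Stream", url=ws_url)
    # Keep the call alive after the stream starts.
    ET.SubElement(response, "Pause", length="60")
    return ET.tostring(response, encoding="unicode")
```

The Twilio helper library can generate the same XML, but building it by hand makes the verb structure explicit.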
Inspiration
Hundreds of people call suicide hotlines every day, and the professionals who answer these calls have a demanding job. With this application, we wanted to lighten their load in any way we could by giving them the information they need to understand exactly what the caller needs to hear.
What it does
This project helps the people who answer suicide hotlines identify the caller's emotions from their words. It acts as a supplement to, or a training assistant for, these professionals.
How we built it
We built the frontend with React and the backend with Flask. We used Twilio to create our custom hotline number (434 404 6284). Within the Flask app, we used the Python Vosk library to transcribe audio from the Twilio Stream into text. We hosted the frontend on Google Cloud Platform and the backend on Heroku, a deliberate split since the two are written in different languages. We used Cohere's natural language processing (NLP) to return classified predictions of the caller's emotion.
We deployed the React app with Docker plus Google Cloud Build and Cloud Run, and the Flask app with Gunicorn on Heroku.
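Twilio's Stream delivers 8 kHz G.711 μ-law audio as base64 payloads, which must be converted to 16-bit PCM before Vosk can transcribe it. A minimal sketch of that decoding step follows; the function names are ours, not the project's, and the Vosk lines are commented out because they require a downloaded model:

```python
import base64
import json

def ulaw2lin16(data: bytes) -> bytes:
    """Decode G.711 mu-law bytes to 16-bit little-endian PCM.
    (The stdlib audioop.ulaw2lin did this, but audioop was removed in Python 3.13.)"""
    out = bytearray()
    for b in data:
        b = ~b & 0xFF
        sign = b & 0x80
        # Standard ITU-T G.711 mu-law expansion.
        magnitude = ((((b & 0x0F) << 3) + 0x84) << ((b >> 4) & 0x07)) - 0x84
        sample = -magnitude if sign else magnitude
        out += sample.to_bytes(2, "little", signed=True)
    return bytes(out)

def media_frame_to_pcm(message: str) -> bytes:
    """Decode one Twilio Stream 'media' websocket message into PCM."""
    frame = json.loads(message)
    return ulaw2lin16(base64.b64decode(frame["media"]["payload"]))

# Feeding the PCM into Vosk would then look roughly like:
# from vosk import Model, KaldiRecognizer
# rec = KaldiRecognizer(Model("model"), 8000)   # Twilio streams at 8 kHz
# if rec.AcceptWaveform(pcm):
#     text = json.loads(rec.Result())["text"]
```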
Challenges we ran into
One of the main challenges was the small size of the dataset we used to train our machine learning model. We had limited reference material because hotline calls are not publicly recorded, and there is little research on how much depression any given phrase conveys. We ultimately decided to generate our own dataset, and as a result the model's predictions were sometimes inaccurate. We also encountered latency problems when receiving a response from the model in real time.
Accomplishments that we're proud of
Given that most of our team had never used these tools before, we are proud that we got this entire system up and running. Google Cloud was a particularly new experience for us, and we are grateful for the opportunity to learn how to use this cloud service.
What we learned
It was our first time working on an AI project, so we learned how to use Cohere to create our model and the Cohere API to access it from code. We also learned how to use Twilio, Google Cloud, and Flask. In addition, we learned how Vosk can serve as a speech-to-text service with little overhead. Finally, we learned how to use Python websockets for continuous communication between the Twilio webserver and our Flask app.
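The Cohere side amounts to a classify call over our generated examples plus picking the highest-confidence label from the prediction. The sketch below is illustrative: the labels and example phrases are placeholders rather than our real dataset, and the exact response shape varies across Cohere SDK versions, so the import is deferred into the function.

```python
def top_emotion(confidences: dict[str, float]) -> str:
    """Return the highest-confidence emotion label from a prediction."""
    return max(confidences, key=confidences.get)

def classify_utterance(text: str, api_key: str) -> str:
    """Classify one transcribed utterance with Cohere's classify endpoint.
    Sketch only: needs the cohere package, an API key, and a real example set."""
    import cohere  # deferred so top_emotion stays dependency-free
    co = cohere.Client(api_key)
    response = co.classify(
        inputs=[text],
        examples=[
            # Placeholder examples, not our actual generated dataset.
            cohere.ClassifyExample(text="I can't take this anymore", label="despair"),
            cohere.ClassifyExample(text="Thanks, that actually helps", label="hopeful"),
        ],
    )
    labels = response.classifications[0].labels  # label name -> confidence
    return top_emotion({name: lbl.confidence for name, lbl in labels.items()})
```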
What's next for ListenLink
We would expand our dataset to cover more emotions and add much more training data to improve prediction accuracy. We hope to combine the frontend and backend into a single deployment so that we no longer experience latency issues. We also hope to get the listenlink.tech domain working in the future.
The application could be scaled into a full-fledged organizational hotline service, with features such as call notes and caller tracking (call history and emotional tendencies over time).