Inspiration
Our inspiration for Queue-1-1 (Q11) came from witnessing the immense strain on small-town 911 call centers, particularly during high-traffic emergencies. Long wait times for critical calls in smaller communities can be life-threatening, and we wanted to create a system that helps prioritize those calls and optimize response times. We also realized that large cities suffer from similar issues, which further motivated us to find a solution.
What it does
Queue-1-1 is an intelligent triage system designed to assist 911 responders by prioritizing emergency calls during periods of high volume. Leveraging machine learning and AI, the system analyzes live call recordings to extract key information about each emergency and assigns a priority score, allowing dispatchers to focus on the most critical situations during heavy traffic. Q11 also integrates cloud telephony to play automated prompts, record calls, and notify callers that their information has been captured and will be addressed as soon as possible. Because we wanted 911 operators to see all the essential details of incoming calls on a straightforward interface, we built a dashboard that presents an organized triage queue: each entry shows the time of the call, the address, caller details, and a brief description of the emergency. The dashboard also groups related incidents during large-scale crises, making it easier for operators to manage high call volumes.
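To make the triage data concrete, here is a minimal sketch of the kind of record the dashboard works with; the field names and priority scale are illustrative assumptions rather than our exact schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class CallRecord:
    """One triaged entry as it might appear on the Q11 dashboard (illustrative fields)."""
    call_time: datetime                   # when the call came in
    caller_name: str                      # caller details captured from the prompt
    address: str                          # location given by the caller
    description: str                      # condensed summary of the emergency
    priority: int                         # model-assigned priority (e.g. 1 = most urgent)
    incident_group: Optional[str] = None  # id shared by related calls about the same incident


# Example entry in the triage queue
record = CallRecord(
    call_time=datetime.now(),
    caller_name="Jane Doe",
    address="42 Main St",
    description="Kitchen fire spreading to second floor",
    priority=1,
)
```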
How we built it
We first used Plivo, a leading cloud telephony provider, to play an automated message to callers, prompting them to provide their name, address, and a description of their emergency. Plivo then recorded the call, and we used OpenAI's Whisper API to transcribe the audio. Once we had a text transcription of the call, we condensed it to extract the key information using OpenAI's GPT-3.5 Turbo.
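In rough terms, this stage looks like the sketch below: a Plivo XML response prompts and records the caller, and the recording is then transcribed with Whisper and condensed with GPT-3.5 Turbo. The callback URL, prompt wording, and helper names are simplified assumptions, not our exact production code.

```python
from plivo import plivoxml
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_call() -> str:
    """Return Plivo XML that greets the caller and records their message."""
    response = plivoxml.ResponseElement()
    response.add(plivoxml.SpeakElement(
        'All operators are currently busy. Please state your name, address, '
        'and a brief description of your emergency after the tone.'))
    response.add(plivoxml.RecordElement(
        action='https://example.com/recording-ready',  # assumed callback URL
        max_length=120,
        finish_on_key='#'))
    return response.to_string()


def transcribe_and_condense(recording_path: str) -> str:
    """Transcribe the recording with Whisper, then condense it with GPT-3.5 Turbo."""
    with open(recording_path, 'rb') as audio_file:
        transcript = client.audio.transcriptions.create(
            model='whisper-1', file=audio_file)
    completion = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=[
            {'role': 'system',
             'content': 'Extract the caller name, address, and a one-sentence '
                        'description of the emergency from this 911 call transcript.'},
            {'role': 'user', 'content': transcript.text},
        ])
    return completion.choices[0].message.content
```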
The condensed description was then run through a machine learning model that predicted the call's priority level. This involved extensive data preparation and feature engineering: we used Pandas for data manipulation, Scikit-learn's TF-IDF vectorizer to convert the call descriptions into numerical features, and a Random Forest Classifier for the classification itself. The Random Forest builds many decision trees, each trained on a subset of the data, and combines their votes to classify the call. The processed information was then securely stored in an Amazon Web Services S3 bucket, making it readily accessible to emergency personnel. Finally, we used Plivo again to call our callers back, informing them that their data had been captured successfully and that they would hear from an operator as soon as one became available.
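The training and storage steps follow roughly the pattern below, assuming the condensed descriptions and priority labels live in a CSV file; the column names, bucket name, and hyperparameters are placeholders rather than our exact configuration.

```python
import joblib
import boto3
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Assumed layout: a 'description' column with condensed call text and a 'priority' label
calls = pd.read_csv('call_data.csv')
X_train, X_test, y_train, y_test = train_test_split(
    calls['description'], calls['priority'], test_size=0.2, random_state=42)

# TF-IDF turns each description into a sparse numerical vector;
# the Random Forest combines votes from many decision trees to assign a priority.
model = Pipeline([
    ('tfidf', TfidfVectorizer(stop_words='english')),
    ('clf', RandomForestClassifier(n_estimators=100, random_state=42)),
])
model.fit(X_train, y_train)
print('held-out accuracy:', model.score(X_test, y_test))

# Persist the trained pipeline and push it to S3 (bucket name is a placeholder)
joblib.dump(model, 'priority_model.joblib')
boto3.client('s3').upload_file('priority_model.joblib', 'q11-bucket', 'priority_model.joblib')
```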
We used JavaScript, HTML, and CSS as the foundation of our website and drafted the prototype in Figma. Our code uses JavaScript arrays to store and display caller information, including the name, address, and time of each call, for the 911 dispatcher. We also implemented custom JavaScript functions to detect and group duplicate calls about the same incident, improving the system's efficiency.
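The dashboard itself is plain JavaScript, but the duplicate-grouping idea can be sketched independently of the front end. Here is a rough Python illustration, where the matching rule (same normalized address within a short time window) is an assumed heuristic rather than our exact logic.

```python
from datetime import timedelta


def group_duplicate_calls(records, window_minutes=15):
    """Group calls that share a normalized address within a short time window.

    `records` is a list of CallRecord-like objects with `address` and `call_time`
    attributes; the matching rule here is an illustrative heuristic.
    """
    groups = []
    for record in sorted(records, key=lambda r: r.call_time):
        address = record.address.strip().lower()
        for group in groups:
            leader = group[0]  # earliest call in the group
            same_place = leader.address.strip().lower() == address
            close_in_time = (record.call_time - leader.call_time
                             <= timedelta(minutes=window_minutes))
            if same_place and close_in_time:
                group.append(record)
                break
        else:
            groups.append([record])  # no match found: start a new incident group
    return groups
```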
Challenges we ran into
One of the most significant challenges we faced was working with voice providers. We spent hours trying different providers and ran into obstacles such as poor recording quality and paywalls, and integrating services like Plivo and AWS was not straightforward. This led us to dig into the specifics of their APIs to understand how they worked and how to get better results from them. Another challenge was optimizing the machine learning model to handle unstructured emergency data and ensure it provided accurate call prioritization.
Accomplishments that we're proud of
We're incredibly proud of integrating several complex technologies into one cohesive system. Building each of the pieces was one challenge; getting them to work together reliably in a single framework was a significant accomplishment. Initially, we worked with 2.9 million call entries, but with SMOTE (Synthetic Minority Over-sampling Technique) we expanded this to nearly 7 million, allowing for more accurate training of our model. As a result, our model achieved an impressive accuracy of 93.71%.
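The oversampling step looks roughly like the sketch below, using imbalanced-learn's SMOTE on TF-IDF features of the condensed descriptions; the file name and resampling settings are assumptions.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer

# Vectorize the condensed descriptions, then oversample the rare priority classes
calls = pd.read_csv('call_data.csv')  # assumed 'description' and 'priority' columns
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(calls['description'])
y = calls['priority']

X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print(f'{X.shape[0]} original samples expanded to {X_resampled.shape[0]} after SMOTE')
```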
What we learned
This project gave us a deep understanding of telephony APIs, specifically how to integrate and troubleshoot Plivo alongside OpenAI's services. We also learned how to handle and preprocess large datasets for machine learning, particularly imbalanced emergency call data. Our team honed our skills in both front-end and back-end development, and we strengthened our ability to work under pressure and adapt to technical challenges.
What's next for Queue-1-1
We plan to expand Queue-1-1's functionality by incorporating more advanced speech recognition features and integrating location-based services to better identify nearby emergency responders. We also aim to improve the machine learning model's ability to prioritize calls based on historical data trends and to explore adding datasets from across North America. Additionally, we would like to build features for other kinds of crises. For instance, Q11 could track the number of available resources, such as fire trucks and ambulances, so operators can see when no responders are currently free to answer a call.
Built With
- amazon-web-services
- css
- figma
- gpt-3.5-turbo
- html
- javascript
- joblib
- machine-learning
- openai
- pandas
- pip
- plivo
- python
- randomforestclassifier
- scikit-learn
- smote
- tf-idf-vectorizer
- vercel
- whisper-api