Inspiration
In the realm of life's unexpected challenges, we often have little room to prepare. Be it the high stakes of a job interview or the butterflies of a first date, these scenarios demand our best without prior rehearsal. Enter VerbalQuest: our platform where users engage in interactive 'battles' within simulated scenarios. Equipped with AI-powered analytics, VerbalQuest offers personalized feedback, turning every interaction into a learning opportunity. As users overcome challenges, they accumulate experience points that reflect their best-demonstrated qualities, enriching their profiles and showcasing their growth. VerbalQuest isn't just preparation; it's the evolution of your social prowess.
What it does
The platform uses machine learning to capture the qualities of social interaction. With computer vision we track eye movements, blinking, and facial emotion; with audio machine learning we analyze speech patterns for expressive tone. Together, these signals let us gauge the user's interaction attributes: attentiveness, confidence, empathy, sociability, and expressiveness. We then rate those attributes against the specific scenario the user chose. We gamified the experience with a scoring system that lets users track their improvement over time and get instant gratification for growing. We also use GPT to give users customized feedback that helps them grow! Users can sign up to our platform to save their progress and consistently grow over time!
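The attribute-to-score pipeline above can be sketched roughly as follows. The five attribute names come from this write-up, but the context weights, scenario names, and XP formula are illustrative assumptions, not the platform's actual scoring logic:

```python
# Illustrative VerbalQuest-style scoring sketch; all weights are assumptions.

CONTEXT_WEIGHTS = {
    # Hypothetical per-scenario emphasis; real weights would be tuned.
    "job_interview": {"attentiveness": 0.3, "confidence": 0.3, "empathy": 0.1,
                      "sociability": 0.1, "expressiveness": 0.2},
    "first_date":    {"attentiveness": 0.2, "confidence": 0.2, "empathy": 0.3,
                      "sociability": 0.2, "expressiveness": 0.1},
}

def session_score(attributes: dict, context: str) -> float:
    """Weight each 0-100 attribute score by the chosen scenario context."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(attributes[name] * w for name, w in weights.items())

def experience_points(score: float, best_so_far: float) -> int:
    """Award XP for the session, with a bonus when the user beats
    their previous best (reflecting best-demonstrated qualities)."""
    base = int(score)
    bonus = int(max(0.0, score - best_so_far) * 2)  # illustrative bonus rule
    return base + bonus
```

Because each context's weights sum to 1, a session score stays on the same 0-100 scale as the individual attributes.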
How we built it
We built the front end in React, handled authentication and storage with Firebase/Firestore, and trained speech-emotion models using neural networks in TensorFlow. We also used the GPT API to generate scenario prompts.
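At inference time, a trained speech-emotion classifier boils down to layered matrix operations ending in a softmax over emotion labels. The toy forward pass below (pure Python, not the actual TensorFlow model; the label set is an example) shows the shape of that final step:

```python
import math

EMOTIONS = ["calm", "happy", "sad", "angry"]  # example labels, not the real set

def dense(x, weights, biases):
    """One fully connected layer: y_j = sum_i x_i * W[i][j] + b_j.
    `weights` is a list of rows, one per input feature."""
    return [sum(xi * w for xi, w in zip(x, col)) + b
            for col, b in zip(zip(*weights), biases)]

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights, biases):
    """Toy inference: features -> dense layer -> softmax -> (label, prob)."""
    probs = softmax(dense(features, weights, biases))
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]
```

A real model would stack several such layers (plus audio feature extraction like spectrograms or MFCCs) and learn the weights via backpropagation, which is what TensorFlow handles.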
Challenges we ran into
We ran into challenges such as:
- Long ML model training time followed by more training time.
- Difficulty connecting so many moving pieces in such a short time
Accomplishments that we're proud of
- Exploring an emerging area of machine learning: Speech Emotion Recognition.
- Incorporating multiple applications of computer vision into our project: blink detection, eye tracking, and facial emotion detection.
- Planning and building a project with potential real-world applications!
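For the blink-detection piece, a standard approach (similar in spirit to what we built, though our exact implementation may differ) is the eye aspect ratio (EAR): the ratio of vertical to horizontal distances between six eye landmarks drops sharply when the eye closes.

```python
import math

def eye_aspect_ratio(eye):
    """EAR for six eye landmarks p1..p6 given as (x, y) tuples:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    An open eye sits roughly around 0.25-0.35; a closed eye drops near 0."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

def is_blink_frame(eye, threshold=0.2):
    """Flag a frame as part of a blink when EAR falls below an
    empirically chosen threshold (0.2 here is illustrative)."""
    return eye_aspect_ratio(eye) < threshold
```

In practice the landmarks would come from a facial-landmark detector, and a blink is counted when EAR stays below the threshold for a few consecutive frames.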
What we learned
- How to decide on a plan quickly and be biased toward action
- About the challenges and the potential of audio-based machine learning
- How math-heavy computer vision and data processing can be
What's next for VerbalQuest
- We are a hopeful team, and we want to spend time fleshing out features and releasing a prototype! Some next steps would be analyzing the session text and identifying emotion fluctuations over the course of a session. Immediate features that come to mind include a fully fleshed-out stats system, the ability to export/share your profile on social media, and a more robust model for speech emotion recognition.