Inspiration
Alcohol slows brain function, impairs judgment, and reduces coordination. Unfortunately, more than 30 people a day die in drunk-driving accidents. We believe this can be prevented, and it's our job to try to make an impact. Many of these accidents involve college students, and as college students ourselves, we believe it's important to practice safe habits and look out for everyone around us.
What it does
Using computer vision, speech analysis, and performance metrics, the app gives an estimate of possible impairment along with advice on how to stay safe and even recover. We built three main tests modeled on the field sobriety tests administered by police officers. The first tracks the user's eyes as they follow an object, checking for involuntary jerks at certain angles and abnormal accelerations. The second observes the user walking heel-to-toe and rates balance and coordination. The third checks the user's speech for slurring. Finally, we combine the three scores, weighting each test by how well it reflects impairment in real life, to produce an overall estimate.
How we built it
On the front end, we built the UI with React and the popular open-source component library shadcn/ui, styled with TailwindCSS for a fast and efficient development process. For our main computer vision components, we used MediaPipe to track the user's irises as well as body landmarks. For the eye test, we computed iris velocity as the change in iris position over time, then took the standard deviation of that velocity to quantify "jerkiness" (see the sketch below). For the walking test, we measured the distance between the heel and toe landmarks to verify the heel-to-toe procedure was followed, and we also checked shoulder alignment and arm distance to score balance. All of this data was serialized as JSON and streamed over a WebSocket connection to the Python logic.
Challenges we ran into
Our major issues came with tracking the live JSON data and assessing users for tests 1 and 2. Beyond this, once we had written the necessary Python files, it was challenging to pair them with the React frontend, and the webcam gave us the biggest issues. We had to carefully design the workflow so that all of the Python and TypeScript files worked together through WebSockets, FastAPI, and our pipeline for real-time communication between the detection models in Python and the interactive React interface.
Accomplishments that we're proud of
We are very proud of our body-tracking pipeline and of handling its many edge cases, such as making sure an ordinary step isn't mistakenly counted against the user.
What we learned
First, we had to learn how to use Git and GitHub effectively. We were new to both, so they took some getting used to, but they are great tools, especially when working in a team. Next, we learned MediaPipe, a Google library that tracks body parts and joints. It was very effective in our project, and we can see its use cases beyond our sobriety test. We also learned that we cannot fully rely on AI; we have to stay in control every step of the way. Overall, it was a great experience, and we loved connecting with everyone here.
What's next for CognitiveAI
This was a fun and insightful project for our team. We want to refine CognitiveAI by polishing the UI and by tracking joints more precisely, so that our accuracy is as high as possible. We also want to test it in the real world.
