Inspiration

Long-distance driving, especially at night, is inherently risky due to driver fatigue. Traditional solutions often rely on alarms that jolt drivers awake after drowsiness has set in, which can be dangerous. We were inspired to create a proactive system that addresses this critical safety issue. Our goal was to build an intelligent co-pilot that could detect early signs of fatigue and, more importantly, actively engage the driver's mind to prevent drowsiness before it becomes a threat. We envisioned a companion, not just an alarm, for solo drivers.

What it does

"ALEx!" – An AI-powered co-driver assistant for long-distance drivers that prevents drowsy driving before it's too late. Using a Raspberry Pi with a camera, it detects early signs of fatigue like yawning or nodding, then engages the driver in active conversation and cognitive activities using Gemini-powered AI. From personalized small talk to brain games, storytelling, and even number plate puzzles, it adapts in real-time to keep drivers alert, reducing the risk of accidents. Think of it as an intelligent, proactive co-pilot for solo night drives.

How we built it

We built ALEx! collaboratively, dividing the workload among teammates according to each individual's skill set. We also spent the first 5-6 hours documenting the requirements and the tech stack we planned to use, which made the project's vision clear and helped development proceed smoothly.

Challenges we ran into

We committed to the QNX challenge track quite late, after we had already discussed and planned the requirements. This forced a dramatic change of framework: code originally developed for Raspbian had to be reworked for compatibility with QNX OS.

Accomplishments that we're proud of

Two of our teammates worked with AI and APIs for the first time and still made the project work. It was also a new and informative weekend building a project on QNX OS, and we learned a lot about embedded systems and their architecture.

What we learned

Building ALEx! pushed us to learn and integrate several complex technologies. We gained significant experience in:

Real-time Computer Vision on Edge Devices: Implementing fatigue detection (yawning, eye blinking, head nodding) using OpenCV/MediaPipe on a Raspberry Pi required optimizing algorithms for performance on limited hardware.
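
As a rough illustration, here is a minimal sketch of the eye-closure part of that detection using MediaPipe Face Mesh. The landmark indices are the standard Face Mesh eye points; the EAR threshold and frame count are illustrative placeholders, not our tuned values.

```python
# Minimal drowsiness-cue sketch via eye aspect ratio (EAR) with MediaPipe
# Face Mesh. Thresholds below are illustrative, not production-tuned.
import cv2
import mediapipe as mp
from math import dist

# Standard Face Mesh indices for the left eye, in EAR order p1..p6.
LEFT_EYE = [362, 385, 387, 263, 373, 380]
EAR_THRESHOLD = 0.21       # below this, the eye is likely closed
CLOSED_FRAMES_ALERT = 15   # ~0.5 s at 30 fps before we flag drowsiness

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward 0 when closed."""
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)
closed_frames = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark
        # Landmarks are normalized, so scale them to pixel coordinates.
        pts = [(lm[i].x * w, lm[i].y * h) for i in LEFT_EYE]
        closed_frames = closed_frames + 1 if eye_aspect_ratio(pts) < EAR_THRESHOLD else 0
        if closed_frames >= CLOSED_FRAMES_ALERT:
            print("Drowsiness cue detected")  # hand off to the conversation engine
```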

Conversational AI Design: Crafting engaging, natural conversations with Gemini API for diverse scenarios, from personalized small talk to brain games, was a key learning curve. We explored how to maintain coherence and prevent hallucinations in a dynamic interaction.
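
For flavor, a minimal sketch of this style of Gemini-driven engagement using the google-generativeai SDK. The model name, system prompt, and trigger wording are illustrative assumptions, not our exact production values; the chat history is what keeps replies coherent across turns.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel(
    "gemini-1.5-flash",  # illustrative model choice
    system_instruction=(
        "You are ALEx!, a friendly co-driver keeping a solo driver alert. "
        "Reply in one or two short spoken sentences and end with a question "
        "or a quick mental challenge. Never invent facts about the driver."
    ),
)
chat = model.start_chat()  # chat history keeps the conversation coherent

def engage(trigger: str) -> str:
    """Turn a fatigue cue (e.g. 'driver yawned twice') into a spoken prompt."""
    return chat.send_message(
        f"Fatigue cue detected: {trigger}. Engage the driver."
    ).text

print(engage("driver yawned twice in the last minute"))
```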

Audio UX for Driving Environments: Understanding the challenges of voice input/output in a car, including latency, clarity, and driver responsiveness. We learned the importance of clear, concise prompts and adaptable conversation flows.
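
To illustrate the point about concise prompts, a small sketch that caps spoken replies before handing them to an offline text-to-speech engine. pyttsx3 here is a stand-in, not necessarily the TTS stack we shipped, and the word cap and speaking rate are illustrative.

```python
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 165)  # slightly slower than default for road noise

def speak(text: str, max_words: int = 30) -> None:
    """Speak at most max_words so prompts stay short while driving."""
    words = text.split()
    engine.say(" ".join(words[:max_words]))
    engine.runAndWait()

speak("Quick one for you: what's seventeen plus twenty-six?")
```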

System Integration: Bringing together hardware (Raspberry Pi, camera), computer vision, natural language processing, and text-to-speech into a cohesive, responsive system within the hackathon's tight timeframe was a major challenge and learning experience.
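
Conceptually, the integration boils down to a single loop. The three helpers below are hypothetical stand-ins for the vision, conversation, and speech modules sketched above, not our actual module interfaces.

```python
import time

def detect_fatigue_cue():          # stand-in for the OpenCV/MediaPipe module
    return None                    # e.g. "driver yawned twice" when triggered

def engage(cue: str) -> str:       # stand-in for the Gemini conversation module
    return f"I noticed {cue}. Let's play a quick number-plate puzzle!"

def speak(text: str) -> None:      # stand-in for the text-to-speech module
    print(f"[ALEx!] {text}")

while True:
    cue = detect_fatigue_cue()
    if cue:
        speak(engage(cue))         # only interrupt the driver on a real cue
    time.sleep(0.1)                # keep CPU headroom on the Raspberry Pi
```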
