Inspiration

As passionate yoga enthusiasts, we love the practice—but we’ve all experienced the frustration of back pain and muscle strain from improper posture. Small misalignments, like an off-centered hip or an incorrectly placed leg, may seem minor at first but can lead to serious long-term health issues. What if there was a way to correct these mistakes in real-time and prevent injuries before they happen?

That’s exactly what inspired our project—creating a solution that helps yogis refine their form, stay pain-free, and get the most out of every session.

What it does

This web app offers a variety of yoga workouts tailored to different needs—whether it's a gentle morning stretch or an intensive yoga session. Once a workout begins, the app uses real-time pose detection to ensure proper form, making it both an interactive experience and an effective training tool.

The app functions like a game: a progress bar fills up as the user holds each pose correctly. Every position has a 15-second window, and if the user maintains proper form for at least 10 seconds, they earn points. After each pose, the app provides instant feedback—suggesting adjustments for better alignment or offering words of encouragement to keep users motivated. This blend of guided correction and gamification makes yoga practice more engaging, effective, and injury-free.
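The scoring mechanic above can be sketched in a few lines. This is a minimal, illustrative sketch (function and variable names are ours, not from the actual codebase), assuming the detector classifies each video frame as correct or incorrect:

```javascript
// Minimal sketch of the pose-hold scoring loop (names are illustrative).
// Each pose gets a 15-second window; holding correct form for at least
// 10 cumulative seconds earns the points for that pose.
const POSE_WINDOW_S = 15;
const HOLD_REQUIRED_S = 10;

function scorePoseWindow(frames, fps = 30) {
  // frames: array of booleans, one per video frame, true if the pose
  // was classified as correct on that frame.
  const windowFrames = frames.slice(0, POSE_WINDOW_S * fps);
  const correctSeconds = windowFrames.filter(Boolean).length / fps;
  const passed = correctSeconds >= HOLD_REQUIRED_S;
  return {
    progress: Math.min(correctSeconds / HOLD_REQUIRED_S, 1), // fills the progress bar
    points: passed ? 100 : 0, // point value is a placeholder
    passed,
  };
}
```

The `progress` field is what would drive the on-screen progress bar as the hold accumulates.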

How we built it

We used React Native with Bootstrap for a responsive interface, BlazePose for real-time pose detection, and the Gemini API to generate personalized feedback.

Challenges we ran into

Pose Detection Accuracy:

One major challenge was achieving accurate pose detection. Initially, we mapped body coordinates to predefined positions, but this approach failed when users were off-center or positioned differently from the training data, causing misclassification even when the form was correct. To solve this, we switched to a relative pose estimation approach, using the hip node as an anchor and calculating joint angles instead of relying on fixed positions. This adaptation allowed the model to dynamically adjust for different body types and positioning.
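The joint-angle idea can be sketched as follows. Because an angle at a joint is unchanged by where the user stands or how large they appear in frame, it sidesteps the off-center misclassification problem. This is an illustrative sketch, not our exact implementation; the landmark names and the 20-degree tolerance are assumptions:

```javascript
// Angle at point b (in degrees) formed by segments b->a and b->c.
// Angles are invariant to translation and scale, so this works no matter
// where the user stands relative to the camera.
function jointAngle(a, b, c) {
  const v1 = { x: a.x - b.x, y: a.y - b.y };
  const v2 = { x: c.x - b.x, y: c.y - b.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const mag = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

// Example rule: front knee bent to roughly 90 degrees (as in Warrior II).
// The tolerance is a tuning choice, not a value from our app.
function isFrontKneeBent(landmarks) {
  const angle = jointAngle(landmarks.leftHip, landmarks.leftKnee, landmarks.leftAnkle);
  return Math.abs(angle - 90) < 20;
}
```

A pose is then described as a set of such angle constraints anchored at the hip, rather than a set of absolute coordinates.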

Additionally, we faced issues with high sensitivity, where the model frequently fluctuated between detecting correct and incorrect poses. To improve stability, we adjusted the confidence threshold and implemented a buffer function that captures an image for Gemini API feedback only when the user maintains the correct pose for a few seconds. These refinements ensured more accurate detection and actionable feedback, preventing unnecessary misclassifications.
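The buffer idea can be sketched like this. All names and the specific threshold/duration values below are illustrative assumptions; the point is that any single-frame wobble resets the counter, so only a genuinely stable hold triggers a capture:

```javascript
// Sketch of a stabilization buffer: a frame is captured for Gemini
// feedback only after the pose has been classified as correct, above a
// confidence threshold, for several consecutive seconds. This filters
// out the frame-to-frame flicker of the detector.
const CONFIDENCE_THRESHOLD = 0.8; // illustrative value
const HOLD_BEFORE_CAPTURE_S = 3;  // illustrative value

function makeCaptureBuffer(fps = 30) {
  let consecutiveCorrect = 0;
  let captured = false;
  return function onFrame(poseCorrect, confidence) {
    if (poseCorrect && confidence >= CONFIDENCE_THRESHOLD) {
      consecutiveCorrect += 1;
    } else {
      consecutiveCorrect = 0; // any wobble resets the buffer
    }
    if (!captured && consecutiveCorrect >= HOLD_BEFORE_CAPTURE_S * fps) {
      captured = true;
      return true; // signal the caller to snapshot this frame for feedback
    }
    return false;
  };
}
```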

Gemini API image input:

During development, we struggled to feed images into the Gemini API: it initially failed to read our base64-encoded images. After a closer review of the documentation, we attempted adding the expected prefix, which led to further issues. Moving forward, we plan to refine our API integration to ensure more reliable image processing.
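For reference, the request shape the Gemini API expects for inline images looks like the sketch below. One common pitfall with base64 input (which may or may not be the exact issue we hit) is that `inline_data.data` must be the raw base64 string only; a data-URL prefix such as `data:image/jpeg;base64,` must not be included. The helper name and prompt are our own:

```javascript
// Builds a Gemini generateContent request body with an inline JPEG.
// Strips a data-URL prefix if the capture pipeline added one, since the
// API expects raw base64 in inline_data.data.
function buildGeminiImageRequest(base64Jpeg, prompt) {
  const data = base64Jpeg.replace(/^data:image\/\w+;base64,/, "");
  return {
    contents: [
      {
        parts: [
          { text: prompt },
          { inline_data: { mime_type: "image/jpeg", data } },
        ],
      },
    ],
  };
}
```

This body would then be POSTed to the model's generateContent endpoint along with an API key.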

Accomplishments that we're proud of + What we learned

As first-time hackers with no prior experience in computer vision or the Gemini API, we stepped into this project with no clear expectations. Despite coming up with the idea late, we pushed ourselves to learn, adapt, and collaborate effectively.

Throughout the process, we not only built a functional app but also deepened our technical knowledge in pose estimation, UI/UX design, and API integration. Most importantly, we strengthened our teamwork and problem-solving skills, making this experience both challenging and rewarding.

What's next for ZenPose

We plan to enhance ZenPose by refining pose-estimation accuracy and expanding the library of supported poses and exercises. Additionally, we aim to improve real-time feedback, optimize the UI/UX for a more seamless experience, and explore integration with wearable devices for enhanced tracking. Future updates will also focus on personalizing recommendations based on user progress and feedback.

Built With

React Native, Bootstrap, BlazePose, Gemini API
