Inspiration

Mental health struggles and time spent indoors during the pandemic have led many people to take up yoga for better health, often with improper technique. We want to correct improper yoga poses so our users can avoid injury, maximize benefits, and stick with their yoga routine.

What it does

A personal AI instructor corrects improper yoga poses in real time by detecting and pointing out your incorrectly angled joints, all through a phone camera. You can pick a specific pose to imitate, or go freestyle, where the closest-matching pose is detected automatically.
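Freestyle mode boils down to a nearest-match search: compare the user's joint angles against each stored pose and pick the one with the smallest average deviation. A minimal sketch of that idea is below; the pose names, joint ordering, and angle values are illustrative assumptions, not our actual data.

```java
import java.util.Map;

// Hypothetical freestyle-mode classifier: each stored pose is a vector of
// joint angles in degrees (same joint order for every pose), and the user's
// pose is the one with the lowest mean absolute angle deviation.
public class PoseClassifier {
    // Placeholder standards -- real values would come from Firebase.
    static final Map<String, double[]> STANDARDS = Map.of(
        "Warrior II", new double[] {180.0, 90.0, 175.0},
        "Tree",       new double[] { 90.0, 45.0, 180.0}
    );

    static String closestPose(double[] userAngles) {
        String best = null;
        double bestDev = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : STANDARDS.entrySet()) {
            double dev = 0;
            for (int i = 0; i < userAngles.length; i++) {
                dev += Math.abs(userAngles[i] - e.getValue()[i]);
            }
            dev /= userAngles.length;
            if (dev < bestDev) {
                bestDev = dev;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Angles close to the Warrior II standard should match it.
        System.out.println(closestPose(new double[] {178.0, 93.0, 170.0}));
    }
}
```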

How we built it

We built the UI in Figma, then implemented it in Android Studio and connected the camera to Google's ML Kit Pose Detection API. We derived joint-angle standards for proper poses from popular YouTube videos and stored that data in Firebase. The app compares these stored standards against the user's position from the live camera feed, and notifies the user when a joint's angle deviates too far from the corresponding standard.
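The core comparison is simple geometry: a joint angle is the angle at a middle landmark (say, the elbow) formed by its two neighbors (shoulder and wrist). The sketch below assumes ML Kit has already produced 2D landmark coordinates; the 15-degree tolerance is an illustrative placeholder, not the threshold we shipped.

```java
// Minimal sketch of the joint-angle check, assuming landmark (x, y)
// coordinates are already available from pose detection.
public class JointAngle {
    // Angle at vertex B (in degrees) formed by segments B->A and B->C,
    // e.g. the elbow angle from shoulder (A), elbow (B), and wrist (C).
    static double angleAt(double ax, double ay, double bx, double by,
                          double cx, double cy) {
        double angle = Math.toDegrees(
            Math.atan2(cy - by, cx - bx) - Math.atan2(ay - by, ax - bx));
        angle = Math.abs(angle);
        // Normalize to the interior angle in [0, 180].
        return angle > 180.0 ? 360.0 - angle : angle;
    }

    // Flag the joint when the user's angle strays too far from the standard.
    static boolean needsCorrection(double userAngle, double standardAngle,
                                   double toleranceDeg) {
        return Math.abs(userAngle - standardAngle) > toleranceDeg;
    }

    public static void main(String[] args) {
        // A fully extended arm: shoulder (0,0), elbow (1,0), wrist (2,0).
        double straight = angleAt(0, 0, 1, 0, 2, 0);
        System.out.println(needsCorrection(straight, 180.0, 15.0));
    }
}
```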

Challenges we ran into

For some of us, this was our first time building an app from scratch, and that was our biggest challenge, especially on such a short timeframe. Android Studio failed to work properly on all of our devices, and integrating the API into the application introduced many obstacles. While the API provides a great framework for applying pose detection to yoga, it's ultimately up to the programmer to define each pose. Making sure we defined poses correctly and accurately was a lot tougher than we'd initially thought.
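Concretely, "defining a pose" meant deciding, for each joint, what angle counts as correct and how much deviation to allow. One way to sketch that is a map from joint name to a target angle and tolerance; the joints, angles, and tolerances below are made-up placeholders rather than our actual Firebase records.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical pose definition: each joint gets a target angle and a
// per-joint tolerance, and a frame check returns the joints to correct.
public class PoseDefinition {
    static class JointTarget {
        final double angleDeg, toleranceDeg;
        JointTarget(double angleDeg, double toleranceDeg) {
            this.angleDeg = angleDeg;
            this.toleranceDeg = toleranceDeg;
        }
    }

    // Placeholder definition of one pose.
    static final Map<String, JointTarget> WARRIOR_II = new LinkedHashMap<>();
    static {
        WARRIOR_II.put("leftKnee",  new JointTarget(90.0, 12.0));
        WARRIOR_II.put("rightKnee", new JointTarget(175.0, 10.0));
        WARRIOR_II.put("leftElbow", new JointTarget(180.0, 15.0));
    }

    // Names of joints whose measured angle falls outside its tolerance.
    static List<String> jointsToFix(Map<String, Double> measured,
                                    Map<String, JointTarget> pose) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, JointTarget> e : pose.entrySet()) {
            Double angle = measured.get(e.getKey());
            if (angle == null) continue; // joint not visible in this frame
            JointTarget t = e.getValue();
            if (Math.abs(angle - t.angleDeg) > t.toleranceDeg) {
                out.add(e.getKey());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Double> measured = new LinkedHashMap<>();
        measured.put("leftKnee", 120.0);   // too straight: flagged
        measured.put("rightKnee", 176.0);  // within tolerance
        measured.put("leftElbow", 170.0);  // within tolerance
        System.out.println(jointsToFix(measured, WARRIOR_II));
    }
}
```

Per-joint tolerances matter because some joints (like a bent front knee) need tighter bounds than others to keep feedback useful rather than noisy.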

Accomplishments that we're proud of

We were happy to work on a project that tackles a very relevant issue many of us are facing during this pandemic. Even at this early stage, YogAi has the potential to improve users' well-being simply by encouraging them to stay active on their own. The app helps them get the most benefit from yoga, all without leaving home or paying for classes. Finally, getting Google's Pose Detection API to work with our mobile application and making the UI completely interactive were our biggest technical accomplishments.

What we learned

How to build a mobile application from the ground up: working with Figma to create an aesthetically pleasing UI, integrating that design in Android Studio, working with ML Kit, and appreciating the importance of research and collaboration.

What's next for YogAi

We want to add more poses to the Firebase storage, along with user settings for a better experience. Voice prompts that tell the user specifically how to improve each pose are the feature we most want to implement, and one that could take YogAi to the next level.
