Inspiration

Dance is an integral and valuable human experience witnessed all around the world. For many, it is a joyful, social activity: a way to connect with others, compete, or simply express themselves. However, a very common problem people face while attempting to dance, especially in a public setting, is a lack of dance skills and a fear of embarrassment. To address this, we've created an app that allows users to learn how to dance in the comfort of their own home with the help of an asynchronous instructor. Our software provides a judgment-free environment where anyone can develop their dance skills and become a confident dancer. We believe that dancing is a skill that should be accessible to everyone, and we hope our service provides a simple, effective avenue for users to take the first step toward realizing their dancing potential.

What it does

The application utilizes advanced techniques in machine learning and computer vision to guide the user through a comprehensive warm-up and dance routine. It leverages the OpenCV library and the MediaPipe framework to perform real-time analysis of the user's posture and form. The warm-up component of the app employs a collection of yoga poses to generate a set of computerized nodes that represent the optimal stretching positions.

Using computer vision algorithms, the app analyzes images of the user's posture and superimposes the computerized node model over the user's image to identify deviations from the optimal form. This comparison is based on the calculation of angles between nodes, ensuring that the app can provide personalized feedback regardless of the user's background, height, or build.
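To make the angle-based comparison concrete, here is a minimal sketch of how the angle at a joint node might be computed from normalized MediaPipe landmark coordinates. The function name, landmark indices, and threshold logic are illustrative, not the app's exact implementation:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at node b (in degrees) formed by nodes a-b-c.

    Each argument is an (x, y) pair of normalized landmark coordinates,
    so the result is independent of the user's height, build, or
    distance from the camera.
    """
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    ba, bc = a - b, c - b
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# e.g. right-elbow angle from MediaPipe Pose landmarks
# 12 (shoulder), 14 (elbow), 16 (wrist):
# user_angle = joint_angle(shoulder_xy, elbow_xy, wrist_xy)
# deviation = abs(user_angle - reference_angle)
```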

The results are stored as JSON objects in Firebase and processed using Python. This enables the app to analyze the user's posture in real time, making it an effective tool for correcting form and promoting proper stretching and posture.
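A minimal sketch of what persisting one processed result to Firebase might look like using the firebase-admin Python SDK; the credential path, database URL, and record fields below are hypothetical stand-ins for the app's actual schema:

```python
import firebase_admin
from firebase_admin import credentials, db

# Hypothetical service-account file and database URL
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://dancevision.firebaseio.com"
})

# One processed warm-up result, stored as a JSON object
session = {
    "user": "demo_user",
    "pose": "warrior_ii",
    "joint_angles": {"right_elbow": 172.4, "right_knee": 98.1},
    "deviation_deg": {"right_elbow": 3.2, "right_knee": 11.7},
}
db.reference("sessions/demo_user").push(session)
```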

The dance routine itself is a far more immersive experience. Users who want to spruce up their steps or learn a new routine can access our library of tutorials or even upload dance videos they want to learn. The user is shown a routine and then asked to perform it to the best of their ability. Our program takes the user's movement nodes, compares them to the instructor's, and identifies the specific timestamps where the deviation is significant. These timestamps are displayed to the user so they can see exactly where and how they need to improve. The user also receives an accuracy score showing how closely their performance matched the tutorial, and can repeat this process until they reach a satisfactory score, resting easy knowing they have made concrete improvements in their passion.

The app also offers a valuable tool for dance teams, providing a structured and efficient way for team members to practice and improve their skills. Dance teams can assign specific modules for each member to complete, setting a minimum accuracy score that the dancer needs to achieve.

This is particularly useful in situations where a dancer may have missed a practice session or the team wants to ensure that everyone is familiar with the routine before the next practice. The module presents the dance routine and captures the user's performance through movement nodes. After the user has completed the routine, the app provides a detailed breakdown of any areas where the dancer may have struggled. The user can then practice repeatedly until they reach a satisfactory accuracy score, growing more familiar with the routine on each repetition.

This approach allows dance teams to ensure that all members are on similar skill levels and can practice from the comfort of their own homes. The app's use of advanced machine learning and computer vision technologies streamlines the process of improving dance skills, making it an essential tool for dance teams seeking to maximize their performance and success.

How we built it

To achieve our Individual Training feature, we first take a tutorial video. We process and break down the video into a series of images (frames) using OpenCV. Next, we use Google's machine-learning-based MediaPipe Pose solution to trace and track a set of 33 nodes (indexed 0-32) on various points of the person in the tutorial. We then run various scripts and mathematical relationships to find the relative distances and relative angles between these nodes and save them in a 3D list.
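A condensed sketch of that first pass, assuming the classic mediapipe.solutions API; the function and variable names are our own:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_landmarks(video_path):
    """Return one list of (x, y, z) node tuples per frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV decodes frames as BGR
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                frames.append([(lm.x, lm.y, lm.z)
                               for lm in result.pose_landmarks.landmark])
    cap.release()
    return frames  # a 3D list: frames x nodes x coordinates
```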

Afterward, we have the user input a recording of their attempt at the dance step/routine and perform the same analysis on their input. Then, we map the nodes of each frame of the first video to the nodes of each frame of the second video using a variation of the Dynamic Time Warping algorithm called FastDTW (Fast Dynamic Time Warping). This produces a time-based index matching from each frame of the tutorial video to a frame of the user's video, based on the position of the person and the dance move being performed.
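A sketch of this alignment step using the fastdtw package; the sequence shapes and placeholder data are illustrative, with each element standing in for one frame's vector of joint angles:

```python
import numpy as np
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean

tutorial_seq = np.random.rand(300, 8)  # placeholder: 300 frames x 8 joint angles
user_seq = np.random.rand(280, 8)      # placeholder: the user's attempt

distance, path = fastdtw(tutorial_seq, user_seq, dist=euclidean)
# `path` is a list of (tutorial_frame, user_frame) index pairs: a
# time-based matching of each tutorial frame to a user frame, robust
# to the user dancing slightly faster or slower than the instructor.
```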

Finally, we assess the percent error between the tutorial input and the user input to output several useful quantitative and qualitative metrics that help the user master the move. For example, we provide an overall "Dance Score" that lets users know how closely their performance matched the tutorial. We also provide an array of timestamps showing exactly where the user deviated from the tutorial beyond a certain degree, so they can pinpoint where they went off and learn to fix their mistakes. And finally, we output a video of the user dancing with the tutorial's nodes superimposed as an overlay, so that, alongside the list of timestamps, they can see how and why they were off (in what direction, at what speed, by what angle, and so forth).
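A sketch of how these metrics could be derived from the aligned frames; the threshold, frame rate, and scoring formula below are illustrative stand-ins, not the app's exact constants:

```python
import numpy as np

def score_and_timestamps(tutorial_seq, user_seq, path, fps=30.0,
                         threshold_deg=20.0):
    """Return an overall Dance Score and the timestamps of large deviations."""
    # Mean absolute angular error for each matched frame pair
    errors = np.array([np.abs(tutorial_seq[i] - user_seq[j]).mean()
                       for i, j in path])
    # Dance Score: 100% minus the mean per-frame error, normalized
    # against the deviation threshold (clamped at zero).
    score = max(0.0, 100.0 * (1.0 - errors.mean() / threshold_deg))
    # Timestamps (seconds into the tutorial) where the user strayed
    # beyond the threshold.
    timestamps = sorted({round(path[k][0] / fps, 2)
                         for k in np.flatnonzero(errors > threshold_deg)})
    return score, timestamps
```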

Challenges we ran into

This was our team's first experience with OpenCV and Google MediaPipe, so learning the capabilities of these machine-learning-based computer vision tools took a long time. We also had several issues where our environment stopped responding, and resolving them took a lot of precious time away from us.

Accomplishments that we're proud of

Getting a working program that utilizes such complex software is definitely a highlight of every team member's hacking career, especially since this was the first hackathon for most of our team! Using MediaPipe to measure angles and calculate accuracy is another accomplishment we're proud of.

What we learned

We learned how to use Google MediaPipe, OpenCV, and Streamlit. We also learned how to use Miniconda, a virtual-environment manager that supports packages that wouldn't work in our setup otherwise (TensorFlow and other machine-learning libraries).

What's next for DanceVision

DanceVision's next step is to launch the program on a consumer-accessible platform such as a mobile app or website. Along the way, we hope to apply to and participate in the various start-up resources available to us, including the CREATE-X incubator at Georgia Tech. As we develop our app for the market, we plan to add multiple tutorial libraries so users can learn a wide variety of dance routines across an ever larger range of genres.

During and after launch, we hope to engage in social media marketing and customer engagement to make DanceVision both a household name and the go-to software for learning how to dance and managing asynchronous learning for dance teams.

Built With

Python, OpenCV, MediaPipe, Streamlit, Firebase
