Inspiration
The inspiration for our model came from the idea of creating a new way to learn through augmented reality. With the rise of social platforms such as Facebook, Twitter, and YouTube, sharing videos and photos has become seamless, and as a result anyone can learn almost anything. Yet concepts that involve physical movement, such as a dance choreography, an acrobatic move, or a fitness workout, can be difficult to follow because of the limitations of a 2D video. With our model, mentors and trainers can precisely demonstrate how to perform a movement, and those who follow along can see exactly where each body part should be at any given moment.
What it does
Using the Unity engine and Wrnch's motion capture, our program creates a 3D animation of the person in any video we upload. For example, a video of someone jumping is translated into a 3D animation in Unity, so every side of the person can be seen, not just the "front" perspective.
How I built it
Using Wrnch's API, we obtained a JSON file storing the positions of the individual's body parts at each time frame, and translated that data into a form Unity can understand. We wrote a C# script linked to our Unity model, so that every time the animation plays, it reads the pose information from the JSON file.
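As a rough illustration of the approach (not our exact code), the sketch below shows how per-frame joint positions from a JSON file could be parsed with Unity's JsonUtility and applied to sphere transforms each frame. The field names and frame layout here are assumptions; Wrnch's actual schema differs.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical frame layout: parallel arrays of joint coordinates.
// Wrnch's real schema differs; this only illustrates the idea.
[System.Serializable]
public class JointFrame
{
    public float[] x;
    public float[] y;
    public float[] z;
}

[System.Serializable]
public class PoseClip
{
    public List<JointFrame> frames;
}

public class PosePlayer : MonoBehaviour
{
    public TextAsset poseJson;        // the exported JSON, added as a TextAsset
    public Transform[] jointSpheres;  // one sphere per tracked joint
    public float framesPerSecond = 30f;

    private PoseClip clip;

    void Start()
    {
        // JsonUtility needs a serializable wrapper object at the top level.
        clip = JsonUtility.FromJson<PoseClip>(poseJson.text);
    }

    void Update()
    {
        if (clip == null || clip.frames.Count == 0) return;

        // Pick the frame for the current playback time, looping.
        int i = (int)(Time.time * framesPerSecond) % clip.frames.Count;
        JointFrame f = clip.frames[i];

        for (int j = 0; j < jointSpheres.Length && j < f.x.Length; j++)
        {
            jointSpheres[j].localPosition = new Vector3(f.x[j], f.y[j], f.z[j]);
        }
    }
}
```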
Challenges I ran into
The majority of our team had never used Unity or the C# language it is scripted in, which meant we were building our program while learning the tools at the same time. This was challenging, as we had to dedicate a lot of our time to understanding how Unity is programmed before we could implement the model we wanted. Additionally, once our video was converted into a JSON file through Wrnch's API, working with the per-frame body-part data was difficult: we needed to translate it into a form Unity understands in order to create and manipulate a 3D animation of the person in the uploaded video (a conversion like the sketch below).
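One concrete example of that translation: pose estimators typically report image-space coordinates with the origin at the top left and y growing downward, while Unity's y axis points up. A hypothetical conversion, assuming normalized [0, 1] coordinates, might look like:

```csharp
using UnityEngine;

// Hypothetical helper: convert an image-space joint position (origin at
// top-left, y down, normalized to [0, 1]) into Unity's world space (y up).
static class PoseSpace
{
    public static Vector3 ToUnity(float x, float y, float z, float scale = 2f)
    {
        // Flip the y axis and scale the normalized coordinates up to
        // a comfortable size in the scene.
        return new Vector3(x * scale, (1f - y) * scale, z * scale);
    }
}
```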
Accomplishments that I'm proud of
Our main accomplishment is applying machine learning to a concept with everyday uses. This could be as simple as gamers capturing specific moves for their characters, or fitness instructors teaching students how to perform a workout without being physically present, which also reduces the risk of injury from doing a movement incorrectly.
What I learned
We learned how to use Unity and C# to work with the data from the JSON file. We studied how scripts drive animations, and went a step further by creating an animation that mimics the exact movement in the video.
What's next for quickcap
To display our model more clearly, we would like a more precise visual representation of the individual, rather than just spheres at the body's pivot points. This means creating additional 3D objects, such as legs and a torso rendered as sticks, and connecting each pivot point (see the sketch below). On the gaming side, in Mecanim, Unity's widely used animation system, humanoid animations all share the same skeleton; the challenge is producing the movement itself, which can be expensive and time consuming because artists must hand-author their characters' motions. With our motion-capture pipeline, these artists could quickly and cheaply animate the humanoids they have created.
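A minimal sketch of the "sticks" idea, assuming the joints are already driven as spheres: stretch a default Unity cylinder between two joint transforms every frame. The component name and fields are hypothetical.

```csharp
using UnityEngine;

// Minimal sketch: stretch a default Unity cylinder between two joint
// spheres each frame, so a limb renders as a stick instead of two points.
public class StickBone : MonoBehaviour
{
    public Transform jointA;   // e.g. the knee sphere
    public Transform jointB;   // e.g. the ankle sphere
    public float thickness = 0.05f;

    void LateUpdate()
    {
        Vector3 mid = (jointA.position + jointB.position) * 0.5f;
        Vector3 dir = jointB.position - jointA.position;

        transform.position = mid;
        // Unity's default cylinder points along its local y axis
        // and is 2 units tall, hence the 0.5f on the y scale.
        transform.up = dir.normalized;
        transform.localScale = new Vector3(thickness, dir.magnitude * 0.5f, thickness);
    }
}
```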