Inspiration

Since VR already connects people digitally, we wanted to make it more inclusive by creating an interface that translates sign language into voice/text. This would allow people who are mute to communicate using hand signs in a VR environment. In the future, we intend to develop this tool to translate in real time and to interpret more complex hand signs.

What it does

Once connected to the headset, the user can take a screenshot of what they see and then speak into the system to prompt a generative model to interpret what the screenshot shows. Going forward, we intend to integrate our own classification model into the Meta Quest to classify hand signs in real time.
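As an illustration of this flow, here is a minimal sketch of sending a screenshot together with a transcribed voice prompt to a multimodal model. The headset build uses the image capture + AI toolkit, and this writeup does not name the backend model, so the OpenAI Python SDK and model name below are stand-in assumptions rather than our actual implementation.

```python
# Hypothetical sketch: send a headset screenshot plus a transcribed voice prompt
# to a multimodal generative model. The project itself uses the image capture
# + AI toolkit inside the Quest; the OpenAI SDK here is only a stand-in.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def interpret_screenshot(image_path: str, spoken_prompt: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": spoken_prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# In the real flow the prompt would come from speech-to-text on the headset.
print(interpret_screenshot("screenshot.png",
                           "Which sign language letter is shown in this image?"))
```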

How we built it

First, we used the image capture + AI toolkit: we initiate an image capture and feed the image through the toolkit, which responds with feedback on which sign-language letter you are displaying along with suggestions on how to improve. For the image classifier, we used TensorFlow to build the model, training it on a dataset of 3,000 entries (500 per output class) to classify hand signs from A to F.
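This writeup does not spell out the classifier architecture, so the following is only a minimal sketch of what a six-class (A–F) TensorFlow/Keras model trained on 500 images per class could look like; the directory layout, image size, and layer choices are assumptions.

```python
# Minimal sketch of a six-class (A-F) hand-sign classifier in TensorFlow/Keras.
# Only TensorFlow, 3,000 images, and 500 per class are specified in the writeup;
# everything else here (input size, layers, epochs) is an assumption.
import tensorflow as tf

IMG_SIZE = (128, 128)
NUM_CLASSES = 6  # letters A through F

# Expects data/ with one subfolder per letter, e.g. data/A, data/B, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("handsign_af.keras")
```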

Challenges we ran into

One of the first challenges was that none of our team members had experience building in Unity, so we had to learn it from scratch over the past week. Secondly, there was a logistical issue: there were not enough headsets for all the teams, so for most of the first two days we were unable to test our product. Thankfully, the other hackers were helpful and lent us their headsets periodically, and the XR Bootcamp team also pulled through, helping us procure more headsets and letting us bring them home to keep working on the project. Last and most importantly, it was tough to integrate our video capture and classification interface into the XR platform, so in the interest of time we improvised and settled on an image capture toolkit. There are also some limitations when hand signs are similar, for example signs that closely resemble E.

Accomplishments that we're proud of

We are proud that we were able to pick up Unity within the limited time we had and push out a working demo. We are also extremely grateful for each other as teammates; we supported one another throughout the past week and overcame the various challenges together.

What we learned

We learnt how to build a simple scene in Unity. We also learnt how to build a local classification model, training it on our own generated data and achieving up to 99.5% accuracy.

What's next for Handycaptain

Moving forward, we intend to continue working on our product, with the goal of integrating our classification model so it can identify hand signs in real time in XR; this is the model we intend to incorporate into our application. We hope this will help spread awareness and wider use of sign language. We also intend to improve the UI/UX to provide a more complete experience.
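As a rough illustration of the real-time direction (outside the headset), the sketch below runs the saved classifier from the earlier training sketch on webcam frames with OpenCV. The model file name and preprocessing are assumptions carried over from that sketch, and the eventual goal is to run this inside the Quest rather than on a desktop.

```python
# Rough desktop prototype of real-time classification: grab webcam frames,
# resize them to the classifier's input size, and predict a letter per frame.
# This mirrors the training sketch above and is not the in-headset integration.
import cv2
import numpy as np
import tensorflow as tf

LETTERS = ["A", "B", "C", "D", "E", "F"]
model = tf.keras.models.load_model("handsign_af.keras")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the assumed training input size (128x128 RGB).
    rgb = cv2.cvtColor(cv2.resize(frame, (128, 128)), cv2.COLOR_BGR2RGB)
    probs = model.predict(np.expand_dims(rgb.astype("float32"), axis=0),
                          verbose=0)[0]
    letter = LETTERS[int(np.argmax(probs))]

    cv2.putText(frame, f"{letter} ({probs.max():.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Handycaptain prototype", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```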
