Inspiration

Sign language is a lifeline for those who are deaf. We wanted to use hacking to bridge the gap between a visual language and spoken conversation, and so the Sign Language Translator was born.

What it does

Our application lets deaf users join a video call and be understood instantaneously. It is a full video-call system with our ML program running alongside it to translate sign language into conversation in real time.

How we built it

Our proof of concept was built in Python. Our work-in-progress application is built with the Vonage API, Node.js, HTML, and CSS.
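To illustrate the translation step, here is a minimal sketch of how a sign-recognition stage like the one in our Python proof of concept could work. Everything in it is illustrative rather than our actual implementation: it assumes hand poses arrive as flat landmark vectors (for example, from a hand-tracking library) and matches each observed pose to the nearest stored template sign by Euclidean distance.

```python
import math

# Hypothetical per-sign templates: sign label -> reference landmark vector.
# In a real system these would be learned from training data, not hand-picked.
TEMPLATES = {
    "HELLO": [0.1, 0.9, 0.4],
    "THANKS": [0.8, 0.2, 0.5],
}

def classify_sign(landmarks):
    """Return the label of the template closest to the observed landmarks."""
    def dist(a, b):
        # Euclidean distance between two landmark vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(TEMPLATES[label], landmarks))

print(classify_sign([0.15, 0.85, 0.45]))  # closest template is "HELLO"
```

In the full application, each recognized label would be appended to the call's running transcript so the other participant can follow the conversation as text.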

Challenges we ran into

Our primary challenge was integrating our ML models into the work-in-progress application. This was the impetus for building the Python proof of concept, which demonstrates the efficacy of the translation approach on its own.

Accomplishments that we're proud of

We're really proud of designing the proof-of-concept Python application within the 24-hour window; it demonstrates the capabilities of the translator we developed. We're also very pleased with the progress on our full-fledged application: the video API is set up, along with other components such as the homepage.

What we learned

We learned a lot about the Vonage APIs, as well as about the challenges faced by the deaf community. Formulating a solution through hacking was a truly enlightening experience for us.

What's next for Sign Language Translator

Our next steps are to fully develop our work-in-progress video-calling application for two-way communication. More specifically, we hope to merge the video-calling functionality with our ML model.

Built With

Python, Node.js, Vonage API, HTML, CSS
