Inspiration

Traveling, socializing, and even learning are things many people enjoy, so we built around that idea: a web application that lets the user translate on the go, translate sign language, and even generate lesson plans for a specific language!

What it does

A web application with three main features: it translates whatever text you input, translates sign language into words, or generates a lesson plan for a specified language that the user can learn from!

How we built it

We used Co:here's multilingual language search as a detection tool whenever the user inputs text, and then used the Google Translate API to translate the text into the user's desired output language. There is also a translation log stored in an SQLite database, but it is currently shared across all users (we haven't figured out per-user logs yet). For sign language, we used OpenCV, MediaPipe, and Cvzone for hand recognition and image drawing, and streamlit-webrtc to display the webcam's point of view on the web application. We used OpenAI to generate lesson plans based on prompts and user choices. Minimal sketches of these pieces follow.
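
Here is a rough sketch of the detect-then-translate-then-log flow. As stand-ins, it uses the langdetect package in place of our Co:here-based detection and the googletrans package in place of the Google Translate API client; the table schema is illustrative, not our exact one.

```python
# Sketch of the detect -> translate -> log pipeline.
# Assumptions: langdetect stands in for Co:here detection,
# googletrans stands in for the Google Translate API client.
import sqlite3
from langdetect import detect          # pip install langdetect
from googletrans import Translator     # pip install googletrans==4.0.0rc1

conn = sqlite3.connect("translations.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS log (source TEXT, detected TEXT, target TEXT, result TEXT)"
)

def translate_and_log(text: str, target_lang: str) -> str:
    detected = detect(text)  # language code, e.g. 'en' or 'fr'
    result = Translator().translate(text, src=detected, dest=target_lang).text
    # NOTE: this table is global -- every visitor sees the same log,
    # which is the shared-log limitation mentioned above.
    conn.execute("INSERT INTO log VALUES (?, ?, ?, ?)",
                 (text, detected, target_lang, result))
    conn.commit()
    return result

print(translate_and_log("Hello, world!", "es"))
```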
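
And a sketch of the lesson-plan generator. The prompt wording and parameters are illustrative, and the call uses the current openai client interface, which may differ from the SDK version we built against.

```python
# Sketch of the lesson-plan generator (prompt text is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_lesson_plan(language: str, level: str) -> str:
    prompt = (
        f"Write a one-week lesson plan for a {level} student "
        f"learning {language}. Include vocabulary, grammar, and practice tasks."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_lesson_plan("Japanese", "beginner"))
```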

Challenges we ran into

We ran into many hand-recognition and image-drawing challenges getting the Streamlit webcam to work with OpenCV/MediaPipe/Cvzone syntax. It was also generally hard to figure out how to build a model that understands sign language (and there are so many sign languages); as we tried to prototype it, we realized it was too hard to finish under the time constraint. The sketch below shows roughly the setup we wrestled with.
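
For reference, a minimal sketch of running Cvzone's HandDetector inside streamlit-webrtc's frame callback, per the library versions we used (newer releases may differ). The fingers-up overlay is just an illustrative placeholder, not actual sign translation:

```python
# Cvzone hand detection drawn onto frames coming through streamlit-webrtc.
import av
import cv2
from cvzone.HandTrackingModule import HandDetector
from streamlit_webrtc import webrtc_streamer

detector = HandDetector(detectionCon=0.8, maxHands=2)

def video_frame_callback(frame: av.VideoFrame) -> av.VideoFrame:
    img = frame.to_ndarray(format="bgr24")  # frame arrives as an av.VideoFrame
    hands, img = detector.findHands(img)    # detects hands and draws landmarks
    if hands:
        fingers = detector.fingersUp(hands[0])  # e.g. [0, 1, 1, 0, 0]
        cv2.putText(img, f"fingers up: {sum(fingers)}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_streamer(key="sign-language", video_frame_callback=video_frame_callback)
```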

Accomplishments that we're proud of

We are proud that we were able to finish the project and even deploy the application on Streamlit (the process was painfully annoying).

What we learned

We learned how to use Streamlit as an application framework, while also diving deeper into machine learning with OpenCV, MediaPipe, and Cvzone.

What's next for self.translate

  • a mobile version of the application for quality of life / ease of use
  • implementing a per-user translation log so each visitor only sees their own history in their browser (see the sketch after this list)
  • implementing a model that can take recognized hand landmarks, match them against sign language, and translate them into words
  • implementing voice recognition for the translation portion
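
One possible approach to the per-user translation log, sketched below: keep the history in Streamlit's per-browser st.session_state instead of a shared SQLite table. This is an assumption about a future design, not something we implemented.

```python
# Per-browser translation history via st.session_state (design sketch only).
import streamlit as st

if "translation_log" not in st.session_state:
    st.session_state["translation_log"] = []  # lives only in this browser session

def log_translation(source: str, result: str) -> None:
    st.session_state["translation_log"].append((source, result))

for source, result in st.session_state["translation_log"]:
    st.write(f"{source} -> {result}")
```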

Built With

cohere, cvzone, google-translate-api, mediapipe, openai, opencv, python, sqlite, streamlit