Inspiration 💡
We wanted to try a simple introductory computer vision project. We then learned that Singapore has its own sign language, but that there is little awareness of it and few instructional resources for people who want to learn. Even with new assistive technologies, sign language still has benefits and uses for those who choose to learn it.
We aim to use our project to provide educational resources for those who want to learn sign language, and to encourage more people to pick it up, not just as a means of communicating with the many hard-of-hearing people in Singapore, but also to enhance their communication skills in general.
What it does 🔍
openSign uses models we pre-trained on image data we collected to recognize basic sign language signs. Users can learn from the hand signs shown on screen and repeat them to the AI via webcam to check whether they are doing the correct sign. Users can also quiz themselves to see if they remember the hand signs for given words.
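The quiz flow described above can be sketched in a few lines of Python. This is an illustrative sketch, not our actual code: the sign list, function names, and the confidence threshold are all hypothetical, and `predicted_label`/`confidence` stand in for whatever the detection model returns.

```python
import random

# Hypothetical sign vocabulary for the quiz (illustrative only).
SIGNS = ["hello", "thank you", "yes", "no", "please"]

def pick_quiz_word(signs=SIGNS, rng=random):
    """Choose a word the user must sign on camera."""
    return rng.choice(signs)

def check_answer(expected_word, predicted_label, confidence, threshold=0.7):
    """Count the answer as correct only if the detector's label matches
    the quizzed word AND the detector is confident enough."""
    return predicted_label == expected_word and confidence >= threshold
```

For example, `check_answer("yes", "yes", 0.85)` accepts the sign, while a low-confidence or mismatched detection is rejected, so a half-formed sign doesn't count as a pass.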
How we built it 🔧
- We watched a tutorial to understand how transfer learning can be used to train a model to detect basic objects, then applied what we learned to the specific images we wanted to detect
- We then learned how to export the model for use in TensorFlow.js
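The core idea of the two steps above — freeze a pretrained feature extractor, train only a new head, then export for the browser — can be sketched with `tf.keras`. This is a minimal classification-style sketch, not our actual object-detection pipeline; the base network choice (MobileNetV2), the number of signs, and all names are assumptions.

```python
import tensorflow as tf

NUM_SIGNS = 5  # hypothetical number of hand signs

# Pretrained backbone without its classification head. weights=None keeps
# this sketch download-free; in practice you would use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the feature extractor (transfer learning)

# New trainable head on top of the frozen features.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Export for the browser (left as comments; the exact save call depends
# on the TF/Keras version):
#   tf.saved_model.save(model, "saved_sign_model")
#   $ tensorflowjs_converter --input_format=tf_saved_model \
#       saved_sign_model web_model
```

Because the backbone is frozen, only the small `Dense` head is trained — which is what makes transfer learning feasible on limited data and a slow GPU.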
Challenges we ran into 🏃‍♂️
- Many issues with the TensorFlow installation, since this was our first time using it
- Started with limited knowledge of TensorFlow and no prior hands-on experience
- Some issues with lighting when collecting image data to train our model
- A computer with a slow GPU made model training take a long time, limiting the number of training steps we could run in the available time, which hurt our model's accuracy
- As a two-person team new to object detection, we ran short of time to finish the front end
Accomplishments that we're proud of 🏅
- Managing to collect and label the data to train the model
- Managing to get the model to work in Python with the webcam
- Deciding to do an object detection project instead of the simple linear regression one we initially planned, and making decent progress on learning object detection in the process
What we learned 🧠
- Basic computer vision concepts, an introduction to TensorFlow, and training models via transfer learning
- How complicated the TensorFlow installation can be, and how to fix it when it breaks
What's next for openSign ⏭️
- Expanding the database: adding the 1000 most common signs to provide comprehensive coverage of sign language
- User profiles: letting users keep track of their progress and share it with others
- Soundless video calls: users can practice communicating with other users via video calls without audio
- Competitions: speed and memory competitions to see how fast users can recall signs and their meanings
- Model accuracy improvement: adding more data and cleaning up the code to increase model accuracy
- Community features: learn sign language in groups and level up your sign language skills together