SIGNificant Logo: Hands form the ASL letters for SIGN

Inspiration

  • Sign-language-to-text tools exist, but they are typically limited to recognizing individual letters or words, which makes it really hard to formulate full ideas. We decided to iterate on that and improve the experience.

What it does

  • Train your own gesture-to-text models.
  • Generate natural language seamlessly from sign language.
  • Text-to-speech functionality.
  • Supports over 100 distinct gestures across 4 different modes (one-hand words, one-hand letters, one-hand numbers, two-hand words).
  • One- and two-hand support to cover a wider range of gestures (a minimal landmark-capture sketch follows this list).
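For context, here is a minimal sketch of the capture step (not our exact code; the variable names are illustrative): MediaPipe's hand tracker turns each webcam frame into 21 landmarks per hand, and those coordinates become the feature vector a gesture classifier consumes.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Track up to two hands to match the app's one- and two-hand modes.
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.7) as hands:
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 landmarks x (x, y, z) -> a 63-value feature vector,
                # the input a trained gesture classifier would consume.
                features = [c for lm in hand.landmark for c in (lm.x, lm.y, lm.z)]
        cv2.imshow("SIGNificant", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```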

How we built it

  • OpenCV and MediaPipe for computer vision
  • OpenAI Realtime API for natural-language generation and OpenAI TTS-1 (Alloy voice) for speech
  • scikit-learn for training the gesture classifiers (see the training sketch after this list)
  • Flask for the server backend and HTML for the frontend
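Here is a rough sketch of the training step, assuming landmark vectors have already been collected into NumPy arrays. The file names and the choice of classifier are assumptions for illustration; the writeup doesn't specify them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset layout: one 63-value landmark vector per row,
# with a gesture label per sample.
X = np.load("landmarks.npy")
y = np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A random forest is one reasonable choice for tabular landmark data.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The text-to-speech step is then roughly a single call to OpenAI's TTS-1 model with the Alloy voice (the input text and output path here are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
speech = client.audio.speech.create(model="tts-1", voice="alloy",
                                    input="Nice to meet you!")
speech.write_to_file("sentence.mp3")
```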

Challenges we ran into

  • Learning sign language.
  • Collecting data and training models takes a long time.
  • Building the frontend without a dedicated front-end expert was painful.

Accomplishments that we're proud of

  • We are making a real impact for people who need it.
  • We challenged ourselves to integrate new technologies into our project (e.g., computer vision and machine learning).
  • The final product exceeded our initial scope, which was simply converting signs into sentences.

What we learned

  • ASL is hard but also fun to learn
  • MediaPipe is really cool

What's next for SIGNificant

  • More sign languages, such as BSL and QSL.
  • Dynamic motion recognition (started on our motion branch).
  • Mobile support for phones and tablets (Android and iOS).
  • Different personality options (casual/professional).
  • Context support (adding more context, such as "I'm in a meeting, my colleagues' names are...").
  • More voice options (maybe even imitating the user's own voice for TTS).
