Inspiration

The inspiration behind AI Sign came from the need to bridge the communication gap between individuals who are hearing impaired and communicate through sign language, and those who do not understand sign language. We wanted to create a solution that could translate signs captured through a webcam into English text, allowing for easier and more effective communication.

What it does

AI Sign is an application that uses computer vision to translate signs captured by a webcam into English text. The application detects hand gestures and movements in real time and processes them to determine the corresponding sign language interpretation. The translated text is then displayed on the screen, enabling users to understand and communicate effectively with sign language users.

How we built it

To build AI Sign, we used a combination of technologies and frameworks. Here's an overview of the key components and the development process:

Computer Vision: We used computer vision algorithms and libraries such as OpenCV to process the video stream from the webcam, detecting and tracking hand movements and gestures.
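
As a rough illustration of this stage, the sketch below shows how a single frame might be preprocessed with OpenCV to isolate a hand region before classification. The HSV skin-tone thresholds and contour logic here are illustrative assumptions, not our exact pipeline.

```python
import cv2

# Illustrative preprocessing for one webcam frame: isolate the hand region
# before passing it to the classifier. The thresholds are assumptions.
def preprocess_frame(frame):
    # Convert to HSV and mask a rough skin-tone range.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.GaussianBlur(mask, (5, 5), 0)

    # Treat the largest contour as the hand candidate and crop to it.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return frame[y:y + h, x:x + w]
```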

Deep Learning: We trained a deep learning model using convolutional neural networks (CNNs) to recognize different sign language gestures. The model was trained on a dataset of sign language images to achieve accurate gesture recognition. TensorFlow, NumPy, pandas, and TensorFlow.js were core libraries in the model's development.
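
For concreteness, here is a simplified sketch of the kind of Keras CNN this step involves; the 64x64 input size, layer widths, and 26-class output are assumptions for illustration, not our exact architecture.

```python
import tensorflow as tf

# Illustrative CNN for classifying sign images; input size and class
# count are assumptions (e.g., one class per ASL letter).
NUM_CLASSES = 26

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```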

Webcam Integration: We used libraries and APIs to capture video frames from the webcam in real time. These frames were then passed through the computer vision and deep learning modules for gesture recognition and translation, using the model we painstakingly developed and trained over the course of the hackathon.
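
Putting the two sketches above together, a capture loop along these lines reads frames, classifies the hand region, and overlays the predicted label. Here `preprocess_frame` and `model` come from the earlier sketches, and `LABELS` is an assumed list mapping class indices to words.

```python
import cv2
import numpy as np

# Illustrative real-time loop: grab a frame, classify it, overlay the text.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hand = preprocess_frame(frame)  # from the OpenCV sketch above
    if hand is not None:
        # Resize to the model's input, add a batch axis, scale to [0, 1].
        batch = cv2.resize(hand, (64, 64))[np.newaxis] / 255.0
        probs = model.predict(batch, verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]  # LABELS is assumed
        cv2.putText(frame, label, (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("AI Sign", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```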

User Interface (UI): We designed a user-friendly interface that displays the translated text in real time, letting users easily interact with the application and follow the sign language translations. Large, easy-to-read text with high contrast, alt text in case an image fails to load, and other touches make the website accessible to a wide variety of users.

Integration and Deployment: Once the core functionality was implemented, we integrated all the components and tested the application extensively. We verified compatibility across platforms (i.e., macOS and Windows) and deployed instances of the website locally.
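
One practical note on local deployment: a TensorFlow.js page has to be served over HTTP rather than opened straight from disk, because the browser fetches the model files. Something as small as Python's built-in server is enough for local testing:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the project directory locally; TensorFlow.js fetches its model
# files over HTTP, so opening index.html directly from disk won't work.
HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()
```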

Challenges we ran into

TensorFlow.js and machine learning in Python were extremely difficult to apply effectively within our project. We ran into a wide assortment of problems while creating, training, and deploying our ML model, from package conflicts to files that refused to download to hunting down PATH variables for newly installed software. Even after countless hours discussing this at length with some of the most technically adept individuals we met at the hackathon, it was still a monumental struggle to actually use our ML model in our project.

Accomplishments that we're proud of

We managed to actually create a website with a functional ML model!! It might not be the most accurate, but it's still at least somewhat functional, and it comes with a beautiful UI so that our users can easily take advantage of the features our application has to offer.

What we learned

We were unaware that machine learning frameworks for JavaScript existed, so this was the first time our team was able to unite a beautiful front-end with a functional ML model and combine it all into one neat little webpage. Flask servers had failed us on several occasions before, so this hackathon we decided to take a chance on a framework we had never used, and we found ways to make it all work out.
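
To illustrate the bridge we discovered: a Keras model trained in Python can be exported with the `tensorflowjs` package into the format the browser loads, along the lines of the sketch below (the file paths are hypothetical).

```python
import tensorflow as tf
import tensorflowjs as tfjs

# Load the trained Keras model (path is hypothetical) and export it into
# TensorFlow.js's web format: a model.json plus binary weight shards.
# The browser can then load it with tf.loadLayersModel('model/model.json').
model = tf.keras.models.load_model("ai_sign_model.h5")
tfjs.converters.save_keras_model(model, "model")
```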

What's next for AI Sign

While AI Sign has already achieved significant milestones, there are several areas for further improvement and expansion:

Expanded Gesture Vocabulary: Enhancing the recognition capabilities of AI Sign by incorporating a broader set of sign language gestures and expressions. This would enable more comprehensive and accurate translations. We would also like to employ natural language processing so that we can connect the phrases corresponding to signs and make our transcripts sound more natural and sentence-like.

Multi-Language Support: Adding support for multiple languages beyond English, allowing users to choose their preferred language for translation.

Mobile Application: Developing a mobile version of AI Sign, making it more accessible and convenient for users to utilize the application on their smartphones and tablets.

Continuous Learning and Updates: Regularly updating the gesture recognition model and translation algorithms based on user feedback and evolving sign language expressions. This would ensure that AI Sign remains accurate and up-to-date with the latest sign language trends.

Integration with Assistive Technologies: Exploring integration opportunities with other assistive technologies and devices, such as smart glasses or haptic feedback systems, to enhance the overall communication experience for individuals with hearing impairments.

The future of AI Sign lies in continuous innovation and collaboration with the sign language community to improve inclusivity and promote effective communication between diverse user groups.

Built With

javascript, numpy, opencv, pandas, python, tensorflow, tensorflow.js