Gallery: the Lingclusive main page with its playful and colorful theme; the app's main feature, a multi-national sign language live translator; the About Us page; and mobile views of the translator, the About Us page, the sidebar, and the main page.
Inspiration
The inspiration for Lingclusive stemmed from a profound desire to bridge the communication gap between the deaf and hearing communities. Witnessing the everyday struggles faced by deaf individuals in engaging with others sparked a commitment to develop a solution that promotes inclusivity and accessibility. Our goal was to create a tool that not only facilitates communication but also fosters understanding and empathy.
What it does
Lingclusive is a real-time sign language translation tool that allows users to communicate seamlessly across language barriers. By utilizing advanced hand landmark detection and gesture recognition, Lingclusive translates sign language into text and speech in real time. Users simply capture their hand gestures with their device's camera, and Lingclusive instantly provides the corresponding translation, making communication effortless and inclusive.
How we built it
Lingclusive was constructed using a combination of cutting-edge technologies and meticulous planning. We employed Mediapipe for precise hand landmark detection and OpenCV for robust image processing. PyTorch was integral in developing our gesture recognition model, which was trained on a dataset of hand keypoints to ensure accurate predictions of sign language gestures. The front-end of Lingclusive was crafted with HTML, CSS, and JavaScript, ensuring a seamless and intuitive user experience. On the back-end, we used Flask to handle prediction requests efficiently.
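To make the capture pipeline concrete, here is a minimal sketch of a Mediapipe + OpenCV hand-landmark loop. It uses the standard mediapipe Hands solution; the window title, confidence threshold, and variable names are illustrative assumptions, not taken from the Lingclusive codebase.

```python
# Minimal hand-landmark capture loop (sketch, not the Lingclusive source).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.7) as hands:
    cap = cv2.VideoCapture(0)          # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV captures frames in BGR.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = hands.process(rgb)
        if results.multi_hand_landmarks:
            hand = results.multi_hand_landmarks[0]
            # 21 keypoints, each with normalized x/y/z coordinates;
            # these would be fed to the gesture classifier.
            keypoints = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
        cv2.imshow("Lingclusive capture", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```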
Key Steps in Development:
Data Collection and Preprocessing: We collected a comprehensive dataset of hand keypoints representing various sign language gestures. This data was preprocessed to enhance the accuracy of our model.
Model Training: Using PyTorch, we trained our model on the preprocessed dataset, fine-tuning it to achieve optimal performance (see the training sketch after this list).
Integration: We integrated the model with Mediapipe and OpenCV to enable real-time gesture recognition. The front-end and back-end were connected to facilitate smooth communication and user interaction (a minimal Flask endpoint sketch also follows this list).
Testing and Optimization: Rigorous testing was conducted to identify and resolve issues. We optimized the model to run efficiently across different devices, including smartphones.
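As a rough illustration of the preprocessing and training steps above, here is a hedged sketch of a small PyTorch keypoint classifier. The architecture, class count, and wrist-relative normalization are assumptions chosen for clarity, not the exact Lingclusive model.

```python
# Keypoint-classifier training sketch (assumed architecture, for illustration).
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21   # Mediapipe hand landmarks
NUM_CLASSES = 26     # e.g. a static fingerspelling alphabet (assumed)

class GestureClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 3, 128),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

def normalize(keypoints: torch.Tensor) -> torch.Tensor:
    """Make keypoints translation-invariant by centering on the wrist."""
    # keypoints: (21, 3) tensor of x/y/z landmark coordinates.
    return (keypoints - keypoints[0]).flatten()

model = GestureClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(batch_features, batch_labels):
    # batch_features: (B, 63) normalized keypoints; batch_labels: (B,) class ids.
    optimizer.zero_grad()
    loss = criterion(model(batch_features), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```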
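And for the integration step, a minimal sketch of how a Flask back-end could serve predictions. It reuses the GestureClassifier and normalize() helper from the sketch above; the /predict route, JSON payload shape, label set, and checkpoint filename are hypothetical.

```python
# Flask prediction endpoint sketch (route and payload format are assumptions).
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
LABELS = [chr(c) for c in range(ord('A'), ord('Z') + 1)]   # placeholder label set

model = GestureClassifier()                         # from the sketch above
model.load_state_dict(torch.load("gesture_model.pt"))  # hypothetical checkpoint
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"keypoints": [[x, y, z], ...]} with 21 entries.
    keypoints = torch.tensor(request.json["keypoints"], dtype=torch.float32)
    features = normalize(keypoints).unsqueeze(0)
    with torch.no_grad():
        logits = model(features)
    return jsonify({"gesture": LABELS[int(logits.argmax())]})

if __name__ == "__main__":
    app.run(debug=True)
```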
Challenges we ran into
One of the most significant challenges was ensuring the accuracy of gesture recognition in real-time. The variability in hand shapes, orientations, and movements required our model to be highly adaptable and precise. Optimizing the performance of our model to run smoothly on various devices, especially smartphones, was another hurdle we had to overcome. Integration of different components posed its own set of challenges, necessitating careful coordination and troubleshooting. Additionally, we had to address issues related to user interface design, ensuring that Lingclusive was intuitive and accessible to all users.
Accomplishments that we're proud of
Training AI for Sign Language Recognition: We're proud of successfully training our own AI model to recognize sign language gestures, paving the way for inclusive communication tools.
Exploring Uncharted Territory: As members of the hearing community, we're proud to have ventured into the world of sign language technology, pushing boundaries and fostering understanding.
Achieving Milestones: We're proud of our journey with Lingclusive, from concept to implementation, demonstrating our commitment to accessibility and innovation.
What we learned
Throughout this project, we delved deep into the world of sign language, uncovering the complexities involved in translating it accurately and in real-time. We gained a comprehensive understanding of machine learning, computer vision, and natural language processing. The project also emphasized the importance of user-centered design, teaching us to prioritize usability and accessibility in our development process. Collaborating as a team, we learned the value of diverse perspectives and the power of collective problem-solving.
What's next for Lingclusive
Looking ahead, we aim to further enhance Lingclusive with the following initiatives:
Expand Language Support: Introduce support for more sign languages to cater to a wider global audience.
Enhance Accessibility: Improve accessibility features and user interface design based on continuous feedback and usability testing.
Integrate AI Advancements: Explore advancements in AI and machine learning to enhance gesture recognition accuracy and speed.
Community Outreach: Partner with organizations and communities to promote Lingclusive and expand its reach.
