Inspiration
This project was inspired by a team member's childhood friend whose mother is hard of hearing. She teaches American Sign Language on Long Island and, as the friend was growing up, often spoke about how great it would be to have a mainstream way for everyone to learn sign language. She inspired us to build Henlingo so that hard-of-hearing individuals can more fully participate in a world that, while increasingly accepting, has not yet embraced all disabled communities.
What it does
Henlingo is a web app that gamifies learning American Sign Language: image recognition checks whether each of a user's signed answers is correct, and as users answer correctly they earn progress, level up, and unlock new lessons to keep learning.
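The level-up and lesson-unlock mechanic described above could be sketched as follows. This is a minimal illustration, not the actual implementation; the XP threshold, function names, and one-lesson-per-level rule are all assumptions made for the example.

```typescript
// Hypothetical sketch of Henlingo-style progress tracking.
// LEVEL_XP and the unlock rule are illustrative assumptions.

interface Progress {
  xp: number;
  level: number;
}

// Assumed threshold: a new level every 100 XP.
const LEVEL_XP = 100;

// Award XP for a correct answer and recompute the level.
function recordCorrectAnswer(p: Progress, xpGained: number): Progress {
  const xp = p.xp + xpGained;
  return { xp, level: Math.floor(xp / LEVEL_XP) + 1 };
}

// Assumed unlock rule: one lesson becomes available per level.
function unlockedLessons(level: number, lessons: string[]): string[] {
  return lessons.slice(0, level);
}
```

Keeping the progress state as plain data like this makes it easy to persist per user and to check on the frontend which lessons to render as locked.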
How we built it
We initially designed the frontend in Figma, then built it with Next.js, Tailwind, and shadcn/ui. For the backend we used TypeScript and TensorFlow.
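On the recognition side, the core step is mapping the model's per-class scores to an ASL label. A minimal TypeScript sketch, assuming the TensorFlow model outputs one confidence score per sign (the label list, threshold, and function name are illustrative assumptions, not the project's actual code):

```typescript
// Hypothetical post-processing of model output: take the
// highest-scoring class, and only accept it above a confidence
// threshold so uncertain frames are not graded as wrong answers.

const ASL_LABELS = ["A", "B", "C"]; // truncated label set for the sketch

function classifySign(scores: number[], threshold = 0.8): string | null {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  // Return null when the model is not confident enough.
  return scores[best] >= threshold ? ASL_LABELS[best] : null;
}
```

The threshold acts as a simple knob: raising it reduces false "correct" marks at the cost of asking the user to hold the sign longer.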
Challenges we ran into
We ran into challenges on the frontend: some elements from the initial design ended up not working out. Our biggest challenge, though, was keeping the time needed to capture the user's answer short and integrating that capture flow with the backend.
What's next for Henlingo
We plan to add more lessons and deepen our ASL coverage to better help users learn. Eventually we want to build up to lessons that teach full conversations, and deploy the app publicly.
Built With
- figma
- next.js
- shadcn
- tailwind
- tensorflow
- typescript