Inspiration
Empowering the Deaf and hard-of-hearing community by giving voice to their gestures.
What it does
We recognize a set of standard ASL signs and gestures and convert them into text and speech.
How we built it
We captured hand-tracking data from a Leap Motion controller and extracted features from it for a machine-learning classifier. We then built our own dataset for training and used a Python script to recognize signs in real time, as sketched below.
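The page does not describe the exact features the team extracted, so the following is only a minimal sketch of the idea: turning one frame of Leap Motion hand data (palm position plus five fingertip positions) into a fixed-length feature vector. The specific features shown here, palm-relative fingertip offsets and pairwise fingertip distances, are illustrative assumptions, not the team's actual feature set.

```python
import numpy as np

def extract_features(palm_pos, fingertip_pos):
    """Turn one frame of hand data into a fixed-length feature vector.

    palm_pos:      (3,) array - palm position reported by the controller
    fingertip_pos: (5, 3) array - one row per fingertip (thumb..pinky)

    These particular features are an assumption for illustration only.
    """
    palm_pos = np.asarray(palm_pos, dtype=float)
    tips = np.asarray(fingertip_pos, dtype=float)

    # Fingertip positions relative to the palm, so the feature does not
    # depend on where the hand happens to sit above the controller.
    offsets = tips - palm_pos                      # (5, 3)

    # Pairwise distances between fingertips roughly capture hand shape.
    diffs = tips[:, None, :] - tips[None, :, :]    # (5, 5, 3)
    dists = np.linalg.norm(diffs, axis=-1)         # (5, 5)
    pair_dists = dists[np.triu_indices(5, k=1)]    # 10 unique pairs

    return np.concatenate([offsets.ravel(), pair_dists])  # 15 + 10 = 25 values
```

In the real-time script, each incoming frame would be converted this way and passed to the trained classifier's predict method, with the predicted sign then rendered as text and speech.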
Challenges we ran into
Deciding which features to extract, building those features, and creating our own training dataset were challenging.
Accomplishments that we're proud of
Building a dataset with more than 60,000 instances was an accomplishment. This was our first project using machine learning and hardware.
What we learned
We learned how to apply multi-class classification by building our own training and test data and training a classifier on it.
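The page does not say which library the classifier came from, so this is a minimal sketch of the multi-class training and evaluation step using scikit-learn (an assumption) with synthetic stand-in data in place of the team's ~60,000-instance dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one feature vector per recorded frame, y: the ASL sign label.
# Random data stands in here for the real recorded dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(60_000, 25))
y = rng.integers(0, 10, size=60_000)   # e.g. 10 sign classes (assumed count)

# Hold out part of the data so the classifier is evaluated on frames
# it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```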
What's next for ASL recognition
Adding more complex gestures that make use of the dynamic motion properties of the palm.
Built With
- amazon-web-services
- leap-motion
- machine-learning
- numpy
- python


