Inspiration

This year has been very different from what we are all used to. With stores, malls, restaurants, and sports all shut down, we had to find new hobbies and ways to stay entertained. One of the most popular has been learning a new language. While many apps such as Duolingo offer this service, there are considerably fewer apps that teach American Sign Language! Those that do usually have a repertoire of videos, but they miss one important thing - immediate feedback! Our app delivers on this growing market and the need for rapid feedback in a fun, easy, and engaging way!

What it does

Our app - ASLPlay - teaches you the ASL alphabet by first showing you a video that covers a small section of the alphabet. An interactive game then follows, asking you to sign one of the letters you were just taught. You can either upload a picture or take one with your webcam. The picture is sent to our machine learning model in Azure, which determines whether the correct letter and hand positioning were used. If you get the majority of the questions in that quiz correct... CONGRATULATIONS! You've just earned a badge for your efforts! All your badges can be viewed on your profile. Once the quiz is finished, another video is shown (covering the next part of the alphabet), followed by another quiz, and so on.

How I built it

The front end of this app was built with React, HTML, CSS, and Bootstrap. We used Azure's AutoML to train on images and deploy a Logistic Regression classification model. We then send requests to this model to predict the label of the user's captured image.
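For a sense of what that looks like, here is a minimal sketch of a scoring request in TypeScript. The endpoint URL, key, and request/response shapes are placeholders, since the exact contract depends on how the Azure endpoint is deployed:

    // Hypothetical scoring call to the deployed Azure ML endpoint.
    // SCORING_URL and API_KEY are placeholders; the exact JSON shape
    // depends on how the AutoML model was deployed.
    const SCORING_URL = "https://<your-endpoint>/score";
    const API_KEY = "<your-api-key>";

    async function predictLetter(pixels: number[]): Promise<string> {
      // pixels: 784 greyscale values (0-255), one per pixel of a 28x28 image
      const response = await fetch(SCORING_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${API_KEY}`,
        },
        body: JSON.stringify({ data: [pixels] }),
      });
      if (!response.ok) {
        throw new Error(`Scoring request failed: ${response.status}`);
      }
      const result = await response.json();
      return result[0]; // assuming the service returns an array of labels
    }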

Challenges I ran into

While we faced several challenges (such as understanding how to send requests to a deployed model to get predictions, capturing an image from the user's webcam, and connecting the different pieces of the project together), one that stood out was sending the image to the machine learning model. The training images were 28x28 pixels, flattened into an array of 784 columns, each holding a greyscale value (0-255) for one pixel. This meant we had to convert the image taken in the app into the same format before feeding it into the model for a response.
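Here is a rough sketch of that conversion in the browser, assuming the captured picture is available as an HTMLImageElement (the function name and the luminance weights are ours, for illustration):

    // Downscale a captured image to 28x28 and flatten it into 784
    // greyscale values (0-255), matching the training data format.
    function imageToPixelArray(image: HTMLImageElement): number[] {
      const canvas = document.createElement("canvas");
      canvas.width = 28;
      canvas.height = 28;
      const ctx = canvas.getContext("2d")!;
      ctx.drawImage(image, 0, 0, 28, 28); // resize to 28x28
      const { data } = ctx.getImageData(0, 0, 28, 28); // RGBA bytes
      const pixels: number[] = [];
      for (let i = 0; i < data.length; i += 4) {
        // Standard luminance weighting to collapse RGB into one grey value
        const grey = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
        pixels.push(Math.round(grey));
      }
      return pixels; // 28 * 28 = 784 values
    }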

Accomplishments that I'm proud of

The beautiful, interactive front-end display and the web app's ability to take pictures from the user's webcam. This was also my very first hackathon. Normally I might have shied away from trying things I was uncomfortable with, but this weekend I played around with many things (like the cloud) that were completely unfamiliar to me. Although I am by no means an expert now, I was able to pinpoint weaknesses and find a direction for improvement, which I wouldn't have been able to do otherwise.
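For the curious, grabbing a frame from the webcam in the browser boils down to something like this minimal sketch (names are illustrative, and a real component would also handle errors and UI state):

    // Minimal sketch: grab one frame from the webcam as a PNG data URL.
    async function captureSnapshot(video: HTMLVideoElement): Promise<string> {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      video.srcObject = stream;
      await video.play();
      const canvas = document.createElement("canvas");
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext("2d")!.drawImage(video, 0, 0);
      stream.getTracks().forEach((track) => track.stop()); // release the camera
      return canvas.toDataURL("image/png");
    }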

What I learned

We learned about the power of Azure and data pipelines, how to interact with deployed machine learning models, and how back-end and front-end developers collaborate.

What's next for ASLPlay

ASLPlay has many potential routes for the future! Since ASLPlay currently only covers the ASL alphabet, we can add more modules (e.g. fruits, farm animals, vehicles). To make it even more game-like, these modules could be locked at the start; to unlock one, you collect all the badges from the previous module. Another route could be a "never ending" mode (sketched below). As in the quizzes, you would be asked for a letter, take a picture, and receive feedback (correct or incorrect). In this mode, however, you have 5 lives: every incorrect answer costs a life, and the game ends when none are left. You can then try to beat your high score, compare high scores with your friends, and even earn badges for passing milestones (e.g. a high score of 50/100/200). ASLPlay could also be a good addition to any of the currently-existing ASL apps: combining their repertoire of informational videos and lessons with ASLPlay's fun games and quizzes could drastically increase audience engagement.
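Since this mode does not exist yet, the following is purely an illustrative sketch of how the life-and-score bookkeeping could look:

    // Illustrative sketch of the planned "never ending" mode:
    // the player keeps answering until all 5 lives are gone.
    interface GameState {
      lives: number;
      score: number;
    }

    function startGame(): GameState {
      return { lives: 5, score: 0 };
    }

    function applyAnswer(state: GameState, correct: boolean): GameState {
      return correct
        ? { ...state, score: state.score + 1 } // correct: score goes up
        : { ...state, lives: state.lives - 1 }; // incorrect: lose a life
    }

    function isGameOver(state: GameState): boolean {
      return state.lives <= 0;
    }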

Built With

azure, bootstrap, css, html, react
