Inspiration
One of our teammates has a cousin who lives in America and is deaf. Whenever he visits her, he has found it hard to communicate with her because of the sign language barrier. He and the rest of the team also wanted to work on a computer vision project, so this was a great opportunity to learn more about sign language while familiarising ourselves with building and training computer vision models using machine learning, in what was most of the team's first ever hackathon.
What it does
Our app recognises hand gestures associated with American Sign Language characters and helps people learn the language by quizzing them on random letters of the alphabet.
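As a rough illustration of how the quiz works (the actual app runs on iOS), here is a minimal Python sketch; `predict_letter` is a hypothetical stand-in for our gesture classifier rather than real app code:

```python
import random
import string

def run_quiz_round(predict_letter):
    """Play one quiz round: pick a random ASL letter and check the user's sign.

    `predict_letter` is a hypothetical callable that captures a camera frame
    and returns the letter our classifier recognises.
    """
    target = random.choice(string.ascii_uppercase)
    print(f"Sign the letter: {target}")
    guess = predict_letter()
    if guess == target:
        print("Correct!")
        return True
    print(f"Recognised {guess}, try again.")
    return False
```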
How we built it
We started our brainstorming sessions by drawing a spider diagram, highlighting all of the ideas we could potentially implement and linking them to the hackathon's themes and challenges. We discussed which programming languages we had prior experience with to guide our choice of project, including Python, Java, JavaScript, C# and Swift. However, what made us go for a project built on languages and frameworks none of us were familiar with was the cause related to one teammate's family member. We saw the potential to create an application that helps a large user base overcome this communication barrier and collectively decided that this was the problem we would aim to solve.
Challenges we ran into
We weren’t familiar with many of the languages and frameworks we used in this hackathon, in particular the machine learning frameworks such as TensorFlow. By studying documentation and online resources, we were able to form a baseline understanding of how to build and train our model.
We tried to convert a TensorFlow model and a scikit-learn model to CoreML so that our machine learning model could run natively in our iOS app, but this turned out to be difficult and time consuming due to our lack of prior experience and issues with setting up the development environment.
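For context, this is roughly what the coremltools conversion step looks like for a Keras classifier; the model path, input size and label set below are assumptions for illustration, not our exact code:

```python
import string

import coremltools as ct
import tensorflow as tf

# Assumed: a trained tf.keras image classifier saved at this path.
keras_model = tf.keras.models.load_model("asl_classifier")

labels = list(string.ascii_uppercase)  # 26 output classes, A-Z
mlmodel = ct.convert(
    keras_model,
    inputs=[ct.ImageType(shape=(1, 224, 224, 3))],  # assumed input resolution
    classifier_config=ct.ClassifierConfig(labels),
    convert_to="mlprogram",
)
mlmodel.save("ASLBuddy.mlpackage")  # drag the package into the Xcode project
```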
We also faced accuracy issues when we trained the model on our own training data. We improved this by sourcing images from third-party datasets to train the model with.
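One possible way to retrain on a merged folder of our own photos plus third-party images is a small transfer-learning setup like the sketch below; the directory layout, base network and hyperparameters are illustrative rather than the exact configuration we used:

```python
import tensorflow as tf

# Assumed layout: data/train/<letter>/*.jpg and data/val/<letter>/*.jpg,
# mixing our own photos with third-party dataset images.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning keeps training fast on a small dataset

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(26, activation="softmax"),  # one class per letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```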
Accomplishments that we're proud of
- Creating our own images to be used in the dataset as a good starting point to familiarise ourselves with machine learning.
- Building on our own dataset by sourcing third-party images to improve the accuracy of our model.
- Working towards our end goals under tight time constraints while wrestling with the various supporting modules in each framework.
What we learned
- We learnt how to create an AI model by collecting images and organising them into a dataset suitable for training.
- We gained a better understanding of the ASL alphabet.
- Through pursuing a solution that would integrate cleanly into our app, we gained experience with TensorFlow, MediaPipe, OpenCV and many other Python modules and frameworks, though we ultimately settled on Apple CoreML (a sketch of the Python prototype pipeline follows this list).
- The value and importance of teamwork.
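Below is a rough sketch of the kind of MediaPipe and OpenCV prototype mentioned above: it reads webcam frames and extracts hand landmarks that could be fed to a classifier. It is illustrative only and not our exact pipeline.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV captures frames in BGR order.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        # Flatten the 21 (x, y, z) landmark coordinates into a feature vector.
        features = [c for lm in landmarks for c in (lm.x, lm.y, lm.z)]
        # letter = classifier.predict([features])  # hypothetical classification step
    cv2.imshow("ASL Buddy prototype", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```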
What's next for ASL Buddy
We hope to increase awareness of sign language and make learning it more accessible to the world. To achieve this mission we would need to implement features in the future such as support for other sign languages, e.g. British Sign Language and Chinese Sign Language.
Ideally we also want to support word recognition and integrate a database, so that sign language enthusiasts worldwide can communicate with one another while keeping track of their own progress.
Built With
- coreml
- swift

