Inspiration
With our teammates' diverse backgrounds, we decided to tackle a problem facing minorities in the community, specifically people with disabilities. We noticed that it was always the minorities raising their voices, while the real problem was that the majority was not listening, or showed no interest in listening. With those ideas in mind, we created a project that encourages people to take action and learn more about people with disabilities in their community.
What it does
Our project is an educational game designed to help the user learn one of the most essential building blocks of American Sign Language (ASL): the fingerspelling alphabet. The game generates a random "grape" (containing a letter) for the player to match with their real-world hand, "catching" the grape before it hits the ground. Each grape that the player catches scores them points, with the goal being to catch as many grapes and score as many points as possible.
The program detects gestures using Google MediaPipe, which uses the computer's built-in webcam to track hand movements. The detection of the different hand signs was custom-programmed by us: we take the position values of each of the 21 distinct hand "nodes" (fingertips, joints, base of the palm, etc.) identified by the MediaPipe API and compare their locations relative to each other.
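As a rough illustration (not our exact code), the relative-position rules look something like the sketch below: MediaPipe reports each node as a normalized landmark, and a letter such as "B" can be approximated by checking that each fingertip sits above its middle joint while the thumb is folded across the palm. The `detect_letter` function name and the specific comparisons are illustrative assumptions.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def detect_letter(frame_bgr, hands):
    """Return a guessed ASL letter for the hand in the frame, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark  # 21 normalized (x, y, z) points

    # Example hand-written rule: for "B", the four fingers are extended
    # (each fingertip is above its middle joint, i.e. has a smaller y) and
    # the thumb is folded across the palm. Real rules need tuning per letter.
    tips = [mp_hands.HandLandmark.INDEX_FINGER_TIP,
            mp_hands.HandLandmark.MIDDLE_FINGER_TIP,
            mp_hands.HandLandmark.RING_FINGER_TIP,
            mp_hands.HandLandmark.PINKY_TIP]
    pips = [mp_hands.HandLandmark.INDEX_FINGER_PIP,
            mp_hands.HandLandmark.MIDDLE_FINGER_PIP,
            mp_hands.HandLandmark.RING_FINGER_PIP,
            mp_hands.HandLandmark.PINKY_PIP]
    fingers_up = all(lm[t].y < lm[p].y for t, p in zip(tips, pips))
    thumb_folded = lm[mp_hands.HandLandmark.THUMB_TIP].x > lm[mp_hands.HandLandmark.THUMB_IP].x
    if fingers_up and thumb_folded:
        return "B"
    # ...similar relative-position checks for the other letters...
    return None
```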
How we built it
The hand sign detection program is built on Google's MediaPipe. Using the webcam, we extract the position values of specific points on the hand (fingertips, joints, etc.). From those position values, we worked out where the fingers sit relative to one another for each letter of the ASL fingerspelling alphabet. The program then analyzes the user's hand through the webcam and outputs the letter that most closely matches their hand position. For the game's UI, all graphics, colours, and animations were drawn with Tkinter. OpenCV was used in tandem with Tkinter to display the webcam feed while the hand signs are analyzed.
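A minimal sketch of the general OpenCV-in-Tkinter pattern is below; it assumes the Pillow library as the bridge between OpenCV frames and Tkinter images, which the writeup does not mention, and it is not our exact code.

```python
import cv2
import tkinter as tk
from PIL import Image, ImageTk  # Pillow converts OpenCV frames for Tkinter

root = tk.Tk()
video_label = tk.Label(root)
video_label.pack()
cap = cv2.VideoCapture(0)  # default webcam

def update_feed():
    ok, frame = cap.read()
    if ok:
        # OpenCV gives BGR pixels; Tkinter (via Pillow) expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        photo = ImageTk.PhotoImage(Image.fromarray(rgb))
        video_label.configure(image=photo)
        video_label.image = photo  # keep a reference so it isn't garbage-collected
    root.after(30, update_feed)   # refresh roughly 30 times per second

update_feed()
root.mainloop()
```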
Challenges we ran into
Recognizing the shape of the hand and matching it with ASL
Google's MediaPipe helped us track the movement and shape of the hand, but it did not have built-in tools to compare against ASL signs, or, for that matter, to create custom gestures at all. We had to build a whole new program that recognizes and compares the positions of each individual node in order to interpret the data as sign language.
How to connect the front end and back end/GUI
Our second problem was connecting the back end and the front end. We asked many different mentors, who suggested many different approaches, but each one presented its own new, far more complicated roadblocks. We were on the verge of redoing the entire project in JavaScript when we found a workaround that was closer to our level as coders, but still challenging: using Tkinter as the main GUI. We ran into a lot of problems with Tkinter as well, not least getting Tkinter animations to run at the same time as our hand recognition program. Little problems with graphics appearing out of nowhere or not disappearing made the GUI programming really tedious and complicated.
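The usual way to keep a Tkinter animation running alongside other work is to schedule short steps with `after()` instead of blocking the main loop. The sketch below shows that idea with a random stub standing in for the recognizer; the stub, the target letter, and the layout are assumptions for illustration only.

```python
import random
import tkinter as tk

TARGET_LETTER = "A"  # the letter on the current grape (hypothetical)

def detect_letter():
    # Stand-in for the MediaPipe-based recognizer; returns a guess or None.
    return random.choice(["A", "B", None])

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=500, bg="white")
canvas.pack()
grape = canvas.create_oval(180, 0, 220, 40, fill="purple")
score_text = canvas.create_text(60, 20, text="Score: 0")
score = 0

def game_tick():
    global score
    canvas.move(grape, 0, 5)                   # animate the falling grape
    if detect_letter() == TARGET_LETTER:       # poll the recognizer each tick
        score += 1
        canvas.itemconfigure(score_text, text=f"Score: {score}")
        canvas.coords(grape, 180, 0, 220, 40)  # caught: reset the grape to the top
    root.after(50, game_tick)                  # reschedule; never block mainloop

game_tick()
root.mainloop()
```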
Accomplishments that we're proud of
The accomplishment our team is most proud of is the GUI portion of the application. We struggled so much to find the right way to connect the front end and the back end of the project that when we settled on Tkinter, it felt like half the project was finished. However, we soon learned that designing the GUI was the hardest part. After hours of struggling, asking mentors, and reading documentation, we were finally able to merge the different pieces of code and finish the GUI.
What we learned
The biggest lesson we took away is a method of problem-solving: with the GUI and the recognition program, one problem brought another, and we learned to tackle them one at a time. Other fascinating things we learned include Tkinter, using OpenCV with Tkinter, and working with GitHub. Plus, as a bonus, each of us walked away from this weekend with a new human language under our belts as well: sign language!
What's next for FingerJelly
Our team has several ideas for the future of FingerJelly. Currently, FingerJelly can only interpret gestures from the left hand; with more time, we would program it to recognize both right- and left-hand input. The next feature we would add is an Active Teaching System (ATS), which would stream live video on the screen with a picture overlay so that the user can try to match the picture with their hands. Smaller features we were also considering include streaming the user's live video during the main game, and creating different levels that increase the speed of the falling grapes, allow for multiple lives, or even have the user spell out a full word instead of just one letter.
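One possible route to right- and left-hand support, sketched below under our own assumptions rather than as a finished plan, is to read the handedness label MediaPipe reports and mirror the x coordinates of right-hand landmarks so the existing left-hand rules still apply. Note that MediaPipe's labels assume a mirrored (selfie) view, so they may be swapped depending on whether the frame is flipped.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def landmarks_as_left_hand(frame_bgr):
    """Return landmark points normalized to a left-hand orientation, or None."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    points = [(p.x, p.y, p.z) for p in results.multi_hand_landmarks[0].landmark]
    label = results.multi_handedness[0].classification[0].label  # "Left" or "Right"
    if label == "Right":
        # Mirror the x coordinates so the existing left-hand rules still apply.
        points = [(1.0 - x, y, z) for (x, y, z) in points]
    return points
```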