Inspiration
One day one of our members asked if he could get the computer to scroll using his face. At the time he was joking, but we realized that we could expand on the idea and bring it to reality.
What it does
FaceIt! tracks the movement and various properties of facial features to recognize a wide array of hands-free gestures. Our program was built with ease of use in mind for a wide audience. FaceIt! can be used as an accessibility feature for people with disabilities, or simply for convenience.
How we built it
We decided to use Python as our language of choice for its simplicity and because all of our members are familiar with it. We had heard of OpenCV and its capabilities before, so we immediately gravitated toward it for our vision processing workload. We used Haar feature-based cascade classifiers to detect several facial features, such as the eyes, the mouth, and the outline of the face as a whole. We then used other OpenCV capabilities to track those features and recognize gestures such as face tilting, turning, and winking. We used the pyautogui library to translate gestures into actions executed on the computer, and Google's speech-to-text API to transcribe user speech.
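To make the pipeline concrete, here is a minimal sketch of a detection-to-action loop of this kind. It assumes the Haar cascade XML files bundled with opencv-python, and the wink heuristic and frame threshold are illustrative placeholders rather than the exact logic we shipped.

```python
# Sketch: detect the face and eyes with Haar cascades, then map a crude
# "wink" gesture (one eye missing for several frames) to a mouse click.
import cv2
import pyautogui

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)       # default webcam
wink_frames = 0                 # consecutive frames with only one eye visible

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=10)

        # Illustrative wink heuristic: one eye disappears for several frames.
        wink_frames = wink_frames + 1 if len(eyes) == 1 else 0
        if wink_frames >= 5:
            pyautogui.click()   # translate the gesture into a computer action
            wink_frames = 0

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```

The same structure extends to the other gestures: each one is a per-frame check over the detected feature rectangles, debounced over a few frames before pyautogui fires the corresponding action.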
Challenges we ran into
We ran into struggles at multiple steps of the process, but the biggest were getting OpenCV installed and obtaining satisfactory recognition accuracy from our cascade classifiers.
Accomplishments that we're proud of
We are proud that we were able to complete a project we considered ambitious, almost too ambitious, when we began this event.
What we learned
We all gained experience with OpenCV and with carrying a full project from conception to delivery in such a restricted time frame, something none of us had done before. Some members of the team also learned to use Git for version control for the first time, and we all gained experience using Python in an unfamiliar environment.
What's next for FaceIt!
We plan to improve the accuracy of face tilt and turn recognition by using the user's eyes as additional reference points. We also plan to change the way different gestures interact so that multiple gestures cannot be inadvertently recognized at once, and to refine the algorithm behind the auto-lock and auto-unlock features to eliminate false negatives in face recognition.
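One possible way to use the eyes as reference points for tilt, sketched here as a planned improvement rather than current behavior, is to estimate the head's roll angle from the line connecting the two detected eye centers. The function name and thresholding left to the caller are assumptions for illustration.

```python
import math

def estimate_tilt_degrees(eye_boxes):
    """eye_boxes: two (x, y, w, h) rectangles from the eye cascade, in the
    face ROI's coordinates. Returns the roll angle of the line between the
    eye centers, in degrees (0 when the eyes are level)."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eye_boxes, key=lambda b: b[0])
    dx = (x2 + w2 / 2) - (x1 + w1 / 2)   # horizontal distance between eye centers
    dy = (y2 + h2 / 2) - (y1 + h1 / 2)   # vertical offset between eye centers
    return math.degrees(math.atan2(dy, dx))
```

A tilt gesture would then only trigger when this angle stays beyond some threshold for several consecutive frames, which should also help keep it from being confused with other gestures.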