Inspiration
Working part-time as an elementary school Computer Science teacher has shown me the challenges of teaching. A large part of the curriculum is age-appropriate, "Khan Academy"-style videos, with me present as the tutor constantly working to fill in the gaps. Every child is different and would likely benefit from individual tutoring, yet I primarily work in Fairfax County Public Schools, one of the wealthiest public school systems, and even there ideal class sizes are not affordable. Classes with as few as ten students still produce unsatisfactory results, because one tutor cannot help everyone to the extent they deserve within the allotted class time. Being digital, AI Tutor could help anyone worldwide with just a camera on their phone or laptop. Human tutoring time could then be spent on the more difficult conceptual problems, boosting educational results for children.
Smarter app-based AI therapists (which already exist and are in use, e.g., WoeBot) need to be able to recognize human emotions and body language so they can better treat people.
Passive videos suffer from monotony, boredom, lack of relevance, and a one-size-fits-all approach. AI Tutor improves learning by changing videos on the fly to combat deficiencies as they appear. If we all watched every video on Khan Academy, we would become incredibly educated. Yet even though Khan Academy costs nothing, we don't. Why? The videos are boring. I dream of AI Tutor taking off and improving sites like Khan Academy, whose current approach is often a single video somehow intended to serve every individual person on the globe.
What it does
AI Tutor recognizes minute facial expressions and uses them to tailor videos and other media on the fly, customizing the learning experience for each individual child in a new way.
How I built it
Built in C#, with Visual Studio as the IDE. Google Cloud Vision powers the facial-expression recognition, and OpenCV (wrapped by Emgu CV) handles webcam access and still capture.
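The core loop can be sketched roughly as follows. This is a minimal illustration, not the actual project code: it assumes the Emgu.CV and Google.Cloud.Vision.V1 NuGet packages, a default webcam, and Google Cloud credentials set via the GOOGLE_APPLICATION_CREDENTIALS environment variable; all names are illustrative.

```csharp
// Sketch: grab a webcam still with Emgu CV, then ask Google Cloud
// Vision for facial-expression likelihoods on any detected faces.
// Requires a webcam and valid Google Cloud credentials to run.
using System;
using Emgu.CV;
using Google.Cloud.Vision.V1;

class ExpressionCheck
{
    static void Main()
    {
        // Open the default webcam (device 0) and capture one frame.
        using var webcam = new VideoCapture(0);
        using Mat frame = webcam.QueryFrame();
        CvInvoke.Imwrite("frame.jpg", frame);

        // Send the still to Cloud Vision's face detection.
        var client = ImageAnnotatorClient.Create();
        var image = Image.FromFile("frame.jpg");
        foreach (FaceAnnotation face in client.DetectFaces(image))
        {
            // Likelihoods range from VeryUnlikely to VeryLikely;
            // these could drive the choice of the next video segment.
            Console.WriteLine($"Joy: {face.JoyLikelihood}, " +
                              $"Sorrow: {face.SorrowLikelihood}, " +
                              $"Surprise: {face.SurpriseLikelihood}");
        }
    }
}
```

In practice this capture-and-classify step would run periodically while a video plays, with the returned likelihoods used to decide whether to switch content.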
Challenges I ran into
Issues and internal bugs in the Google Cloud libraries, OpenCV, and Visual Studio meant that expressions other than happiness are not recognized, and the only interface is what is printed to the console. Time constraints meant no video content was recorded. The webcam is also the only widely available biometric sensor to work with.
Accomplishments that I'm proud of
Using an API for the first time and actually having a working demo!
What I learned
I learned how to code better and used an API for the first time!
What's next for AI Tutor
Better recognition/understanding of facial expressions, bug fixes, speed increases, and more.