Inspiration
Virtual home assistants like Alexa Smart Home and Google Home rely solely on voice commands. We wanted to make systems like these accessible to people who are speech- and hearing-impaired as well. This is how we came up with the idea for Signix: a sign-language-powered virtual home assistant that lets you control your household appliances with a simple signed command.
What it does
Signix uses a webcam that captures images of your hands as you sign in American Sign Language (ASL). It interprets these gestures to perform actions like turning lights on and off, opening and closing doors, and adjusting the thermostat temperature.
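Under the hood, the browser side of this capture step can be sketched as below. This is a minimal illustration rather than Signix's actual code; in particular, the `/predict` route name is our assumption.

```javascript
// Minimal sketch: grab a webcam frame with an HTML canvas and POST it
// to the backend for classification. The "/predict" route is a
// hypothetical name, not necessarily the one Signix uses.
const video = document.querySelector('video');
const canvas = document.createElement('canvas');

async function startCamera() {
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();
}

async function captureAndSend() {
  // Draw the current video frame onto the canvas.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);

  // Encode the frame as a JPEG and upload it.
  const blob = await new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg'));
  const form = new FormData();
  form.append('image', blob, 'frame.jpg');
  await fetch('/predict', { method: 'POST', body: form });
}
```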
How we built it
We use a webcam and an HTML canvas to capture the hand signs, and a Google Cloud AutoML image classification model to identify them. Once the model predicts the user's intent, we send this information over WiFi to a CC3200 launchpad, which performs actions like turning an LED on or off (imitating a light bulb), opening and closing a prototype door with a servo motor, or displaying the thermostat temperature in a PuTTY terminal when the user asks for it hotter or colder.
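The glue between these steps might look roughly like the following Node.js/Express sketch. This is our illustration rather than the project's actual code: the multer upload middleware, the placeholder project and model IDs, and the launchpad's IP address and `/action` endpoint are all assumptions.

```javascript
// Sketch of the server-side pipeline: classify an uploaded frame with
// AutoML, then forward the predicted intent to the CC3200 over WiFi.
// PROJECT_ID, MODEL_ID, LAUNCHPAD_IP, and the /action route are
// illustrative placeholders.
const express = require('express');
const multer = require('multer');          // assumed choice for multipart uploads
const http = require('http');
const automl = require('@google-cloud/automl').v1;

const app = express();
const upload = multer();                   // keeps uploads in memory
const client = new automl.PredictionServiceClient();

const PROJECT_ID = 'your-gcp-project';
const MODEL_ID = 'ICN0000000000000000';    // AutoML image classification model
const LAUNCHPAD_IP = '192.168.1.50';       // assumed local address of the CC3200

app.post('/predict', upload.single('image'), async (req, res) => {
  // Ask the AutoML model which sign the frame contains.
  const [response] = await client.predict({
    name: client.modelPath(PROJECT_ID, 'us-central1', MODEL_ID),
    payload: { image: { imageBytes: req.file.buffer } },
  });
  const intent = response.payload[0].displayName; // e.g. "light_on"

  // Relay the intent to the launchpad, which drives the LED, servo, or UART.
  http.get(`http://${LAUNCHPAD_IP}/action?intent=${intent}`)
    .on('error', (err) => console.error('launchpad unreachable:', err.message));

  res.json({ intent });
});

app.listen(3000);
```

On the hardware side, the CC3200 would parse the intent from the incoming request and toggle the LED, move the servo, or write the new temperature to its serial output accordingly.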
Challenges we ran into
The biggest challenge we faced was connecting the different components of our project. We had some difficulty working with different image formats and sending the captured image from our front end to our machine learning model. Another challenge was connecting the application to our hardware and sending the output of our trained model to the launchpad.
Accomplishments that we're proud of
Our biggest accomplishment was overcoming the challenges we faced integrating the hardware and software. We were also able to train our classification model to a precision and recall of 100%. We are proud to have built a more inclusive model by collecting our dataset from people of various genders, races, and skin tones.
What we learned
We learned a great deal about embedded systems. Even though getting the hardware to talk to our software was a challenge, we learned a lot about transferring data between our web UI and the launchpad. Through this project, we also learned quite a bit about American Sign Language!
What's next for Signix
We are really looking forward to expanding Signix to make it even more versatile and user-friendly. Firstly, we want to enable the web UI to record a video stream so that we can capture motion rather than static images. Secondly, we want to expand the range of appliances that Signix can control. We are very excited about the impact Signix can have on speech- and hearing-impaired communities.
Built With
- automl
- c
- cc3200
- css
- energia
- express.js
- html
- javascript
- node.js