Inspiration

When we searched for a use case where we could utilize Alexa, we came across a blog post in which blind people shared their experiences. One of them said: "I have near-misses with electric cars quite often, up to a couple of times a week, because I can't hear them." In addition, the city of Munich does not always provide acoustic and tactile signal generators at traffic lights, which would help visually impaired people cross the street. This is why we decided to tackle the problem with Alexa.

What it does

Our app combines Alexa and computer vision to guide visually impaired people across streets by recognizing traffic lights and their colors. If the app cannot recognize anything on the street, the user can also call a relative. Alternatively, the app uses the OpenStreetMap API to suggest another visually-impaired-friendly traffic light down the road for getting to the other side of the street.
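The OpenStreetMap lookup could be sketched roughly as follows. This is a minimal illustration, not our exact implementation: it builds an Overpass API query for pedestrian crossings tagged with acoustic signals (`traffic_signals:sound=yes` is a common OSM tag, though the tags our app actually queries may differ) and picks the one closest to the user.

```python
import math

# Public Overpass endpoint (assumption: our backend may use a different mirror).
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def build_query(lat, lon, radius_m=500):
    """Overpass QL: crossings with acoustic signals within radius_m metres."""
    return (
        "[out:json];"
        f'node(around:{radius_m},{lat},{lon})'
        '["highway"="crossing"]["traffic_signals:sound"="yes"];'
        "out;"
    )

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_crossing(user_lat, user_lon, elements):
    """Pick the closest crossing from an Overpass JSON 'elements' list."""
    return min(
        elements,
        key=lambda e: haversine_m(user_lat, user_lon, e["lat"], e["lon"]),
        default=None,
    )
```

To use it, send the string from `build_query(...)` as the request body of a POST to `OVERPASS_URL` (e.g. with `requests`) and pass the `elements` list from the JSON reply to `nearest_crossing`.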

How we built it

We split into three teams: frontend, AI & Alexa, and backend. We trained a TensorFlow model to recognize traffic lights and their colors. The backend receives calls from Alexa and queries data from the OpenStreetMap API. We also set up a Raspberry Pi to record pictures and videos for analysis, and implemented a web app with Next.js for debugging.
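The backend's Alexa side could look roughly like this. The JSON envelope follows the Alexa Skills Kit response format; the intent name `CrossStreetIntent` and the `detected_state` value handed over from the vision model are hypothetical names for illustration, not necessarily the ones we used.

```python
def build_alexa_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(alexa_request, detected_state):
    """Map an incoming intent plus the vision model's output to speech."""
    intent = (
        alexa_request.get("request", {})
        .get("intent", {})
        .get("name")
    )
    if intent != "CrossStreetIntent":
        return build_alexa_response("Sorry, I did not understand that.", False)
    if detected_state == "green":
        return build_alexa_response("The traffic light is green. You can cross now.")
    if detected_state == "red":
        return build_alexa_response("The traffic light is red. Please wait.")
    # Nothing recognized: offer the fallbacks (relative call, alternate crossing).
    return build_alexa_response(
        "I cannot see a traffic light. I can call a relative "
        "or guide you to an accessible crossing nearby.", False
    )
```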

Challenges we ran into

The biggest challenges were building complex interactions with Alexa and doing computer vision from a pedestrian's perspective. Most traffic datasets are recorded from the perspective of a car, so training neural networks on pedestrian-perspective footage turned out to be a bigger challenge than we initially thought. For real-time use, it would also be very helpful to control Alexa proactively (e.g., have Alexa say "the traffic light is green, you can start walking now"), and figuring out whether that is even possible was a challenge in itself.

Accomplishments that we're proud of

We're proud of our teamwork: we divided the tasks efficiently, and thanks to that we had a working prototype by Saturday morning.

What we learned

  • How to utilize Alexa skills, how they work, what the constraints are
  • How to fetch, interpret, and use data from the OpenStreetMap API
  • How to efficiently work as a large team of 5 people

What's next for Handycap

  • Improve the AI models for better recognition and thus greater safety for visually impaired people
  • Extend Alexa assistance to other areas of city life, e.g. the subway
  • Implement assistance methods beyond voice, e.g. vibrations
  • Design a portable hardware device with a camera (e.g. a Raspberry Pi with a camera and a 3D-printed case)
