Sights

Have you ever thought about what your world would be like if you could not see? With darkness all around and nothing visible, how would you make your way through crowds, streets, or your own neighborhood?

This is what many visually impaired people experience. Some are born this way and adapt their other senses remarkably well, while others lose their vision to an accident or a disease.

We have a solution for these special souls. With Sights, all you need is a camera phone and an internet connection to know what is in front of you. Just open the app in user mode and follow the instructions.

The app describes what your phone's back camera is facing through audio output. A unique algorithm picks out the largest object, or the one likely to be most dangerous, from everything it detects. For example, if it detects a bike, a car, and a bus, it announces them in this order: bus -> car -> bike.

This way, it alerts visually impaired users to impending danger first.
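The ordering described above can be sketched as a simple priority sort. This is a minimal illustration, not the app's actual algorithm: the `DANGER_PRIORITY` values are invented for the example, and the real app presumably derives them from object size and type as detected via Clarifai.

```python
# Hypothetical danger ranking used to decide the order in which
# detected objects are spoken aloud. Priority values are illustrative
# assumptions, not the app's real tuning.
DANGER_PRIORITY = {"bus": 3, "car": 2, "bike": 1}

def rank_detections(labels):
    """Sort detected object labels from most to least dangerous.

    Labels not in the priority table fall to the end (priority 0).
    """
    return sorted(labels, key=lambda label: DANGER_PRIORITY.get(label, 0), reverse=True)

print(" -> ".join(rank_detections(["bike", "car", "bus"])))  # bus -> car -> bike
```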

Clarifai's API helps make our app accurate and efficient. Our app is ever evolving and gets better with time, and we have Clarifai to thank for that: they are continuously training their models to be more accurate and return faster results. The sky is the limit, and we are just getting started. Try Sights now!

Functionality-

  1. User mode - Gives users an audio description of whatever their phone's back camera is facing.
  2. Trainer mode - Lets volunteers train the model. Usage: take a picture, speak the name of the most conspicuous object in it, and that's it! We are developing functionality that trains the model directly from the trainer's input.

(https://i.imgur.com/56EUw5D.jpg)

P.S. - The customtrainer.py file is used to train custom_concepts for trainers. Its usage is very similar to that of the example.py file in the python directory at (https://github.com/Clarifai/hackathon).
