Inspiration
Existing mobility aids for visually impaired people all have drawbacks. A guide dog doesn't know where you want to go until you direct it, and a cane only lets you probe the area immediately around you. We decided this problem could be addressed with image recognition technology.
What it does
The app lets the user point the phone's camera at the area in front of them. Each captured image is processed with image recognition, and the user is then notified by voice whether there are obstacles ahead or it is safe to proceed.
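As a rough illustration of the voice-notification step, the sketch below turns a list of detected labels into a spoken message using Android's TextToSpeech. The OBSTACLE_LABELS list and the class name are assumptions for illustration; the real app would tune these to the labels the labeler actually returns.

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;

import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Speaks a warning or an all-clear based on the labels detected in a frame.
public class VoiceNotifier {
    // Hypothetical set of labels we treat as obstacles.
    private static final List<String> OBSTACLE_LABELS =
            Arrays.asList("Car", "Bicycle", "Person", "Stairs", "Furniture");

    private final TextToSpeech tts;

    public VoiceNotifier(Context context) {
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        });
    }

    public void announce(List<String> labels) {
        boolean obstacle = false;
        for (String label : labels) {
            if (OBSTACLE_LABELS.contains(label)) {
                obstacle = true;
                break;
            }
        }
        String message = obstacle
                ? "Obstacle ahead, please stop."
                : "Path is clear, safe to proceed.";
        tts.speak(message, TextToSpeech.QUEUE_FLUSH, null, "eyeguide-announcement");
    }
}
```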
How we built it
We built the Android app in Java using Android Studio, and we used Google Firebase's image labeling API to classify what the camera sees.
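A minimal sketch of the labeling call is below. It assumes the Firebase ML Kit for Android API that was current at the time (since superseded by the standalone ML Kit SDK), so the package and class names should be treated as an assumption rather than the exact code we shipped.

```java
import android.graphics.Bitmap;
import android.util.Log;

import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.label.FirebaseVisionImageLabel;
import com.google.firebase.ml.vision.label.FirebaseVisionImageLabeler;

// Runs Firebase's on-device image labeler on a captured frame and logs
// each label with its confidence score.
public class ObstacleLabeler {

    public void labelFrame(Bitmap frame) {
        FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(frame);
        FirebaseVisionImageLabeler labeler =
                FirebaseVision.getInstance().getOnDeviceImageLabeler();

        labeler.processImage(image)
                .addOnSuccessListener(labels -> {
                    for (FirebaseVisionImageLabel label : labels) {
                        Log.d("EyeGuide", label.getText() + " (" + label.getConfidence() + ")");
                    }
                })
                .addOnFailureListener(e -> Log.e("EyeGuide", "Labeling failed", e));
    }
}
```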
Challenges we ran into
One major challenge was getting the Camera API to do what we needed. Ideally the app would process frames and return results in real time, but none of the APIs we found supported that use case directly. We quickly figured out how to take individual pictures, but struggled to automate the capture in the background (a sketch of the loop we were aiming for is below). We also had some trouble setting up the Vision API from Google Cloud for image detection.
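One possible shape for that periodic capture-and-classify loop is sketched here, using a Handler to reschedule itself. The captureFrame() and classifyFrame() helpers are hypothetical stand-ins for the camera capture and the Firebase labeling call; this is not the solution we finished at the hackathon.

```java
import android.graphics.Bitmap;
import android.os.Handler;
import android.os.Looper;

// Periodically grabs a frame and hands it off for labeling.
public class FrameLoop {
    private static final long INTERVAL_MS = 2000; // assumed capture interval

    private final Handler handler = new Handler(Looper.getMainLooper());

    private final Runnable tick = new Runnable() {
        @Override
        public void run() {
            Bitmap frame = captureFrame();   // hypothetical: grab a still from the camera
            if (frame != null) {
                classifyFrame(frame);        // hypothetical: send the bitmap to the labeler
            }
            handler.postDelayed(this, INTERVAL_MS); // schedule the next capture
        }
    };

    public void start() { handler.post(tick); }
    public void stop()  { handler.removeCallbacks(tick); }

    // Placeholders for the real camera and labeling code.
    private Bitmap captureFrame() { return null; }
    private void classifyFrame(Bitmap frame) { }
}
```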
Accomplishments that we're proud of
Most of us hadn't used Java in a long time, yet we quickly got the hang of it again and were able to write the app in Android Studio. Seeing the ideas we imagined come to life was also a big achievement.
What we learned
We learned a ton in the last 36 hours: how to devise a plan for tackling the problem we wanted to solve, break it into development steps, and manage our time efficiently. We picked up new technologies in a short amount of time and learned to persevere through frustration, handling each task with grit.
What's next for EyeGuide
In the near future we want to enhance the AI to identify more things of value on the road, such as traffic lights, to further improve the user's experience. We are also planning a physical harness that could let the person carry the phone hands-free.