Inspiration

We wanted to give the visually impaired the opportunity to experience more of the world than they could before, and to offer them an easier way of navigating their surroundings.

What it does

Echo Canvas is an Android app that uses the smartphone's camera to detect and describe significant objects in view for visually impaired users. It also uses an Arduino ultrasonic sensor to measure how far a detected object is from the user. Both the object descriptions and the distance readings are relayed to the user audibly.
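
The description above doesn't spell out how the audio side works, so here is a minimal sketch of the relay, assuming Android's built-in TextToSpeech engine; the AudioRelay class and its describe() helper are illustrative names, not taken from our repos.

    import android.content.Context
    import android.speech.tts.TextToSpeech
    import java.util.Locale

    // Hypothetical helper that turns a detected label plus a distance reading into speech.
    class AudioRelay(context: Context) : TextToSpeech.OnInitListener {
        private val tts = TextToSpeech(context, this)

        override fun onInit(status: Int) {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US)
            }
        }

        // e.g. describe("chair", 1.5) speaks "chair, about 1.5 meters ahead"
        fun describe(label: String, distanceMeters: Double) {
            val phrase = "$label, about $distanceMeters meters ahead"
            // QUEUE_FLUSH replaces any pending announcement so the user always hears the latest reading.
            tts.speak(phrase, TextToSpeech.QUEUE_FLUSH, null, "echo-canvas")
        }

        fun shutdown() = tts.shutdown()
    }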

How we built it

We used Android Studio as our main IDE to build the Android app and to integrate services such as the Firebase ML Vision Kit and the Arduino libraries, starting from example code. Firebase ML Vision identifies the objects seen through the smartphone's camera, and an Arduino kit's ultrasonic sensor gives us a distance reading of up to 28 meters.
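
As a rough illustration of the Firebase ML Vision piece, a single camera frame can be labeled with the on-device image labeler roughly as follows; the labelFrame() helper and the 0.6 confidence cut-off are our own choices for this sketch, not copied from the repos.

    import android.graphics.Bitmap
    import com.google.firebase.ml.vision.FirebaseVision
    import com.google.firebase.ml.vision.common.FirebaseVisionImage

    // Runs on-device image labeling on one camera frame and reports the strongest label.
    fun labelFrame(frame: Bitmap, onLabel: (String) -> Unit) {
        val image = FirebaseVisionImage.fromBitmap(frame)
        val labeler = FirebaseVision.getInstance().onDeviceImageLabeler

        labeler.processImage(image)
            .addOnSuccessListener { labels ->
                // Keep only reasonably confident labels and announce the best one.
                labels.filter { it.confidence > 0.6f }
                    .maxByOrNull { it.confidence }
                    ?.let { onLabel(it.text) }
            }
            .addOnFailureListener {
                // A failed frame is not fatal; the next preview frame gets another try.
            }
    }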

Challenges we ran into

Implementing Firebase ML Vision in the Android app was difficult because we needed an effective way to access the camera images shown in the phone's preview. Efficiency was also an issue, since the method we used to grab an image for analysis slowed the app down. Integrating the Arduino sensor with Android was another challenge.
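
One common way to keep analysis from stalling the preview is to drop frames while a previous analysis is still in flight. The sketch below assumes a TextureView-based preview and an AtomicBoolean guard; both are illustrative choices rather than a description of what the repos actually do.

    import android.view.TextureView
    import com.google.firebase.ml.vision.FirebaseVision
    import com.google.firebase.ml.vision.common.FirebaseVisionImage
    import java.util.concurrent.atomic.AtomicBoolean

    private val analysisInFlight = AtomicBoolean(false)

    // Called whenever the camera preview updates; labels at most one frame at a time.
    fun onPreviewFrame(preview: TextureView, onLabel: (String) -> Unit) {
        // Skip this frame if the previous one is still being labeled.
        if (!analysisInFlight.compareAndSet(false, true)) return

        // getBitmap() copies the current preview frame; a small size keeps labeling cheap.
        val frame = preview.getBitmap(480, 360)
        if (frame == null) {
            analysisInFlight.set(false)
            return
        }

        FirebaseVision.getInstance().onDeviceImageLabeler
            .processImage(FirebaseVisionImage.fromBitmap(frame))
            .addOnSuccessListener { labels ->
                labels.maxByOrNull { it.confidence }?.let { onLabel(it.text) }
            }
            .addOnCompleteListener {
                // Always release the guard, even when labeling fails.
                analysisInFlight.set(false)
            }
    }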

Accomplishments that we're proud of

We are proud of the idea itself, of the technology we brought together to implement it, and of how much potential it has.

What we learned

Integrating several different technologies into a single system can be a challenge. Energy drinks can be quite interesting for those of us who took a sip.

What's next for Echo Canvas

Moving toward a dedicated, standalone device with a camera, an ultrasonic sensor, speakers, and any other sensors we may need to help the user navigate an area.

Notes

We have two GitHub links: one is the Android app with the Firebase ML Vision implementation, and the other is the Android app with the Arduino implementation.
