Inspiration

Our world puts blind people at an automatic disadvantage, limiting how fully they can experience life compared to sighted people. That is why we wanted to take advantage of technology to give blind people a sense of their surroundings, just as sighted people have.

What it does

Essentially, you tap anywhere on the screen to take a picture, which makes the app easy for blind people to use: they don't have to find a specific button by touch. The app then describes what the camera sees and reads that description aloud.
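A minimal sketch of that tap-anywhere interaction, assuming the expo-camera package (the component and prop names here are illustrative, not our exact code, and permission handling is omitted for brevity):

```tsx
import React, { useRef } from 'react';
import { Pressable, StyleSheet } from 'react-native';
import { Camera } from 'expo-camera';

export default function CaptureScreen({ onPhoto }: { onPhoto: (base64: string) => void }) {
  const cameraRef = useRef<Camera>(null);

  // The whole screen is one pressable surface, so users never have to
  // locate a small button by touch.
  const handlePress = async () => {
    const photo = await cameraRef.current?.takePictureAsync({ base64: true });
    if (photo?.base64) onPhoto(photo.base64);
  };

  return (
    <>
      <Camera ref={cameraRef} style={StyleSheet.absoluteFill} />
      <Pressable
        style={StyleSheet.absoluteFill}
        onPress={handlePress}
        accessibilityRole="button"
        accessibilityLabel="Take a picture"
      />
    </>
  );
}
```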

How we built it

We built this app with Expo and React Native, using the Google Cloud Vision API, a text-to-speech API, and the OpenAI API. Using Expo's camera module, we take a picture and send it to the Google Cloud Vision API, which returns keywords (labels) describing the image. We then pass those keywords to OpenAI to build a natural-language description, and finally turn that text into speech with the text-to-speech API.
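A condensed sketch of that pipeline, assuming the Google Cloud Vision REST endpoint, the OpenAI chat-completions endpoint, and Expo's expo-speech module; the keys, prompt, and function names are placeholders rather than our exact code:

```ts
// End-to-end sketch: base64 photo -> Vision labels -> OpenAI description -> speech.
import * as Speech from 'expo-speech';

const VISION_KEY = 'YOUR_GOOGLE_API_KEY'; // placeholder
const OPENAI_KEY = 'YOUR_OPENAI_API_KEY'; // placeholder

// 1. Ask Cloud Vision for label keywords describing the photo.
async function getLabels(base64: string): Promise<string[]> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${VISION_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        requests: [{
          image: { content: base64 },
          features: [{ type: 'LABEL_DETECTION', maxResults: 10 }],
        }],
      }),
    },
  );
  const json = await res.json();
  return json.responses[0].labelAnnotations.map(
    (l: { description: string }) => l.description,
  );
}

// 2. Have OpenAI turn the bare keywords into a sentence a listener can picture.
async function describeLabels(labels: string[]): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{
        role: 'user',
        content: `Describe a scene containing: ${labels.join(', ')}. One short sentence.`,
      }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}

// 3. Read the description aloud.
export async function describePhoto(base64: string): Promise<void> {
  const labels = await getLabels(base64);
  const description = await describeLabels(labels);
  Speech.speak(description);
}
```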

Challenges we ran into

Learning how to use the new APIs was pretty difficult and required a lot of research. Wiring up the OpenAI API and getting its response back into our program was an especially difficult task.

Accomplishments that we're proud of

Making the app accessible to blind individuals and modeling the design around their needs was a big goal of our UI, and we're proud of how that turned out. Learning the new APIs was also a big accomplishment, as they were pretty challenging.

What we learned

We learned how to use APIs that were new to us: the OpenAI API and the text-to-speech API.

What's next for Envision

Our current model is pretty simple and does not always provide a full description of the image. In the future, if we could train the AI to focus on specific things in the image, its descriptions would better reflect what we want.

Built With

expo · react-native · google-cloud-vision · openai · text-to-speech
