Inspiration

I wanted to build something that would genuinely benefit the people who use it. It's not perfect, but I hope to keep tweaking and refining it until it becomes a viable, fully usable app that makes people's lives a little easier.

What it does

LetsC is a voice-activated assistant for the visually impaired: it analyzes images captured through the camera and describes them aloud using text-to-speech.
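
As a rough illustration, here's a minimal sketch of what the voice-activation loop could look like using the browser's Web Speech API. The "describe" command word and the `captureAndDescribe` helper are assumptions for the example, not the app's actual code.

```javascript
// Listen continuously for a spoken command using the Web Speech API.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true;

recognition.onresult = (event) => {
  const transcript = event.results[event.results.length - 1][0]
    .transcript.trim().toLowerCase();
  if (transcript.includes('describe')) {
    // Hypothetical helper: grabs a camera frame and sends it for analysis.
    captureAndDescribe();
  }
};

// Read a returned image caption aloud with speech synthesis.
function speak(caption) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(caption));
}

recognition.start();
```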

How I built it

I built it using Azure's Cognitive Services Computer Vision API and the Web Speech API (for both speech-to-text and text-to-speech), with an Express server and JavaScript.
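
For illustration, here's a minimal sketch of how an Express route might proxy a captured camera frame to Azure's Computer Vision "describe" operation. The route name, port, and environment variable names are assumptions; the Azure endpoint path and subscription-key header are the standard ones for the v3.2 API.

```javascript
const express = require('express');
const app = express();

// Accept raw JPEG bytes from the browser and forward them to Azure.
app.post('/describe', express.raw({ type: 'image/jpeg', limit: '5mb' }),
  async (req, res) => {
    // Uses Node 18+'s global fetch.
    const response = await fetch(
      `${process.env.AZURE_ENDPOINT}/vision/v3.2/describe`,
      {
        method: 'POST',
        headers: {
          'Ocp-Apim-Subscription-Key': process.env.AZURE_KEY,
          'Content-Type': 'application/octet-stream',
        },
        body: req.body, // the raw image bytes
      }
    );
    const data = await response.json();
    // The first caption is the API's best description of the image.
    res.json({
      caption: data.description.captions[0]?.text ?? 'No description found',
    });
  });

app.listen(3000);
```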

Challenges I ran into

Hooking up to Azure was a bit of a challenge, as was working with jQuery and getting everything to tie together.
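
As an example of the kind of glue involved, here's a minimal sketch of a `captureAndDescribe` helper that grabs a frame from the camera's `<video>` element and POSTs it with jQuery to the Express route sketched above. The element ID, route, and `speak` helper are assumptions carried over from the earlier sketches.

```javascript
function captureAndDescribe() {
  // Draw the current video frame onto an off-screen canvas.
  const video = document.getElementById('camera');
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);

  // Encode the frame as a JPEG blob and send it to the server.
  canvas.toBlob((blob) => {
    $.ajax({
      url: '/describe',
      method: 'POST',
      data: blob,
      processData: false,          // send the blob as-is, not form-encoded
      contentType: 'image/jpeg',
      success: (result) => speak(result.caption), // read the caption aloud
    });
  }, 'image/jpeg');
}
```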

Accomplishments that I'm proud of

I'm proud that it's currently up and running!

What I learned

I learned how to use a computer vision API, as well as text-to-speech and speech-to-text APIs.

What's next for LetsC

Refining it until it's something that could be widely used to help people. Hopefully I can also build more of the machine learning pieces myself in the future.

Built With

azure, express, javascript, jquery, web-speech-api
