eliashomsi/Lo-Kate

This is our CodeJam 2018 machine learning project.

Inspiration

Having to perform and function in an unfamiliar space is anxiety-inducing enough for able-bodied people. Facing that challenge with a visual impairment seems next to impossible without guidance.

What it does

Lo-Kate is an Android app that lets users verbally describe the object they are looking for, then uses the camera to locate the requested object and directs the user toward its location relative to them.
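The "direct the user toward its location" step can be sketched as mapping a detected bounding box to a spoken cue. This is a minimal illustration, not the app's actual code: the class name, method, and the 0.4/0.6 thresholds are assumptions.

```java
// Hedged sketch: given a detection's bounding box in normalized frame
// coordinates (0.0 = left edge, 1.0 = right edge), derive a spoken
// direction cue relative to the center of the camera frame.
public class DirectionCue {

    /** Returns "left", "right", or "ahead" based on the box's horizontal center. */
    static String horizontalCue(float boxLeft, float boxRight) {
        float center = (boxLeft + boxRight) / 2f;
        if (center < 0.4f) return "left";   // box sits in the left third-ish
        if (center > 0.6f) return "right";  // box sits in the right third-ish
        return "ahead";                     // roughly centered in the frame
    }

    public static void main(String[] args) {
        System.out.println(horizontalCue(0.1f, 0.3f));   // box on the left side
        System.out.println(horizontalCue(0.45f, 0.55f)); // box near the center
    }
}
```

In the real app the cue string would be fed to a text-to-speech engine rather than printed.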

How we built it

First, we used Android Studio and its speech-to-text conversion library to create an app that lets users speak aloud the object they seek, converts the message to text, and identifies the name of the object in the request. Next, using the open-source machine learning framework TensorFlow, we altered its Object Detection API to identify only the requested object.
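The middle step, identifying the object name inside the transcribed request, can be approximated with simple keyword filtering. This is only a sketch under assumptions: the stop-word list and method names are illustrative, not taken from the Lo-Kate source.

```java
import java.util.Arrays;
import java.util.List;

// Hedged sketch of the request-parsing step: pull the target object name
// out of a recognized utterance such as "find my keys" or "where is the door".
public class RequestParser {

    // Command and filler words to skip over (an assumed, minimal list).
    private static final List<String> STOP_WORDS = Arrays.asList(
            "find", "locate", "where", "is", "are", "my", "the", "a", "an");

    /** Returns the first word of the utterance that is not a command/filler word. */
    static String objectName(String utterance) {
        for (String word : utterance.toLowerCase().split("\\s+")) {
            if (!STOP_WORDS.contains(word)) return word;
        }
        return null; // no object word found in the request
    }

    public static void main(String[] args) {
        System.out.println(objectName("find my keys"));      // keys
        System.out.println(objectName("where is the door")); // door
    }
}
```

The extracted name would then be matched against the label set of the TensorFlow detector so that only detections of that class are reported.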

Challenges we ran into

The biggest challenge we ran into was a compatibility issue when integrating the object-detection and speech-recognition components. Android Studio was only working on two laptops, so we used pair programming to make efficient use of our resources.

Accomplishments that we are proud of

What seemed an impossible project to complete within 36 hours, we managed by dividing and conquering with an efficient workflow: we split the work into phases and kept a list of tasks ordered by priority.

What we learned

How to handle threading in an Android app
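The core of that lesson is keeping inference off the UI thread and handing the result back when it is ready. A minimal sketch using plain `java.util.concurrent` (so it runs anywhere); on Android the hand-off back to the UI would typically go through a `Handler`, and the detection call here is a stand-in, not the app's real detector.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch: run a (stubbed) detection job on a background thread so
// the main/UI thread stays responsive while inference is in flight.
public class BackgroundInference {

    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    /** Submits an asynchronous detection job for the requested object. */
    Future<String> detectAsync(String requestedObject) {
        return worker.submit(() -> {
            // In the app this would run the TensorFlow detector on a camera frame.
            Thread.sleep(10); // stand-in for inference latency
            return "found:" + requestedObject;
        });
    }

    /** Convenience wrapper that blocks for the result (e.g. in tests). */
    String detectBlocking(String requestedObject) {
        try {
            return detectAsync(requestedObject).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Stops the worker thread once no more requests are expected. */
    void shutdown() {
        worker.shutdown();
    }

    public static void main(String[] args) {
        BackgroundInference bi = new BackgroundInference();
        System.out.println(bi.detectBlocking("keys")); // found:keys
        bi.shutdown();
    }
}
```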

What's next for Lo-Kate

Hardware implementation
