Having to perform and function in an unfamiliar space is anxiety-inducing enough for able-bodied people. Facing that challenge with a visual impairment seems next to impossible without guidance.
Lo-Kate is an Android app that lets users verbally name the object they are looking for, then uses the phone's camera to locate the requested object and direct the user toward it relative to their own position.
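One way to turn a detection into a direction relative to the user is to split the camera frame into thirds and report which third the detection's bounding box falls in. This is a minimal sketch of that idea, not the project's actual code; the class and method names are hypothetical:

```java
// Hypothetical sketch: map a detected bounding box to a spoken direction.
// Assumes the camera frame is split into left / center / right thirds.
public class DirectionGuide {
    /** boxLeft and boxRight are pixel x-coordinates of the detection box. */
    public static String direction(float boxLeft, float boxRight, float frameWidth) {
        float center = (boxLeft + boxRight) / 2f;   // horizontal center of the box
        if (center < frameWidth / 3f) {
            return "to your left";
        }
        if (center > 2f * frameWidth / 3f) {
            return "to your right";
        }
        return "straight ahead";
    }
}
```

The resulting string can then be read back to the user with Android's text-to-speech engine.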
First, we used Android Studio and its speech-to-text library to create an app that lets users speak the name of the object they seek, converts their speech to text, and extracts the object's name from the request. Next, using the open-source machine learning framework TensorFlow, we altered its Object Detection API to identify only the requested object.
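The glue between the two steps can be sketched as plain Java: pull the target object out of the free-form transcript by matching it against the detector's known labels, then keep only detections carrying that label. This is an illustrative sketch, not the project's source; the label set and names are assumptions:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: find the requested object in a spoken transcript,
// then filter the detector's output down to that one label.
public class RequestFilter {
    // Assumed vocabulary; in practice this would be the detector's label map.
    static final Set<String> KNOWN_LABELS =
            new HashSet<>(Arrays.asList("chair", "bottle", "door", "backpack", "cup"));

    /** e.g. "where is my water bottle" -> "bottle"; null if no known label appears. */
    public static String extractTarget(String transcript) {
        for (String word : transcript.toLowerCase().split("\\s+")) {
            if (KNOWN_LABELS.contains(word)) {
                return word;
            }
        }
        return null;
    }

    /** Keep only the detections labelled with the requested object. */
    public static List<String> filter(List<String> detectedLabels, String target) {
        List<String> matches = new ArrayList<>();
        for (String label : detectedLabels) {
            if (label.equals(target)) {
                matches.add(label);
            }
        }
        return matches;
    }
}
```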
The biggest challenge we ran into was a compatibility issue when integrating the object detection and speech recognition software. Since Android Studio was only working on two laptops, we used pair programming to make efficient use of our resources.
A project that seemed impossible to finish within 36 hours, we managed to divide and conquer through efficient teamwork: we split the work into phases and kept a list of tasks ordered by priority.
How to handle threading in Android Studio
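The core threading lesson is keeping heavy work, like model inference, off the main thread. A minimal sketch using `java.util.concurrent` (which also works on Android, though Android code would typically post results back to the UI thread via a `Handler`); the class name is hypothetical:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: run heavy work (e.g. object detection inference)
// on a background thread instead of blocking the main thread.
public class BackgroundRunner {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    /** Submit work without blocking the caller. */
    public <T> Future<T> runInBackground(Callable<T> task) {
        return worker.submit(task);
    }

    /** Convenience: submit work and block until its result is ready. */
    public <T> T runAndWait(Callable<T> task) {
        try {
            return worker.submit(task).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```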
Hardware implementation