Inspiration
Over 65% of the population are visual learners, and advances in smartphone and AI technology can enable a whole new way of translating and learning a foreign language.
What it does
aiSHO lets you quickly and visually translate objects around you in real time. You can also save those translations for learning retention.
How I built it
The aiSHO app was built with Java in Android Studio, using TensorFlow for image detection. We additionally tested several other object recognition models built in Python 3 using the TensorFlow Object Detection library. We tried Single Shot MultiBox Detector (SSD) and Faster R-CNN architectures, each of which is based on cutting-edge research. The convolutional networks were variations of Google's Inception Net, a state-of-the-art architecture for image processing. Note: the models were pre-trained, as we would not have had enough time to train them at a hackathon. A minimal example of loading and running one of these models is sketched below.
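For reference, here is a minimal sketch of loading and running a pre-trained model from the TensorFlow 1.x Object Detection model zoo in Python. The frozen-graph path is illustrative, and the tensor names are the standard ones exported by the Object Detection API; this is not code from our repo.

```python
import tensorflow as tf

# Illustrative path: any frozen graph exported by the TF Object Detection API
# (e.g. an SSD or Faster R-CNN model from the zoo) exposes the same tensors.
PATH_TO_FROZEN_GRAPH = "ssd_inception_v2_coco/frozen_inference_graph.pb"

# Load the pre-trained detection graph once at startup.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def detect(image):
    """Run detection on an HxWx3 uint8 numpy image; returns boxes, scores, classes."""
    with detection_graph.as_default(), tf.Session(graph=detection_graph) as sess:
        outputs = {
            name: detection_graph.get_tensor_by_name(name + ":0")
            for name in ("detection_boxes", "detection_scores",
                         "detection_classes", "num_detections")
        }
        image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
        # The graph expects a batch dimension, so add one with image[None, ...].
        return sess.run(outputs, feed_dict={image_tensor: image[None, ...]})
```

Each detected class ID can then be looked up in the COCO label map and fed to the translation step.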
Challenges I ran into
Because our app runs on a phone, we had to search through many different libraries and models to find one that struck a good balance between detection accuracy and inference speed. Setting up the TensorFlow Object Detection library was also difficult, as it required matching the versions of many different libraries and packages; a sanity check like the one below helped.
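As an illustration of the kind of version pinning this required, here is a small environment check; the version prefixes are hypothetical examples, not the exact combination we settled on.

```python
import tensorflow as tf
from google import protobuf

# Hypothetical pins: the working combination depended on which commit of the
# tensorflow/models Object Detection code we had checked out.
EXPECTED_PREFIXES = {"tensorflow": "1.", "protobuf": "3."}

installed = {
    "tensorflow": tf.__version__,
    "protobuf": protobuf.__version__,
}

for name, version in installed.items():
    status = "ok" if version.startswith(EXPECTED_PREFIXES[name]) else "MISMATCH"
    print(f"{name} {version}: {status}")
```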
Accomplishments that I'm proud of
The app works! aiSHO uses state-of-the-art object recognition techniques that were only developed in the last couple of years and puts them to practical use!
What I learned
We learned more about different model architectures for object recognition, including Single Shot MultiBox Detectors (SSD), R-CNNs, Faster R-CNNs, and Mask R-CNNs.
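To make the trade-offs concrete, here is a sketch of the kinds of pre-trained COCO models available in the TensorFlow detection model zoo. The model names are real zoo entries, but the release-date suffix in the URL helper varies per model, so treat it as an assumption.

```python
# One-stage detectors (SSD) trade some accuracy for speed; two-stage detectors
# (Faster R-CNN, Mask R-CNN) run a region-proposal step first and are slower
# but more accurate. These names are entries in the TF1 detection model zoo.
MODEL_ZOO = {
    "ssd_mobilenet_v1_coco": "one-stage SSD, fastest, best suited to mobile",
    "ssd_inception_v2_coco": "one-stage SSD with an Inception v2 backbone",
    "faster_rcnn_inception_v2_coco": "two-stage, slower but more accurate",
    "mask_rcnn_inception_v2_coco": "Faster R-CNN plus per-object masks",
}

DOWNLOAD_BASE = "http://download.tensorflow.org/models/object_detection/"

def model_url(name, release="2018_01_28"):
    """Tarball URL for a zoo model (release suffix differs per model -- check the zoo)."""
    return DOWNLOAD_BASE + name + "_" + release + ".tar.gz"
```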
What's next for LiveObjectTranslator
We would like to build and train our own models on the ImageNet database for a more fine-tuned and streamlined experience.
Built With
- android
- android-studio
- cuda
- firebase
- google-cloud
- java
- jupyter-notebook
- python
- tensorflow