What it does
Aisle Vision consists of a front-end web platform, a back-end system that processes and updates live information, and a hardware prototype that demonstrates its functionality. A robot traverses store aisles, stopping to capture images of products and determine what those products are using machine learning. If the image is recognized as the item that is supposed to be on the shelf in that location, all is well and the item is in stock. If the image is recognized as a different store product, a customer has misplaced an item there. If nothing is recognized, the platform informs the user that the item in question is out of stock.
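The three-way decision described above can be sketched as a small classification step. This is a hypothetical illustration, not the actual Aisle Vision code: the function and parameter names (`shelf_status`, `expected_item`, the confidence threshold) are assumptions for the sake of the example.

```python
def shelf_status(recognized_label, expected_item, confidence, threshold=0.5):
    """Classify one shelf slot from a single image-recognition result."""
    if recognized_label is None or confidence < threshold:
        # Nothing recognized with enough confidence: the slot is empty.
        return "out_of_stock"
    if recognized_label == expected_item:
        # The recognized product matches what belongs in this location.
        return "in_stock"
    # A different product was recognized: a customer misplaced an item here.
    return "misplaced"
```

For example, `shelf_status("soup", "cereal", 0.9)` would flag the slot as `"misplaced"`, while a low-confidence result falls back to `"out_of_stock"`.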
How we built it
We used Google Cloud's Vision API to train a custom model that recognizes a set of sample store products, and Android Studio to build an app that lets a smartphone serve as the camera on our robot, which is driven by a Raspberry Pi.
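A custom-model prediction call of the kind described above roughly involves sending the captured image as base64-encoded bytes and picking the top-scoring label from the response. The sketch below assumes the JSON shapes of Google Cloud's AutoML Vision predict API; the helper names and the exact response handling are illustrative, not the project's actual code.

```python
import base64

def build_predict_request(image_bytes):
    """Build the JSON body for a custom-model image prediction call."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {"payload": {"image": {"imageBytes": encoded}}}

def top_label(response):
    """Return the highest-scoring (label, score) pair from a predict response."""
    results = response.get("payload", [])
    if not results:
        return None, 0.0
    best = max(results, key=lambda r: r["classification"]["score"])
    return best["displayName"], best["classification"]["score"]
```

The label and score returned by `top_label` would then feed the in-stock / misplaced / out-of-stock decision made for each shelf location.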
What we learned
All of us wanted to learn something new at Hack Western! We each had the opportunity to work with technologies we had no prior experience with, such as Google Cloud, Android Studio, and Raspberry Pi hardware.
Built With
- android-studio
- google-custom-vision-api
- html
- java
- javascript
- php
- python
- raspberry-pi
- scss
