Inspiration
Have you ever been frustrated looking for a grocery item, only to realize afterward that you had walked past it several times? Ever wish you could get in-store guidance without stepping out of your social bubble?
We are introducing Aisle Atlas: an interactive computer-vision companion that sits right on top of your head. With its AI capabilities and ease of use, our device lets anyone become an "employee" of the supermarket. Through SMS messages, localization, and effective mapping of grocery items, we aim to make shopping faster and smoother for everyone.
What it does
Imagine needing to buy a few items while rushing to be somewhere else. Maybe someone you know is already at the supermarket and only a text message away. With a simple SMS, Aisle Atlas lets you send a grocery list that is received automatically. The items are then mapped immediately, and a routing algorithm finds the shortest path for the other shopper to collect everything, with detailed directions to each "station".
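To give a flavour of how the pieces fit together, here is a minimal sketch of the SMS intake and route ordering, assuming a Twilio webhook served by Flask. The item coordinates, the `/sms` endpoint, and the greedy nearest-neighbour ordering are all illustrative assumptions, not our exact implementation.

```python
# Sketch: receive a comma-separated grocery list over SMS (Twilio webhook)
# and order the items with a greedy nearest-neighbour pass.
import math
from flask import Flask, request

app = Flask(__name__)

# Assumed item -> (x, y) positions on the store map; placeholder data.
ITEM_LOCATIONS = {
    "milk": (2.0, 9.0),
    "eggs": (2.5, 8.0),
    "bread": (5.0, 1.0),
}

def order_items(start, items):
    """Greedy ordering: repeatedly walk to the closest remaining item."""
    remaining, route, pos = set(items), [], start
    while remaining:
        nxt = min(remaining, key=lambda item: math.dist(pos, ITEM_LOCATIONS[item]))
        route.append(nxt)
        pos = ITEM_LOCATIONS[nxt]
        remaining.remove(nxt)
    return route

@app.route("/sms", methods=["POST"])
def incoming_sms():
    # Twilio delivers the SMS text in the "Body" form field.
    wanted = [w.strip().lower() for w in request.form.get("Body", "").split(",")]
    known = [w for w in wanted if w in ITEM_LOCATIONS]
    route = order_items(start=(0.0, 0.0), items=known)
    reply = "Route: " + " -> ".join(route)
    # Reply with minimal TwiML so the sender gets the computed order back.
    return (f"<Response><Message>{reply}</Message></Response>",
            200, {"Content-Type": "text/xml"})
```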
We then use localization to determine the shopper's current position within the store and the path to each item. A basic fingerprint sensor is attached to the side of the device: once an item has been collected, a single tap removes it from the top of the queue. A live feed lets you track the shopper's position and receive updates in real time.
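The queue-advancing tap could look something like the sketch below, assuming a gpiozero `Button` wired to the sensor; the GPIO pin and the item list are placeholders.

```python
# Sketch: advance the shopping queue when the side-mounted sensor is tapped.
from collections import deque
from signal import pause

from gpiozero import Button

shopping_queue = deque(["milk", "eggs", "bread"])  # placeholder list
sensor = Button(17)  # assumed GPIO pin for the touch sensor

def mark_collected():
    """One tap = current item done; the next item moves to the front."""
    if shopping_queue:
        done = shopping_queue.popleft()
        nxt = shopping_queue[0] if shopping_queue else "all done!"
        print(f"Collected {done}; next up: {nxt}")

sensor.when_pressed = mark_collected
pause()  # keep the script alive, waiting for taps
```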
How we built it
Our team wanted a mix of hardware and software. We 3D-printed a headband and support compartment housing a Raspberry Pi, camera, batteries, and a touch sensor. Our original idea was to attach the device to a hard hat, but we ultimately went with a sponsor bucket hat, which gave us more flexibility with materials and an easier mounting surface. We interfaced the firmware and software together to create a well-rounded project and demonstration.
We also used vision-based localization and object detection, and MappedIn for live location tracking.
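Our vision-only localization was built on SIFT feature matching (see the accomplishments section). A minimal sketch of the matching step, assuming OpenCV and placeholder image files:

```python
# Sketch: match SIFT features from a head-mounted camera frame against a
# reference image of a known aisle. Filenames and threshold are placeholders.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

reference = cv2.imread("aisle_3_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

kp_ref, des_ref = sift.detectAndCompute(reference, None)
kp_frame, des_frame = sift.detectAndCompute(frame, None)

# Lowe's ratio test keeps only distinctive matches.
matches = matcher.knnMatch(des_frame, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Simple heuristic: enough good matches -> we are probably at this aisle.
print(f"{len(good)} good matches against the aisle 3 reference")
```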
Challenges we ran into
In terms of challenges, our original plan was to use Raspberry Pi Camera Module 1.3 cameras for our detection method. They are substantially smaller and more convenient to place inside different kinds of headgear. We could connect to the camera and see it reported as available, but had persistent difficulty actually capturing a picture. In the end, we decided it would be simpler to use a webcam for the proof of concept, though its bulkier size became a new challenge of its own.
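Part of the webcam's appeal is that a capture is only a few lines with OpenCV, roughly like the sketch below (the device index and output filename are assumptions):

```python
# Sketch: grab a single frame from a USB webcam with OpenCV.
import cv2

cap = cv2.VideoCapture(0)   # assumed device index for the webcam
ok, frame = cap.read()
if ok:
    cv2.imwrite("snapshot.jpg", frame)
else:
    print("Could not read from the webcam")
cap.release()
```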
Another early issue was SSH authentication. We wanted to film a video feed at a nearby convenience store, but this required the laptop and the Raspberry Pi to be on the same Wi-Fi network. We had trouble connecting the Pi to phone hotspots, which made it difficult to wander anywhere beyond Hack the North Wi-Fi coverage.
Other technical challenges included weighing stability against latency, and tolerancing the 3D prints, since the mechanical and electrical parts needed to integrate seamlessly for optimal performance.
Accomplishments that we're proud of
Our team had a lot of fun working together on this project, and we all came away with accomplishments we were proud of. On the mechanical side, we gained experience with 3D printing and other fabrication skills such as soldering, and we integrated a Raspberry Pi and camera as the hardware backbone of the project. It was great to see our software have an impact in the real world. On the software side, we talked through lots of problems together and worked past many backend and front-end integration issues. We completed vision-only localization and mapping using the SIFT algorithm.
What we learned
A huge step forward for us was learning about ngrok for rapid deployment. We gained well-rounded experience across mechanical, electrical, and software work, and each of us got to build components we enjoyed. There were lots of cool "aha" moments, and we had a great time together.
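As a rough illustration of the ngrok workflow (shown here via the pyngrok wrapper; the port is an assumption, not necessarily what we used):

```python
# Sketch: expose the local instruction page through an ngrok tunnel.
from pyngrok import ngrok

tunnel = ngrok.connect(5000)             # assumed local port for the web page
print("Public URL:", tunnel.public_url)  # share this URL with the shopper
```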
What's next
On the hardware side, we weren't able to get the fingerprint sensor working reliably before the deadline, and it would be great to add more ways for the user to interact with the device. Another limitation is how our instructions are delivered: currently they are simply posted to a webpage, but we would love to explore mobile apps and audio instructions for added convenience.
On the software side, improving latency is one of the first things we would tackle.
Long-term, our project originally stemmed from an interest in automating everyday chores. That could mean a robot with computer-vision capabilities, rather than a human, completing the entire shopping trip. There are plenty of additional features to explore, such as gamification, better AI and detection capabilities, and support for a wider range of items.
