https://youtu.be/SAjir40bCco

https://docs.google.com/presentation/d/1BM7ODqnmH3luwWrma0J3DJuxic9C7KZcdDwfkdv-Pww/edit?usp=sharing

Inspiration

Our motivation came from a desire to address the environmental damage caused by littering. According to our research, trash on streets and sidewalks has long been a problem around the world, including here in Champaign. Although some people volunteer a few hours to pick up litter, that alone is not enough to keep up with the output. So we set out to apply current technology to create a self-operating device capable of identifying and collecting litter, thereby contributing to cleaner and more sustainable community spaces.

What it does

Our project is an autonomous robot designed to navigate sidewalks and other spaces to efficiently collect trash. When first placed down, the cart scans its surroundings for trash using its camera; if it doesn't detect any, it gradually moves forward before scanning again. Upon detection, the robot centers the trash within its camera frame and slowly moves toward it. Once there, a temporary shovel mechanism collects the trash. Post-collection, it reverses direction and continues its cleaning journey.
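The scan, center, and approach behavior could be sketched roughly as below; the frame width, tolerance value, and `steering_command` helper are our own illustrative names for this writeup, not the actual project code.

```python
# Sketch of the steering decision in the scan/center/approach loop.
# The camera and motor calls themselves are hardware-specific and omitted.

FRAME_WIDTH = 352          # camera frame width in pixels (matches our model input)
CENTER_TOLERANCE = 20      # pixel offset that still counts as "centered"

def steering_command(box_center_x, frame_width=FRAME_WIDTH,
                     tolerance=CENTER_TOLERANCE):
    """Decide how to turn so a detected piece of trash sits mid-frame.

    box_center_x is the horizontal center of the detection's bounding box.
    Returns "left"/"right" to rotate, or "forward" to creep toward the trash.
    """
    offset = box_center_x - frame_width / 2
    if abs(offset) <= tolerance:
        return "forward"
    return "right" if offset > 0 else "left"
```

With no detection at all, the cart simply issues "forward" for a short step and scans again.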

How we built it

We used a mix of engineering skills to connect the robot's components. The core functionality is driven by a custom YOLOv8 model trained on a publicly available dataset. The camera output is fed into the model, which allows the robot to identify and respond to the presence of litter. We used the 3D printers made available to us to print a frame to house the camera, integrated a Coral USB Accelerator to cut our model's inference time (from ~4-5 seconds on the CPU down to about 60 milliseconds), and devised a makeshift shovel for trash collection. A Flask app helps visualize the detections made by the model, drawing a bounding box around each detected piece of trash.
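As an illustration of the visualization step: YOLO-style detectors typically report boxes as normalized center/width/height values, which must be scaled to pixel corners before drawing on the Flask stream. The helper below is a hypothetical sketch of that conversion, not the project's actual code.

```python
def to_pixel_box(xywh_norm, frame_w, frame_h):
    """Convert a normalized (cx, cy, w, h) detection into
    integer pixel corners (x1, y1, x2, y2) for drawing a bounding box."""
    cx, cy, w, h = xywh_norm
    x1 = int((cx - w / 2) * frame_w)
    y1 = int((cy - h / 2) * frame_h)
    x2 = int((cx + w / 2) * frame_w)
    y2 = int((cy + h / 2) * frame_h)
    return x1, y1, x2, y2
```

For example, a detection centered in a 352x352 frame and covering half its width and height maps to corners (88, 88) and (264, 264).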

Challenges we ran into

Our journey was not without obstacles. Initially, we encountered a setback when an ultrasound sensor burnt out due to incorrect wiring. Fine-tuning the cart's movement so that it traveled in a straight line proved challenging due to an imbalance in motor strength and a lack of encoders, even though we sent both motors the same PWM signals. Once we had fine-tuned it, a change in environment meant we had to adjust our parameters again. We also spent a lot of time optimizing YOLO's inference time. Originally, we wanted a higher-resolution image for greater accuracy, but that meant the model couldn't fit on the accelerator and ran entirely on the CPU. We settled on 352x352 as our input size for a mix of speed and accuracy: the model runs split across the accelerator and CPU (TPU: 233 operations, CPU: 23 operations) for about 60 milliseconds of total inference time. Compared to 192x192, this was ~30 ms slower, but the accuracy increased substantially. These technical hurdles, alongside time constraints, limited our ability to tackle larger or more complex litter.

Accomplishments that we're proud of

We had a lot of fun working on this project, as we hadn't had much experience with hardware before and it was cool to see something actually moving from our code. Successfully integrating machine learning with hardware to create a functional robot, given our limitations, is something we are proud of. Originally, we wanted to use the TensorFlow Lite library, but the Coral line from Google has been unmaintained, and TensorFlow version mismatches made it very hard to train custom models. We were able to build on the existing provided framework and improve on it. To update images faster, in addition to the USB accelerator, we also implemented an extra thread along with a lock so that detection only runs when an image is not already being processed. Overcoming these technical challenges and learning to adapt under constraints further underscores our project's success.
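The drop-frames-while-busy pattern described above can be sketched with a non-blocking lock acquire; `DetectionWorker` and `detect_fn` are illustrative names we chose for this sketch, assuming the detector is a plain callable.

```python
import threading

class DetectionWorker:
    """Run detection in a background thread, dropping any frame that
    arrives while a previous frame is still being processed."""

    def __init__(self, detect_fn):
        self._detect = detect_fn
        self._lock = threading.Lock()
        self.latest_result = None

    def submit(self, frame):
        # Non-blocking acquire: if detection is already in flight,
        # skip this frame instead of queueing it behind stale work.
        if not self._lock.acquire(blocking=False):
            return False
        threading.Thread(target=self._run, args=(frame,), daemon=True).start()
        return True

    def _run(self, frame):
        try:
            self.latest_result = self._detect(frame)
        finally:
            self._lock.release()
```

Skipping stale frames this way keeps the Flask view showing the most recent detection rather than falling further behind the camera.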

What we learned

This project was a great learning experience. We gained deeper insights into image recognition technologies and enhanced our proficiency in Python and various libraries. The process also honed our problem-solving skills, particularly in hardware troubleshooting and optimization.

What's next for Untitled

Looking ahead, we aim to refine and scale our project. Our current trash collection capabilities are limited, and the grabbing mechanism could be more effective. We would like to enhance the precision of the robot's movements by using motor encoders. Long-term, we envision deploying our solution in larger public spaces, such as parks and public sidewalks, to make a more substantial impact on community cleanliness and environmental conservation. The journey ahead involves exploring advanced mechanical designs and expanding our machine learning model's capabilities to handle a broader array of litter types.
