🚬 Inspiration
Cigarette butts are the most commonly littered item in the world, with an estimated 4.5 trillion discarded each year. The microplastics and tobacco residue they contain are toxic to animal and plant life and make them a major source of environmental pollution. Many people are also unaware that cigarette butts are in fact NOT biodegradable, and that cigarette filters are made almost entirely of microplastic fibers.
To combat this, we came up with a three-stage solution.
First, we trained a machine learning model to identify cigarette butts in images, using publicly available datasets.
Second, to improve our AI and to incentivize the public to help clean up the streets, we want to create an app for general use. The app lets people take pictures of cigarette butts as they throw them away, and these crowd-sourced photos would feed back into training our model further.
Finally, we would like to integrate our AI model into an automated litter-picking robot.
We wanted a solution that inspires people to clean up the litter around them while also building toward long-term fixes, and that's when CigSweep came to life.
Here's a short two-minute video that sums up our motivation for this project:
https://www.youtube.com/watch?v=7ykcbbqsjGc&ab_channel=CBCNews
❓ What it does
The project has two components. The first is a set of lightweight machine learning models trained specifically to recognize cigarette butts in images; these models are open for anyone to use as a component in their own robots. The second is an application/website focused on collecting more data so the models can be improved. We took a gamified approach: users earn points for gathering data and use them to create and grow their very own virtual tree habitat!
🧰 How we built it
We started by gathering two datasets of cigarette-butt images, chose the fast object detection model YOLOv5 as a basis, and trained our first model. We then visualized and planned the application concept and a presentation in Figma.
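Training YOLOv5 on a custom dataset revolves around a small data config file that points at the images and names the classes. A minimal sketch of what ours looked like (the filename, paths, and class name here are illustrative assumptions, not our exact files):

```yaml
# cigbutts.yaml -- hypothetical YOLOv5 data config
train: datasets/cigbutts/images/train   # training images
val: datasets/cigbutts/images/val       # validation images
nc: 1                                   # number of classes
names: ["cigarette_butt"]               # class 0
```

With the YOLOv5 repo cloned, training is then launched with its `train.py` script, e.g. `python train.py --img 640 --batch 16 --epochs 50 --data cigbutts.yaml --weights yolov5s.pt` (hyperparameter values are assumptions for illustration).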
✋ Challenges we ran into
Writing code to reformat both datasets into a single format our machine learning model could ingest proved extremely difficult, so we settled on creating an initial product with one dataset.
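The core of that reformatting is converting each dataset's pixel-space bounding boxes into YOLO's label format: one line per object, with class id followed by box center and size normalized to [0, 1]. A minimal sketch of the conversion (the function name and sample numbers are ours, for illustration):

```python
def to_yolo_label(x_min, y_min, x_max, y_max, img_w, img_h, class_id=0):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a
    YOLO label line: 'class x_center y_center width height', normalized."""
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# example: a 200x100 px box in a 640x480 image
print(to_yolo_label(100, 50, 300, 150, img_w=640, img_h=480))
# -> 0 0.312500 0.208333 0.312500 0.208333
```

Each dataset needs its own reader (COCO JSON, Pascal VOC XML, CSV, ...) feeding boxes into a converter like this, which is where the two formats diverged enough that we scoped down to one.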
Along with the image recognition model, we knew we wanted to create a prototype of the crowd-sourced data collection application, but we had limited experience with prototyping and with Figma.
🏆 Accomplishments that we're proud of
We ended up with a viable project that can help clean the earth of one of the leading types of litter: cigarette butts. Another accomplishment we are proud of: our design team had little prior experience prototyping user interfaces in Figma, yet through tutorials and trial and error they came out with a beautiful product!
🧠 What we learned
We learned Figma and how to prototype interfaces using design principles. We also learned about state-of-the-art object detection models: how to use them and how to evaluate them.
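The standard building block for evaluating a detector is intersection-over-union (IoU): how much a predicted box overlaps a ground-truth box, with a threshold (commonly 0.5) deciding whether a detection counts as correct. A self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

Metrics like mAP, which YOLOv5 reports during training, are built on top of IoU matching between predictions and ground truth.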
💛 What's next for CigSweep
We want to implement the prototype that we created. Later on, we hope to branch out into building a robot that can detect not only cigarette butts but other litter on the ground as well. By combining the crowd-sourced app with our machine-learning-powered robots, we can significantly reduce litter and make the world a cleaner place!
Built With
- ai
- figma
- google-colab
- image-recognition
- machine-learning
- python
- pytorch