RecycleAI
Introducing Our Initiative, recycle.AI!
Our Initiative
Recycle.AI is a multi-phase initiative that uses modern technologies such as machine learning, robotics, and game development to encourage the responsible use and consumption of natural resources around the world. We noticed that most recyclable materials and products are not actually recycled but are instead thrown into landfills: roughly 80% of the rubbish in landfills is recyclable, which is honestly way too much! Our initiative focuses on the youth, households, organizations, and government, aiming to encourage recycling in our local and global communities.
Youth Phase
Introduction
The youth phase is a recycling-based game in which children score points by correctly identifying whether an object is recyclable, helping them understand recycling from a young age. The game works in any setting: the player simply clicks the right bin for the item to be disposed of. It can be used to teach children how to recycle in classrooms, or as an educational activity children can do with their parents.
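The actual game is written in C# for Unity, but the core scoring rule it applies on each click can be sketched in a few lines of Python (the item lists here are illustrative, not the game's real content):

```python
# Hypothetical item sets; the real game ships its own objects and art.
RECYCLABLE = {"plastic bottle", "newspaper", "tin can"}
NON_RECYCLABLE = {"banana peel", "styrofoam cup"}

def score_choice(item, chose_recycling_bin, score):
    """Award a point when the player clicks the correct bin for the item."""
    correct = (item in RECYCLABLE) == chose_recycling_bin
    return score + 1 if correct else score
```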
How it is built
The game was built in C# with the Unity game engine, and the CAD was made in Autodesk Inventor.
Households and Society Phase
Introduction
We built a tool, targeted at small organizations and households, that can identify whether an object is recyclable. The tool is hosted on our website, where users can read about our mission and use the tool to make sure they are disposing of items responsibly.
How it is built
The tool was built with machine learning and HTML: we used the tf.keras framework to build a convolutional neural network, and Flask to connect the Python backend with the HTML front end. In essence, we trained a deep convolutional neural network to classify images from a dataset, labelling them with one-hot encoded values.
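One-hot encoding simply turns each class label into a vector with a 1 at that class's index and 0 everywhere else, matching the network's output layer. A minimal sketch (the class names here are assumptions for illustration, not necessarily the dataset's actual labels):

```python
def one_hot(label, classes):
    """Return a one-hot vector: 1.0 at the label's index, 0.0 elsewhere."""
    vec = [0.0] * len(classes)
    vec[classes.index(label)] = 1.0
    return vec

# Six illustrative classes, to match the model's 6-unit output layer.
CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]
```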
Issues and how they were overcome
The main issue with the network's performance was addressed by increasing the size and shape of its layers, making the network far bigger; however, we did not have enough time to trial-and-error the design, so we were not able to improve on our second iteration. Another issue was padding the images supplied by the user, since the network expects a fixed input size. This was fixed using a Pillow implementation that scales each image while preserving its aspect ratio and then pads it to the target size:
```python
from PIL import Image, ImageOps

# Load the user-supplied image.
if test:
    inputData = Image.open('test/' + testfile)
else:
    inputData = Image.open(testfile)

desiredSize = (512, 384)  # (width, height) expected by the network
im = inputData
old_size = im.size

# Scale the image so its longest side fits the target, preserving aspect ratio.
ratio = float(max(desiredSize)) / max(old_size)
new_size = tuple(int(x * ratio) for x in old_size)
im = im.resize(new_size, Image.LANCZOS)

# Pad the remaining space evenly: (left, top, right, bottom).
delta_w = desiredSize[0] - new_size[0]
delta_h = desiredSize[1] - new_size[1]
padding = (delta_w // 2, delta_h // 2,
           delta_w - (delta_w // 2), delta_h - (delta_h // 2))
new_im = ImageOps.expand(im, padding)

im = new_im.resize(desiredSize, Image.LANCZOS)
im.show()
inputData = im
```
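As a worked check of the arithmetic above, the resize-and-pad computation can be reproduced in pure Python: an 800x500 input scales to 512x320, leaving 64 pixels of height to be split evenly between the top and bottom (dimensions below mirror the snippet, nothing more):

```python
def letterbox_dims(old_size, desired_size=(512, 384)):
    """Compute the scaled size and per-side padding used by the snippet above."""
    ratio = float(max(desired_size)) / max(old_size)
    new_size = tuple(int(x * ratio) for x in old_size)
    delta_w = desired_size[0] - new_size[0]
    delta_h = desired_size[1] - new_size[1]
    padding = (delta_w // 2, delta_h // 2,
               delta_w - delta_w // 2, delta_h - delta_h // 2)
    return new_size, padding
```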
The model
The model can be seen below:
```python
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (4, 4), activation='relu', input_shape=(384, 512, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (4, 4), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(6))  # one output per class
```
This is a multi-layered convolutional neural network using 4x4 filters and the ReLU activation function; our loss metric was mean squared error.
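With one-hot targets, mean squared error compares the network's output vector to the encoded label element by element. A minimal pure-Python illustration (not the tf.keras implementation, just the formula):

```python
def mse(y_true, y_pred):
    """Mean squared error between a one-hot target and a prediction vector."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# A perfect prediction has zero loss; predicting the wrong class is penalised
# once for the missed 1 and once for the spurious 1.
target = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```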
Society Phase
Introduction
This concept uses robotics to locate small recyclable materials and maneuver them to recycling bins, as these bins are often not as readily available as regular dustbins. The robots should eventually be autonomous; as of now, construction of the robot has just finished. It is able to pick up objects up to 8 inches in diameter, with the idea being to install a bin bag in the large empty space to store the objects.
How it was built
As is evident in the CAD file, it was built using the VEX Robotics V5 system. As of now, the components do not have the computational power to fully implement an algorithm as computationally intensive as YOLO, so we chose not to try to port it to the system.
Complications
The robot finished construction only about five minutes before the video was made, so it could not be showcased fully, but the CAD renders are available on this page.
Quick overview
- The intake flaps increase the contact between the target and the bot
- The rubber treads on the intakes increase the traction of the intakes
- The 8:1 gear ratio of the drivebase ensures the robot operates at maximum speed and efficiency
Future plans for the robot
The final aim of this phase of the initiative is to implement the YOLO algorithm on the robot. This algorithm draws bounding boxes around the objects it is interested in, in real time. The next step would be to implement PID control so that the robot can reach its target without overshooting, by slowing down as it approaches, or alternatively to use a gyroscope, or even odometry (i.e. position tracking), to manage the robot's movements.
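A PID loop of the kind described could look like the sketch below: the output shrinks as the error shrinks, so a simulated robot eases into its target instead of overshooting. The gains and the toy simulation are placeholders, not tuned values for the VEX V5 hardware:

```python
class PID:
    """Simple PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, current, dt):
        error = target - current
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy 1-D robot toward a 100 cm target; output tapers off near the goal.
pid = PID(kp=0.5, ki=0.0, kd=0.1)
position = 0.0
for _ in range(100):
    position += pid.step(100.0, position, dt=0.1) * 0.1
```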