Inspiration

Notes: We are doing "Option 2: The "Double Threat" Approach". Also, the GitHub repository shows 7 contributors because one of our teammates, Owen Yang, has 2 different accounts: "owenyang2" and "LightDotz".

Our team of six is inspired by the challenge of bridging the gap between abstract code and tangible impact. With a collective background ranging from FRC World Championship engineering to community advocacy, we share a fundamental love for hardware as a tool for real-world problem solving. We've always been excited by Arduino and electronics, and that curiosity pushed us to explore how far we could go by combining sensors, motors, and control logic into a single working robot.

What it does

Our robot is designed to complete a Winter Olympic-themed obstacle course by tracing a coloured line with a colour sensor. The project combines custom hardware assembly with real-time sensor logic to navigate the course efficiently. Along the way, the robot detects specific colours and reacts to them, and it uses distance sensing to avoid collisions. We also built a website that evaluates the robot's performance based on test data; it lives in the software folder of the GitHub repository.
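The colour reactions boil down to classifying a raw RGB reading into an action. A minimal sketch of that idea is below; the thresholds, the `classifyColour` helper, and the colour-to-action mapping are illustrative assumptions, not our exact firmware values.

```cpp
// Hypothetical actions the robot takes when it sees a colour marker.
enum class Action { FollowLine, Stop, GrabWithClaw, AvoidObstacle };

// Classify a raw RGB reading (0-255 per channel) into an action.
// Thresholds and the colour-to-action mapping are illustrative only.
Action classifyColour(int r, int g, int b) {
    if (r > 200 && g < 100 && b < 100) return Action::Stop;          // red marker
    if (g > 200 && r < 100 && b < 100) return Action::GrabWithClaw;  // green marker
    if (b > 200 && r < 100 && g < 100) return Action::AvoidObstacle; // blue marker
    return Action::FollowLine;                                       // default: keep tracing
}
```

In the real firmware the inputs would come from the colour sensor each loop iteration, with thresholds found by calibration on the actual course surface.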

How we built it

We developed the firmware in the Arduino IDE, wiring together the colour sensor, the ultrasonic sensor, the DC motors driven by the L298N driver board, and the servo for the claw. We first tested each component individually (sensors, motors, and servo), then combined them.
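An L298N channel is driven with two direction inputs plus a PWM duty cycle on the enable pin. The sketch below models that mapping for one motor so it can be tested off-hardware; the `driveMotor` helper and its clamping behaviour are a simplified assumption, not our exact code.

```cpp
#include <cstdlib>

// Pin states for one L298N channel: two direction inputs plus a PWM duty cycle.
struct MotorPins {
    bool in1;  // direction input 1
    bool in2;  // direction input 2
    int  pwm;  // duty cycle 0-255 on the enable pin
};

// Map a signed speed (-255..255) to L298N pin states.
// Positive speed -> forward (IN1 high), negative -> reverse (IN2 high).
MotorPins driveMotor(int speed) {
    MotorPins p;
    p.in1 = speed > 0;
    p.in2 = speed < 0;
    int mag = std::abs(speed);
    p.pwm = mag > 255 ? 255 : mag;  // clamp duty cycle to the 8-bit PWM range
    return p;
}
```

On the Arduino itself these values would be written out with `digitalWrite` for IN1/IN2 and `analogWrite` for the enable pin.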

Challenges we ran into

Motor tuning took a lot of trial and error, especially balancing speeds so that the robot would follow the path instead of veering or spinning. We also dealt with mechanical issues such as axles and wheels breaking under testing and load, as well as an IR sensor that struggled to distinguish white from other surfaces, which is why we moved to the colour sensor for line tracing and decision making. Finally, getting the robot to make consistent turns and reliably stay on the line after obstacles required several design iterations.
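The speed-balancing problem comes down to applying a trim factor to the faster motor so both wheels cover the same ground. A tiny sketch of the idea, assuming a hypothetical `balancedSpeeds` helper and an illustrative trim value (ours came from trial and error):

```cpp
// Left/right motor speeds after trim compensation.
struct DrivePair {
    int left;
    int right;
};

// Scale the left motor down by a measured trim factor so the robot
// drives straight. leftTrim = 1.0 means perfectly matched motors;
// the exact value is found experimentally (e.g. 0.9 if the left runs fast).
DrivePair balancedSpeeds(int baseSpeed, double leftTrim) {
    DrivePair d;
    d.left  = static_cast<int>(baseSpeed * leftTrim);
    d.right = baseSpeed;
    return d;
}
```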

Accomplishments that we're proud of

We are proud of successfully integrating multiple sensors into a cohesive system that can "see" and react to its environment. Despite the mechanical setbacks with the axles, our team managed to repair the hardware and achieve consistent colour detection and movement. We are especially proud that the wheels turn properly and that the robot reacts to colours, since the axle kept breaking and cost us significant project time.

What we learned

This project served as a deep dive into line tracing and the technical requirements for calibrating sensors in a dynamic environment. We learned the importance of structural reinforcement in hardware and how to effectively coordinate a team of six to merge software logic with physical assembly under a tight deadline. More specifically, we learned how different line-following strategies work, including simple bang-bang control versus PID, how to implement bang-bang control, and how the choice of sensor shapes the strategy. We gained hands-on experience with ultrasonic and colour sensors and learned how the testing and implementation steps fit together. Finally, we discovered how powerful a Finite State Machine (FSM) can be for structuring robot logic into clear, debuggable stages. Even though this was our first time implementing one and we hit errors along the way, we pushed through, and on the hardware side we learned the importance of solid mechanical design for reliability.
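The FSM idea above can be sketched as a small transition function: each loop iteration the robot reads its sensors and moves between a few named states, with bang-bang steering inside the line-following state. State names and transitions here are an illustrative simplification of the approach, not our exact state chart.

```cpp
// Illustrative robot states; the real FSM may have more stages.
enum class State { FollowLine, AvoidObstacle, Stopped };

// Bang-bang steering: no proportional term, just steer hard one way
// when the sensor sees the line and hard the other way when it doesn't.
enum class Steer { Left, Right };
Steer bangBang(bool lineSeen) {
    return lineSeen ? Steer::Left : Steer::Right;
}

// One FSM step: pick the next state from the current state and sensors.
State step(State s, bool obstacleNear, bool stopColourSeen) {
    switch (s) {
        case State::FollowLine:
            if (stopColourSeen) return State::Stopped;       // terminal marker
            if (obstacleNear)   return State::AvoidObstacle; // ultrasonic trigger
            return State::FollowLine;
        case State::AvoidObstacle:
            // Once the path is clear, go back to tracing the line.
            return obstacleNear ? State::AvoidObstacle : State::FollowLine;
        case State::Stopped:
            return State::Stopped;
    }
    return s;
}
```

Structuring the loop this way is what makes the logic debuggable: each state can be tested in isolation, and a misbehaving robot can be diagnosed by printing which state it is in.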

What's next for CC-John

The next steps for CC-John are to refine the line tracing, ensure that our motors are consistent, and possibly go beyond bang-bang control. We also plan to upgrade the wheels and axles for better traction and durability, add more reliable servo-driven mechanisms, and make obstacle avoidance smarter so that the robot can gracefully leave the line and then reacquire it without manual tuning.
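Going beyond bang-bang would most likely mean a PID loop on the line error. A minimal sketch of the controller we would tune, with placeholder gains (any real values would come from tuning on the course):

```cpp
// Minimal PID controller for line following. The steering output would
// be added to one motor's speed and subtracted from the other's.
struct PID {
    double kp, ki, kd;        // gains: placeholders, tuned experimentally
    double integral = 0.0;    // accumulated error
    double prevError = 0.0;   // error from the previous loop iteration

    // error: signed offset of the line from the sensor centre.
    // dt: loop period in seconds. Returns the steering correction.
    double update(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```

Unlike bang-bang, the correction scales with how far off the line the robot is, which should eliminate the oscillating weave we currently see on straightaways.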

Built With
