Inspiration

We were inspired by the idea of giving AI the ability not just to see the world, but to express what it sees. We wanted to transform object detection from a purely analytical task into a bridge between perception and creativity, where technology observes daily life and turns it into art and music.

What it does

Object GO uses a YOLO-based model to detect everyday objects in real time. Each detected object and its confidence value feed into art and sound generation APIs, producing visuals and music that reflect the scene. The web interface displays both the live detections and the generated outputs. Users can click on any detected object to see its mapped symbol, letter, or number appear as text, creating an interactive creative experience.
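The click-to-symbol interaction could be sketched as a simple lookup from a detected class label to its display character. The table below is purely illustrative; the project's actual mappings are not specified here.

```python
# Hypothetical sketch of the object-to-symbol mapping described above.
# The labels and symbols in SYMBOL_MAP are illustrative assumptions,
# not the project's real mapping table.

SYMBOL_MAP = {
    "person": "@",
    "cup": "U",
    "book": "7",
}

def symbol_for(label: str) -> str:
    """Return the symbol shown as text when a user clicks a detected object."""
    return SYMBOL_MAP.get(label, "?")  # fallback for unmapped classes
```

On the front end, a click handler would call an endpoint with the detection's label and render the returned character next to the bounding box.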

How we built it

We first got the YOLO model running to identify and output objects in real time. A custom Python backend manages confidence tracking, list updates, and image refreshing. The website interface, built with Flask and JavaScript, connects to the YOLO output and displays both live camera feeds and the generated art/music.
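The confidence-tracking piece of the backend could look something like the sketch below: per-label scores are smoothed across frames so the displayed list doesn't flicker as raw YOLO confidences jump around. The class name, smoothing scheme, and parameter values are our illustration, not the project's exact code.

```python
from collections import defaultdict

class ConfidenceTracker:
    """Keep an exponentially smoothed confidence per detected label so the
    displayed object list stays stable while raw per-frame scores fluctuate.
    (Illustrative sketch; alpha/threshold values are assumptions.)"""

    def __init__(self, alpha: float = 0.5, threshold: float = 0.4):
        self.alpha = alpha          # weight given to the newest frame
        self.threshold = threshold  # minimum smoothed confidence to display
        self.scores = defaultdict(float)

    def update(self, detections):
        """detections: iterable of (label, confidence) pairs from the model."""
        for label, conf in detections:
            prev = self.scores[label]
            self.scores[label] = self.alpha * conf + (1 - self.alpha) * prev

    def visible(self):
        """Labels confident enough to show on the page, highest score first."""
        return sorted(
            (label for label, s in self.scores.items() if s >= self.threshold),
            key=lambda label: -self.scores[label],
        )
```

A Flask route could then serve `tracker.visible()` as JSON, which the JavaScript front end polls to refresh the detection list and trigger art/music generation.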

Challenges we ran into

Connecting multiple APIs while maintaining real-time synchronization between detection, display, and creative output was a major challenge. We also had to manage frequent confidence updates efficiently and ensure smooth front-end interaction without slowing down inference or generation.
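One way to keep frequent confidence updates from flooding the front end, without slowing inference, is a simple time-based throttle: the detection loop runs at full speed, but display updates are only pushed every so often. This is a minimal sketch of that idea (the interval and class name are assumptions, not the project's actual code).

```python
import time

class Throttle:
    """Drop updates that arrive faster than min_interval, so the front end
    isn't flooded while inference keeps running at full speed.
    (Illustrative sketch; the 0.25 s interval is an assumption.)"""

    def __init__(self, min_interval: float = 0.25, now=time.monotonic):
        self.min_interval = min_interval
        self._now = now           # injectable clock, handy for testing
        self._last = float("-inf")

    def ready(self) -> bool:
        """Return True (and reset the timer) if enough time has passed."""
        t = self._now()
        if t - self._last >= self.min_interval:
            self._last = t
            return True
        return False
```

In the detection loop, `if throttle.ready(): push_update(...)` keeps the displayed confidences fresh without a network round-trip per frame.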

Accomplishments that we're proud of

We successfully built an end-to-end system where AI doesn't just detect but creates. The integration of detection, interaction, and multimodal generation was a breakthrough that brought our concept of AI with artistic expression to life.

What's next for Object GO

Next, we'll expand to support custom object sets for training new YOLO models, enhance the interface for multiple simultaneous objects, and refine real-time sound adaptation. We also plan to add personalization, letting users choose art styles or musical genres, making Object GO a truly interactive AI art platform.
