Inspiration

We were inspired by the emotional impact of losing a pet and wanted to create a tech-powered solution to help reunite animals with their families. Combining edge AI with remote sensing felt like the perfect fit for this challenge.

What it does

RaspberryPet detects cats and dogs in real time using a Raspberry Pi and a Coral Edge TPU. When a pet is spotted, it captures images and a short video clip, uploads them to Supabase Storage, and logs the detection for review in a web app, all without human supervision.
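
As a rough sketch of that upload step, a detection might be pushed to Supabase with the supabase-py client like this; the bucket name, table name, and environment variable names are illustrative choices, not necessarily the project's actual schema:

```python
import os
from datetime import datetime, timezone

from supabase import create_client  # supabase-py client

# Illustrative env var, bucket, and table names.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def report_detection(image_path: str, label: str, score: float) -> None:
    """Upload a captured image and log the detection row for the dashboard."""
    ts = datetime.now(timezone.utc)
    dest = f"{ts.strftime('%Y%m%dT%H%M%SZ')}_{label}.jpg"
    with open(image_path, "rb") as f:
        supabase.storage.from_("detections").upload(
            dest, f.read(), file_options={"content-type": "image/jpeg"}
        )
    supabase.table("detections").insert({
        "label": label,
        "score": score,
        "image_path": dest,
        "detected_at": ts.isoformat(),
    }).execute()
```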

How we built it

We fine-tuned a MobileNetV2 image classification model on a dataset of labeled pet images and converted it to TensorFlow Lite. The model runs on the Coral Edge TPU for low-latency inference. A motion detection pipeline built with OpenCV on the Pi triggers the camera, runs inference on candidate frames, and uploads media and metadata to Supabase. The frontend is built with Flask, and detections appear in a live dashboard.
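
A minimal sketch of that motion-gated inference loop, assuming the pycoral runtime and our model exported as an Edge TPU .tflite file (the file names and the motion threshold below are illustrative):

```python
import cv2
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Illustrative file names; the real ones come from the fine-tuned MobileNetV2 export.
interpreter = make_interpreter("mobilenet_v2_pets_edgetpu.tflite")
interpreter.allocate_tensors()
labels = read_label_file("pet_labels.txt")

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
MOTION_PIXELS = 5000  # assumed trigger threshold, tuned empirically

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Motion gate: skip TPU inference unless enough pixels changed.
    if cv2.countNonZero(bg.apply(frame)) < MOTION_PIXELS:
        continue
    # Resize to the model's input size and classify on the Edge TPU.
    rgb = cv2.cvtColor(cv2.resize(frame, common.input_size(interpreter)),
                       cv2.COLOR_BGR2RGB)
    common.set_input(interpreter, rgb)
    interpreter.invoke()
    for c in classify.get_classes(interpreter, top_k=1, score_threshold=0.6):
        print(labels.get(c.id, "?"), f"{c.score:.2f}")
```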

Challenges we ran into

• Configuring the Supabase connection and storage permissions from the Pi
• Integrating Coral TPU inference with real-time camera streaming
• Packaging and uploading media efficiently on weak network connections
• Making the system run fully headless, unattended, and robust to reboots (see the service sketch below this list)
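
On the last point, one common way to keep a Pi pipeline running unattended across reboots is a systemd service. This is only a sketch; the unit name, user, and paths are hypothetical:

```ini
# /etc/systemd/system/raspberrypet.service (hypothetical unit name and paths)
[Unit]
Description=RaspberryPet detection pipeline
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/raspberrypet/main.py
Restart=always
RestartSec=5
User=pi

[Install]
WantedBy=multi-user.target
```

Enabled with `sudo systemctl enable --now raspberrypet.service`, the pipeline starts at boot and is restarted automatically if it crashes.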

Accomplishments that we're proud of

• A clean, usable dashboard UI for reviewing detections
• Running the full pipeline on a Raspberry Pi with a Coral Edge TPU
• A concept with room to scale in the future

What we learned

• How to optimize computer vision pipelines for embedded hardware
• The value of event-based inference (motion-triggered) to save compute and bandwidth
• That Supabase is a surprisingly capable backend for IoT use cases
• That edge ML can be powerful and developer-friendly with the right tools

What's next for RaspberryPet

• Add GPS + geotagged detections
• Upgrade from classification to object detection with bounding boxes (see the sketch after this list)
• Integrate user notifications and real-time lost pet matching
• Extend to wildlife monitoring and rescue scenarios
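
On the bounding-box upgrade, a sketch of what it could look like using pycoral's detection adapter; the model file name is hypothetical, and any SSD-style Edge TPU detection model would work:

```python
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Hypothetical model file for an SSD-style Edge TPU detection model.
interpreter = make_interpreter("ssd_mobilenet_v2_pets_edgetpu.tflite")
interpreter.allocate_tensors()

def detect_pets(rgb_image):
    """Return bounding boxes rather than a single whole-frame label.

    rgb_image must already be resized to common.input_size(interpreter).
    """
    common.set_input(interpreter, rgb_image)
    interpreter.invoke()
    # Each object carries .id, .score, and .bbox (xmin, ymin, xmax, ymax).
    return detect.get_objects(interpreter, score_threshold=0.5)
```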

Built With

• Raspberry Pi
• Coral Edge TPU
• TensorFlow Lite (MobileNetV2)
• OpenCV
• Flask
• Supabase
• Python
