🧐 Inspiration

Robots have improved, yet inspections and broadcasting are still manual. Organizers are exhausted, rules aren't enforced evenly, and audiences struggle to stay engaged. Sentinel makes competitions safer, fairer, and more watchable by automating rule checks and delivering live play-by-play. We prove it's safe, then make it a SHOW to behold.

🤯 What it does

Sentinel is an all-in-one compliance + live commentary platform for robot competitions.

💭 How it works:

  1. Capture Build Photos

    • Contestants take 2–3 photos/scans of the robot.
  2. Run Automated Rule & Safety Check

    • Sentinel extracts a component list and detects suspected motors/sensors/modules
    • Evaluates against competition safety rules and technical standards
    • Generates a clear pass/fail breakdown
  3. Certify the Evidence

    • Sentinel bundles the photos + component results + rule outcomes into an evidence package
    • Hashes the package and anchors a tamper-evident proof on Solana, while storing full inspection records for an audit trail.
  4. Start Course and Generate Live Play-by-Play

    • Sentinel periodically analyzes frames to produce an expert report
    • Delivers broadcast-style real-time commentary for judges and spectators
      • Can clone any real commentator voice for personalized experiences
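
Step 3 above can be sketched as a small hashing helper. This is a minimal sketch, not Sentinel's exact code: the `EvidencePackage` fields and the plain-JSON serialization are illustrative assumptions, built only on Node's standard `crypto` module.

```typescript
import { createHash } from "node:crypto";

// Illustrative shape of an evidence package — field names are assumptions,
// not Sentinel's exact schema.
interface EvidencePackage {
  photoHashes: string[];                         // SHA-256 of each build photo
  componentList: string[];                       // detected motors/sensors/modules
  ruleOutcomes: Record<string, "pass" | "fail">; // per-rule verdicts
  timestamp: string;                             // ISO-8601 inspection time
}

// Serialize the package and hash it; the resulting evidenceHash is what
// gets anchored on-chain. (Production code would use a canonical JSON
// encoding so field order can never change the hash.)
function evidenceHash(pkg: EvidencePackage): string {
  return createHash("sha256").update(JSON.stringify(pkg)).digest("hex");
}

const pkg: EvidencePackage = {
  photoHashes: ["9f2c…"],
  componentList: ["DC motor", "TCS3200 color sensor"],
  ruleOutcomes: { "max-voltage": "pass", "sharp-edges": "pass" },
  timestamp: "2025-01-01T12:00:00Z",
};
console.log(evidenceHash(pkg)); // 64-character hex digest
```

Because the hash is deterministic, anyone holding the original photos and results can recompute it and compare against the on-chain memo.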

With Sentinel, we get safer competitions, fairer outcomes, and more engaging events. Most importantly:

  1. Solana turns every inspection into a permanent, tamper-evident record teams can trust.
  2. ElevenLabs turns raw runs into a live broadcast—fast, accurate, and personalized to the moment.

👷‍♂️ How we built it

Frontend:

  • Next.js + React + TypeScript
  • Tailwind CSS
  • Framer Motion
  • Browser APIs (getUserMedia, fetch, HTMLAudio, SpeechSynthesis)
    • webcam capture, API calls, audio playback, and TTS fallback

Backend:

  • Node.js + Express + TypeScript for the API server
  • Gemini API for robot inspection and live run summaries
  • Sharp + Multer for image processing
  • ElevenLabs API for real-time commentary
    • cloned-voice text-to-speech
    • low-latency audio generation/streaming
    • consistent commentator delivery
  • MongoDB for inspection logs
  • Solana for tamper-evident proof (on-chain memos)
    • @solana/web3.js for building/sending transactions
    • tweetnacl + bs58 for signing/key handling
    • Node crypto for hashing/encryption

Solana:

Used Solana as a tamper-evident inspection ledger:

  • Anchored every finalized PASS/FAIL as an on-chain memo so results can be re-verified using transaction history
  • Memo program only: uses Solana’s built-in Memo program for a single instruction per inspection, keeping costs low and the design simple
  • Committed to full inspection evidence using a single evidenceHash (photos + timestamp + analysis + rules → SHA-256)
  • Supported privacy-friendly proofs by optionally encrypting the memo payload before writing it on-chain
  • Built a clean verification flow: tx signature + cluster-aware Explorer link returned to the UI
  • Cluster is derived from SOLANA_RPC_URL; Explorer links use the matching ?cluster= parameter so verification works on both networks and stays safe for the manager's wallet.
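
The privacy-friendly mode — encrypting the memo payload before it goes on-chain — can be sketched with Node's built-in `crypto` module. This is a minimal sketch under assumptions: the payload format and key handling are illustrative (the project uses tweetnacl + bs58 for signing/key handling).

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a memo payload (e.g. the evidenceHash plus a verdict) before it
// is written on-chain, so only key holders can read it. AES-256-GCM gives
// both confidentiality and an auth tag for tamper detection.
function encryptMemoPayload(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // iv:tag:ciphertext, base64-encoded — ':' never appears in base64,
  // so the three parts split back apart unambiguously
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(":");
}

function decryptMemoPayload(memo: string, key: Buffer): string {
  const [iv, tag, ciphertext] = memo.split(":").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // in practice, derived and stored per organizer
const memo = encryptMemoPayload('{"evidenceHash":"9f2c…","result":"PASS"}', key);
console.log(decryptMemoPayload(memo, key));
```

The encrypted string is what would be placed in the single Memo instruction, so the public chain shows only opaque ciphertext while key holders can still verify the inspection.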

ElevenLabs:

Used ElevenLabs as a broadcast audio layer, not just basic TTS:

  • Generated voice-consistent intros + live play-by-play with an authorized cloned commentator voice and profile-based personas
    • Supports cloning a commentator's voice in the user’s chosen tone
  • Implemented commentator personas (hype caster / energetic referee / analyst) for different narration styles
    • Supports multilingual output via language codes while keeping the same persona
  • Integrated multiple ElevenLabs capabilities in one system: TTS, STT, SFX, dubbing
  • Built one unified /api/audio surface with caching, rate limits, and job tracking for seamless generation → playback
  • Optimized for real-time with MP3 output and low-latency queueing
  • Added ethical guardrails with voice-rights certification for any cloned voice used
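
The persona-driven TTS call can be sketched as below. The endpoint and `voice_settings` fields follow ElevenLabs' public REST API, but the persona-to-settings values, model choice, and helper names are our illustrative assumptions, not Sentinel's exact code.

```typescript
type Persona = "hype" | "referee" | "analyst";

// Map each commentator persona to ElevenLabs voice settings: lower
// stability lets the voice vary more (good for a hype caster), while
// the analyst stays steady and measured. Values are illustrative.
function personaSettings(persona: Persona): { stability: number; similarity_boost: number } {
  switch (persona) {
    case "hype":    return { stability: 0.25, similarity_boost: 0.8 };
    case "referee": return { stability: 0.5,  similarity_boost: 0.8 };
    case "analyst": return { stability: 0.75, similarity_boost: 0.8 };
  }
}

// Request speech for a cloned voice from the ElevenLabs TTS endpoint.
// voiceId and the API key are placeholders; a low-latency model keeps
// play-by-play close to real time, returning MP3 bytes for playback.
async function speak(text: string, voiceId: string, persona: Persona): Promise<ArrayBuffer> {
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: {
      "xi-api-key": process.env.ELEVENLABS_API_KEY ?? "",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text,
      model_id: "eleven_turbo_v2",
      voice_settings: personaSettings(persona),
    }),
  });
  if (!res.ok) throw new Error(`TTS failed: ${res.status}`);
  return res.arrayBuffer();
}
```

Switching personas changes only the settings object, so one cloned voice can cover the whole range from hype caster to analyst.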

Python:

  • YOLOv8 for partial object detection (trained for too few epochs, so accuracy was weak; we fall back to a vision LLM)
  • Dataset YAML + frame extraction for training
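
The fallback from the weak YOLOv8 model to the vision LLM can be sketched as a simple confidence router on the backend (TypeScript for consistency with the rest of the stack; the threshold, detection shape, and function names are illustrative assumptions):

```typescript
// Minimal shape of one YOLO detection on a frame (illustrative).
interface Detection {
  label: string;      // e.g. "motor", "sensor"
  confidence: number; // 0..1 model confidence
}

// Route a frame to the vision LLM when the YOLO pass produced too few
// confident detections. 0.5 and "at least one hit" are illustrative
// defaults, not tuned values.
function shouldFallbackToVisionLLM(
  detections: Detection[],
  minConfident = 1,
  threshold = 0.5,
): boolean {
  const confident = detections.filter((d) => d.confidence >= threshold);
  return confident.length < minConfident;
}

console.log(shouldFallbackToVisionLLM([{ label: "motor", confidence: 0.2 }])); // weak hit → fall back
```

This keeps the fast detector in the hot path and reserves the slower (but more robust) vision LLM for frames the detector cannot handle.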

🚀 What’s next for Sentinel

Sentinel already delivers. We've made every inspection result tamper-evident and instantly verifiable, while turning each run into a live, voice-consistent broadcast that keeps judges and spectators locked in. But we can always do more...

  • Expand beyond robotics into a modular competition platform for events with equipment checks and live broadcast needs (archery, biathlon, drone racing, esports).
  • Add auto-highlights and one-click dispute review with timestamped evidence and verification.

🧩 Starting Point (Hardware Robot)

McQueen is a course robot built to adapt to changing track conditions, not just follow a fixed line.

  • Perception: bottom-facing color sensor reads surface color and calibrates to lighting at startup
  • Navigation: follows the black guide line and chooses red/green paths at splits
  • Recovery: if the line is lost, it probes left/right and re-centers automatically

Tech stack: Arduino Uno R4 Minima + TCS3200 color sensor + L298N motor driver + DC motors/chassis (C++ / Arduino)
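
The recovery behavior translates, in spirit, to this platform-agnostic sketch (written in TypeScript rather than the robot's Arduino C++, with illustrative sweep angles and sensor values):

```typescript
// What the bottom-facing color sensor can report (illustrative values).
type Surface = "black" | "red" | "green" | "white";

// When the line is lost, probe alternately left and right with growing
// sweeps until black is found again, then re-center the steering.
// steer(deg) sets a steering offset from the current heading, in degrees;
// read() samples the surface color after steering.
function recoverLine(read: () => Surface, steer: (deg: number) => void): boolean {
  for (let sweep = 10; sweep <= 40; sweep += 10) {
    for (const dir of [-1, 1]) { // probe left, then right
      steer(dir * sweep);
      if (read() === "black") {
        steer(0); // line reacquired — straighten out
        return true;
      }
    }
  }
  return false; // line not found within the widest sweep
}
```

On the real robot the same loop runs in the Arduino sketch, with the TCS3200 reading standing in for `read()` and the L298N motor commands standing in for `steer()`.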
