PitWit - Devpost Submission

Inspiration

In high-stakes motorsport, pit-entry timing can make or break a race. A call that's even a second too early or too late can cost crucial track position. Race engineers often rely on instinct and radio chatter, not precise data, to decide when to arm the pit entry signal or anticipate rival stops. The result? Lost seconds, missed undercuts, and uncertain communication under pressure.

What it does

PitWit is a real-time pit-entry decision engine that gives engineers a data-driven edge. It continuously analyzes live telemetry, including speed, lap distance, and track geometry, to predict the optimal moment to arm the radio for a pit call. At the same time, it monitors rivals' telemetry to forecast when they are likely to box next. The goal: transform uncertainty into precision, turning "pit pressure" into an advantage.

The two key components are:

  1. Call-Point Timer

    • Predicts the optimal pre-arm moment before pit entry.
    • Uses lap distance, speed, and pit entry geometry to compute remaining time until the pit line.
    • Models both instantaneous and integrated speed profiles for accurate countdowns.
    • Outputs real-time status tiers (GREEN, AMBER, RED, or LOCKED_OUT) that guide when it's safe to make the call (see the sketch after this list).
  2. Rival "Boxing" Classifier

    • Anticipates when a competitor is likely to pit next.
    • Uses features such as tyre wear, stint progression, pace trends, and race context.
    • Provides early warnings before official timing or visual confirmation.
    • Helps teams position strategically for undercuts and overcuts.
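
For illustration, here is a minimal Python sketch of the Call-Point Timer's tier logic: map an estimated time-to-call to a status. The threshold values and function name are hypothetical placeholders, not the ones used in the actual service.

```python
# Hypothetical windows (seconds of margin before the pit-entry line);
# the real service derives its thresholds from track geometry and radio latency.
GREEN_WINDOW = 8.0   # comfortable margin to arm the call
AMBER_WINDOW = 4.0   # call needs to go out soon
RED_WINDOW = 1.5     # last safe moment to say "box"

def call_point_status(time_to_pit_line_s: float) -> str:
    """Map an estimated time-to-call (seconds) onto a status tier."""
    if time_to_pit_line_s < RED_WINDOW:
        return "LOCKED_OUT"  # too late to call on this lap
    if time_to_pit_line_s < AMBER_WINDOW:
        return "RED"
    if time_to_pit_line_s < GREEN_WINDOW:
        return "AMBER"
    return "GREEN"
```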

How we built it

Architecture: We built PitWit as a distributed microservices system with six core components, orchestrated through Docker Compose for seamless deployment:

1. Call-Point Timer Backend (Rust + WebSocket)

  • High-performance Rust service handling real-time telemetry ingestion at 5Hz
  • Implements trapezoidal integration over speed profiles for precise time-to-call calculations (see the sketch below)
  • Uses track-specific geometry (Monaco circuit configuration) loaded from JSON
  • Broadcasts computed timer states (GREEN/AMBER/RED/LOCKED_OUT) to multiple clients via WebSocket pub-sub
  • Resilient connection handling with graceful handshake error suppression
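
The time-to-call math this service implements can be sketched in Python: the remaining time to the pit-entry line is the integral of 1/v over the remaining distance, approximated with the trapezoidal rule over the lookahead speed profile. Variable names and the near-zero-speed clamp are illustrative assumptions rather than the exact Rust code.

```python
def time_to_pit_line(distances_m, speeds_mps, pit_line_m):
    """Estimate seconds until the pit-entry line.

    distances_m -- increasing lap-distance samples of the lookahead profile (m)
    speeds_mps  -- predicted speed at each sample (m/s)
    pit_line_m  -- lap distance of the pit-entry line (m)
    Applies the trapezoidal rule to 1/v(x), since t = integral of dx / v(x).
    """
    total_s = 0.0
    for i in range(len(distances_m) - 1):
        x0 = distances_m[i]
        x1 = min(distances_m[i + 1], pit_line_m)
        if x1 <= x0:
            break  # reached (or passed) the pit-entry line
        v0 = max(speeds_mps[i], 1.0)      # clamp near-zero speeds
        v1 = max(speeds_mps[i + 1], 1.0)  # to avoid division blow-ups
        total_s += (x1 - x0) * 0.5 * (1.0 / v0 + 1.0 / v1)
    return total_s
```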

2. Rival Boxing Predictor (Rust + PyTorch)

  • Rust HTTP service serving a Quantile Regression Deep Q-Network (QR-DQN) via PyTorch
  • Trained on 2023 Monaco GP data using FastF1 telemetry archives
  • Features engineered from 80+ attributes: tire compound age, stint length, lap time trends (slope, variance), fuel load estimates, track status, and positional context
  • Model outputs calibrated probabilities for a pit stop within the next 2 laps (p2) and the next 3 laps (p3)
  • Handles flat JSON feature maps with dynamic ordering to match training metadata
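
Matching a flat JSON feature map to the column order saved in the training metadata can be sketched like this; the field names, metadata layout, and fallback value are assumptions for illustration.

```python
import json

def build_feature_vector(flat_features: dict, metadata_path: str) -> list:
    """Arrange incoming features in the exact order the model was trained on."""
    with open(metadata_path) as f:
        feature_order = json.load(f)["feature_names"]  # written at training time
    # Missing features fall back to 0.0 so the vector length always matches.
    return [float(flat_features.get(name, 0.0)) for name in feature_order]

# Example payload from the feature-engineering stage (illustrative names)
payload = {"tyre_age_laps": 17, "stint_length": 17, "laptime_slope": 0.08}
# vector = build_feature_vector(payload, "model_metadata.json")
```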

3. Machine Learning Training Pipeline (Python)

  • QR-DQN trainer with reward shaping for imbalanced classification (pit stops are rare events)
  • Implements positive oversampling and F2-score optimization (recall-biased) to catch more pit stops early
  • Platt scaling for probability calibration (sketched below)
  • Data preprocessing pipeline: parses FastF1 session laps, computes rolling statistics, handles track status (yellow flags, safety cars)
  • Exports models to both PyTorch and TorchScript formats for cross-platform inference
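
The calibration step can be sketched with scikit-learn: Platt scaling fits a one-dimensional logistic regression that maps raw model scores to probabilities on a held-out split. The arrays below are toy data standing in for real held-out scores and labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# raw_scores: uncalibrated model outputs on a held-out calibration split
# y_true:     1 if the driver actually pitted within the horizon, else 0
raw_scores = np.array([0.2, 1.3, -0.7, 2.1, 0.4, -1.5])
y_true = np.array([0, 1, 0, 1, 0, 0])

# Platt scaling: a logistic regression on the score alone learns sigmoid(a*s + b)
calibrator = LogisticRegression()
calibrator.fit(raw_scores.reshape(-1, 1), y_true)

# Calibrated probability for a new raw score
p_calibrated = calibrator.predict_proba(np.array([[1.0]]))[0, 1]
```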

4. Telemetry Feed (Python + FastF1)

  • Streams historical race telemetry at 5Hz to simulate live conditions
  • Integrates SpeedProfileCalculator: maintains sliding window of 50 recent samples, generates lookahead speed profiles for 500m distance
  • Converts lap distance + speed data into structured JSON payloads
  • Robust reconnection logic with keepalive pings to maintain WebSocket stability
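
A minimal sketch of the feed's streaming loop with the `websockets` client: push one JSON payload every 0.2 s (5 Hz), lean on the library's keepalive pings, and reconnect with a short back-off if the socket drops. The URI, payload fields, and the replay generator are placeholders.

```python
import asyncio
import json
import websockets

async def replay_samples():
    """Placeholder: the real feed replays FastF1 telemetry rows here."""
    for distance, speed in [(3100.0, 62.0), (3112.5, 60.5), (3125.0, 58.0)]:
        yield {"distance": distance, "speed": speed}

async def stream_telemetry(uri="ws://pit-timer:9000/ingest"):
    while True:  # reconnect forever if the connection drops
        try:
            # ping_interval / ping_timeout provide the keepalive behaviour
            async with websockets.connect(uri, ping_interval=10, ping_timeout=10) as ws:
                async for sample in replay_samples():
                    await ws.send(json.dumps({
                        "lap_distance_m": sample["distance"],
                        "speed_mps": sample["speed"],
                    }))
                    await asyncio.sleep(0.2)  # ~5 Hz
        except (OSError, websockets.ConnectionClosed):
            await asyncio.sleep(1.0)  # brief back-off, then retry
```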

5. Bridge Service (Node.js + WebSocket)

  • Bidirectional message broker connecting ML predictions to frontend
  • Receives HTTP POST updates from prediction feeder, stores latest state per driver
  • Broadcasts full state snapshots on new WebSocket connections for instant UI sync
  • Manages multiple concurrent frontend clients with health check endpoint

6. React Frontend (Vite + WebSocket)

  • Real-time dashboard with custom hooks (useWebSocket, usePitProbabilities, useRouter)
  • Components: PitProbabilities panel with color-coded bars (red/orange/green), TrackMap visualization, RaceInfo display
  • Live-updating probability trends with up/down/stable indicators
  • Responsive design with CSS modules for component isolation

Data Pipeline Flow

FastF1 → Telemetry Feed (Python) → Pit Timer Backend (Rust WebSocket)
                                  ↓
                            Frontend (React)
                                  ↑
FastF1 → Feature Engineering → RT Predictor (Rust+PyTorch) → Bridge Service → Frontend

Tech Stack

  • Languages: Rust, Python, JavaScript (Node.js + React)
  • ML: PyTorch, XGBoost (QR-DQN, boosting models)
  • Data: FastF1 (Formula 1 telemetry API), Redis (artifact caching)
  • DevOps: Docker, Docker Compose, multi-stage builds, ONNX, Redis
  • Real-time: WebSocket (tokio-tungstenite, ws library), async/await patterns
  • Frontend: Vite, React 18, custom CSS

Why this stack?

Redis acts as an in-memory message bus and cache, so hot state never has to round-trip through disk-backed storage on the critical path, which keeps latency low. Exporting models to ONNX opens the door to GPU-backed inference and lets us emulate an HPC-style serving setup locally. Even with Redis Insight and the Dockerfile build as the main bottlenecks, we stream data at roughly 5-30 Hz (5-30 telemetry rows per second), which is what gives race engineers a real edge.
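
As an illustration of that role (key names, channel, and values are assumptions), a producer can cache the latest prediction per driver and publish it to any subscriber in a couple of redis-py calls:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

prediction = {"driver": "VER", "p2": 0.18, "p3": 0.41}

# Cache: latest prediction per driver, no disk round-trip on the hot path
r.set(f"prediction:{prediction['driver']}", json.dumps(prediction))

# Bus: fan out to listeners (e.g. the bridge service) via pub/sub
r.publish("pit_predictions", json.dumps(prediction))

# A consumer reads the cached value without touching the model service
latest = json.loads(r.get("prediction:VER"))
```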

Key Technical Innovations

  • Hybrid speed modeling: Combines instantaneous velocity with integrated profiles using trapezoidal rule for meter-level accuracy
  • Replay-safe RL training: QR-DQN handles variable-dimension replay buffers from multi-session data
  • Calibrated probabilities: Platt scaling post-processing ensures predicted probabilities match observed frequencies

Challenges we ran into

1. PyTorch Deployment in Rust

Getting TorchScript models to run in a Rust service was non-trivial. The tch-rs library requires exact libtorch versions, and we had to carefully manage the build process with Docker multi-stage builds to include the correct shared libraries without bloating the image.

2. Imbalanced Dataset

Pit stops are rare events (~2-3 per race per driver). Standard classification metrics were misleading. We solved this with:

  • Positive class oversampling during training
  • F2-score optimization (weights recall 4x more than precision)
  • Custom reward shaping in the RL environment to incentivize early detection
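
A condensed sketch of the imbalance handling: oversample positive laps before training, then pick the decision threshold that maximises the F2 score on held-out predictions. The data here is synthetic and the variable names are illustrative.

```python
import numpy as np
from sklearn.utils import resample
from sklearn.metrics import fbeta_score

# X, y: engineered lap features and "pits within horizon" labels (synthetic here)
rng = np.random.default_rng(0)
X, y = rng.random((400, 12)), (rng.random(400) < 0.05).astype(int)

# Oversample the rare positive class to roughly balance the training set
pos, neg = X[y == 1], X[y == 0]
pos_up = resample(pos, replace=True, n_samples=len(neg), random_state=0)
X_bal = np.vstack([neg, pos_up])
y_bal = np.concatenate([np.zeros(len(neg)), np.ones(len(pos_up))])
# ... train the classifier on (X_bal, y_bal) ...

def best_f2_threshold(y_true, probs):
    """Sweep thresholds and keep the one maximising F2 (recall weighted 4x)."""
    thresholds = np.linspace(0.05, 0.95, 19)
    scores = [fbeta_score(y_true, probs >= t, beta=2) for t in thresholds]
    return thresholds[int(np.argmax(scores))]
```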

3. Real-time Speed Integration

Initial time-to-call estimates using instantaneous speed were inaccurate during braking zones. We implemented a sliding-window speed profile calculator with trapezoidal integration, which required careful handling of edge cases (insufficient samples, monotonicity, boundary conditions).

4. WebSocket Stability

We encountered frequent disconnections during development. Solutions included:

  • Implementing keepalive pings with configurable timeouts
  • Graceful handling of protocol errors (distinguishing between real clients and TCP probes)
  • Broadcast channels to fan out messages efficiently without blocking

5. Feature Engineering at Scale

Extracting meaningful features from raw FastF1 data (lap times, tire age, fuel load) required domain knowledge of F1 strategy. We built a comprehensive pipeline that:

  • Computes rolling statistics (mean, slope, variance) over recent laps
  • Encodes categorical variables (tire compounds, track status)
  • Handles missing data and race incidents (retirements, pit errors)
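
The rolling-statistics step can be sketched with pandas: per-driver rolling mean, variance, and slope of recent lap times. The column names are assumptions about the pre-processed FastF1 lap table, and the slope guard is illustrative.

```python
import numpy as np
import pandas as pd

def add_rolling_features(laps: pd.DataFrame, window: int = 5) -> pd.DataFrame:
    """Expects 'Driver', 'LapNumber', 'LapTimeSeconds' columns (assumed names)."""
    laps = laps.sort_values(["Driver", "LapNumber"]).copy()
    grp = laps.groupby("Driver")["LapTimeSeconds"]

    laps["laptime_mean"] = grp.transform(lambda s: s.rolling(window, min_periods=2).mean())
    laps["laptime_var"] = grp.transform(lambda s: s.rolling(window, min_periods=2).var())

    def rolling_slope(s):
        # Linear trend over the recent window: a rising slope means pace is fading
        return s.rolling(window, min_periods=2).apply(
            lambda w: np.polyfit(np.arange(len(w)), w, 1)[0] if len(w) >= 2 else np.nan,
            raw=True,
        )

    laps["laptime_slope"] = grp.transform(rolling_slope)
    return laps
```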

Accomplishments that we're proud of

Built a fully functional real-time race engineering dashboard that synchronizes telemetry, ML inference, and UI updates with low latency

Developed a deterministic Pit-Entry "Call-Point" Timer that accurately predicts the last safe "Box now" moment, validated on simulated laps

Trained and deployed a live Rival "Likely Boxing" Classifier using Python, ONNX, Redis, and Rust

Integrated Rust backend with a React + TypeScript + Tailwind frontend, achieving smooth live data visualization and intuitive race-engineer UI

Implemented bi-directional WebSocket communication for continuous updates between telemetry feeds and dashboard visuals

Collaborated effectively as a cross-disciplinary team, combining systems programming, machine learning, and racing strategy

Delivered an end-to-end prototype in under 36 hours, from raw telemetry data to intelligent pit-decision support

What we learned

Rust for real-time systems: None of us had prior Rust experience, so we learned how to design a memory-safe, low-latency system and how to integrate it with ONNX and Redis

Machine learning deployment on the edge: We explored how to run ONNX models efficiently for live inference, reducing cold-start time and optimizing for minimal compute overhead

Building modern UIs: Gained hands-on experience designing and implementing a real-time dashboard using React + TypeScript and Tailwind, focusing on clarity, responsiveness, and usability

Real-world racing dynamics: We gained insight into how small delays in telemetry or human communication can change race outcomes, and how data-driven engineering can close that gap

Cross-functional collaboration: Building PitWit required combining knowledge from motorsport strategy, data science, and systems engineering, helping us think like both engineers and strategists

What's next for PitWit

Combine Both Systems for Strategic Insights: Integrate the Call-Point Timer and Rival Boxing Classifier to identify the optimal pit window in real time.

Expand Integration with Strategy Models: Connect to tire degradation models, fuel load estimators, and weather inputs for a holistic pit strategy engine.

Enhance User Experience & Visualization: Implement configurable thresholds, dynamic track maps, and interactive telemetry overlays for greater clarity and control.

