Inspiration

The inspiration for HyperCure came from a very specific, high-stakes moment in Formula 1 history: the early 2024 "Chassis Crisis" at Williams Racing. The team arrived in Melbourne without a spare car, a logistical nightmare caused not by poor aerodynamic design, but by a failure in manufacturing assurance.

In the world of high-performance composites, carbon fiber components are cured inside "Black Box" autoclaves at high heat (180 °C) and pressure (6 bar). A deviation of just 2 °C or a pressure drop of 0.5 bar during the critical "gelation" phase can create microscopic voids (porosity), rendering a $100,000 - $500,000 F1 front wing, chassis, or survival cell (the carbon fiber monocoque protecting the driver) unsafe.

I realized that the industry-standard legacy "Issue Tracking" databases were fundamentally flawed. By the time an engineer reads a ticket about a sensor failure, the resin has already gelled, and the part is ruined. I wanted to shift the paradigm from "Data at Rest" (databases) to "Data in Motion" (streaming) to save these critical components in milliseconds, not hours.

What it does

HyperCure is an AI-native industrial monitoring platform that prevents catastrophic manufacturing failures. It functions as a central nervous system for the factory floor:

Real-Time Digital Twin: It renders a high-fidelity 3D visualization of the F1 chassis in the browser. Using WebGL thermal shaders, the 3D model changes color dynamically (Blue --> Green --> Red) based on live temperature telemetry.

Physics Simulation: It runs a backend physics engine that simulates the curing cycle of Hexcel 8552 carbon fiber, capable of injecting "Silverstone Nightmare" scenarios like vacuum leaks, pressure drops, and exothermic runaways.
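
To make those scenarios concrete: injection can be as simple as perturbing the nominal telemetry before it leaves the simulator. The sketch below is illustrative only; the field names and fault magnitudes are my own assumptions, not the exact values used in HyperCure.

    import random
    from dataclasses import dataclass

    @dataclass
    class Telemetry:
        temperature_c: float    # part temperature in °C
        pressure_bar: float     # autoclave pressure in bar
        vacuum_kpa: float       # vacuum bag pressure in kPa

    def inject_scenario(sample: Telemetry, scenario: str) -> Telemetry:
        """Overlay an illustrative fault on top of the nominal cure telemetry."""
        if scenario == "vacuum_leak":
            sample.vacuum_kpa += random.uniform(5, 15)        # bag slowly losing vacuum
        elif scenario == "pressure_drop":
            sample.pressure_bar -= random.uniform(0.5, 1.5)   # autoclave falls below 6 bar
        elif scenario == "exothermic_runaway":
            sample.temperature_c += random.uniform(4, 10)     # resin self-heating past set point
        return sample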

Cognitive Anomaly Detection: Instead of simple threshold alarms, it uses Google Gemini 2 Flash to analyze the context of the physics. It determines if a pressure drop is a harmless fluctuation or a critical failure based on the resin's current state (Liquid vs. Solid).

Multi-Channel Intervention: When a risk is detected, it doesn't just log it. It announces the failure via ElevenLabs synthetic voice for the operator and sends debounced Twilio SMS alerts to the site supervisor.
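
The SMS leg of that intervention sits behind the debouncing described later and boils down to a single Twilio REST call. A minimal sketch, assuming credentials and phone numbers come from environment variables:

    import os
    from twilio.rest import Client

    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

    def send_supervisor_sms(message: str) -> None:
        """Send the (already debounced) alert text to the site supervisor."""
        client.messages.create(
            body=message,
            from_=os.environ["TWILIO_FROM_NUMBER"],   # Twilio-provisioned number
            to=os.environ["SUPERVISOR_PHONE"],        # site supervisor's phone
        )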

How I built it

I architected the system as an event-driven microservices pipeline, deployed entirely on Google Cloud Run.

The Simulator (Python): I built a physics engine that generates telemetry (Temperature, Pressure, Vacuum) at 4Hz. This simulator pushes data to Confluent Cloud Kafka using a strict Avro schema to ensure data integrity.
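
A condensed sketch of that producer loop, assuming the confluent-kafka Python client with its Avro serializer; the schema, topic name, and simulate_cure_step() helper are placeholders standing in for the real physics engine:

    import json, time
    from confluent_kafka import SerializingProducer
    from confluent_kafka.schema_registry import SchemaRegistryClient
    from confluent_kafka.schema_registry.avro import AvroSerializer

    # Trimmed-down Avro schema; the real record carries more fields (vacuum, cycle id, ...).
    SCHEMA = json.dumps({
        "type": "record", "name": "AutoclaveTelemetry",
        "fields": [
            {"name": "timestamp_ms", "type": "long"},
            {"name": "temperature_c", "type": "double"},
            {"name": "pressure_bar", "type": "double"},
        ],
    })

    registry = SchemaRegistryClient({"url": "<SCHEMA_REGISTRY_URL>",
                                     "basic.auth.user.info": "<SR_KEY>:<SR_SECRET>"})
    producer = SerializingProducer({
        "bootstrap.servers": "<CONFLUENT_BOOTSTRAP_SERVER>",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<API_KEY>",
        "sasl.password": "<API_SECRET>",
        "value.serializer": AvroSerializer(registry, SCHEMA),
    })

    while True:
        sample = simulate_cure_step()                      # physics engine step (not shown)
        producer.produce(topic="autoclave.telemetry", value=sample)
        producer.poll(0)                                   # serve delivery callbacks
        time.sleep(0.25)                                   # 4 Hz sample rate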

The Nervous System (Confluent & Flink): I utilized Apache Flink to perform real-time calculus on the stream. I implemented windowing logic to calculate the heating rate (dT/dt) and detect exothermic events where the rate exceeds 3 °C/min.
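
The job itself runs in Flink, but the windowing math reduces to the sketch below (plain Python for illustration; the 60-second window size is an assumption, the 3 °C/min threshold is the one quoted above):

    from collections import deque

    WINDOW_SECONDS = 60        # sliding window over the most recent samples
    EXOTHERM_LIMIT = 3.0       # °C per minute

    window = deque()           # (timestamp_s, temperature_c) pairs

    def on_sample(timestamp_s: float, temperature_c: float) -> bool:
        """Return True when the heating rate dT/dt exceeds the exotherm threshold."""
        window.append((timestamp_s, temperature_c))
        while window and timestamp_s - window[0][0] > WINDOW_SECONDS:
            window.popleft()                               # evict samples outside the window
        if len(window) < 2:
            return False
        (t0, temp0), (t1, temp1) = window[0], window[-1]
        rate_per_min = (temp1 - temp0) / ((t1 - t0) / 60.0)
        return rate_per_min > EXOTHERM_LIMIT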

The Brain (Google Vertex AI): I integrated Gemini 2 Flash via the direct REST API. To make the AI "physics-aware," I implemented a RAG-lite approach, grounding the model with the specific material constraints of the Hexcel 8552 Technical Data Sheet.
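
A trimmed-down version of that call, assuming the public Generative Language REST endpoint and the gemini-2.0-flash model id (the Vertex AI path and the exact grounding excerpts used in HyperCure may differ); hexcel_8552_excerpts.txt is a placeholder for the data-sheet snippets:

    import os, requests

    # "RAG-lite": relevant Technical Data Sheet excerpts pasted straight into the prompt.
    MATERIAL_CONTEXT = open("hexcel_8552_excerpts.txt").read()

    def classify_anomaly(telemetry: dict) -> str:
        url = ("https://generativelanguage.googleapis.com/v1beta/models/"
               "gemini-2.0-flash:generateContent?key=" + os.environ["GEMINI_API_KEY"])
        prompt = (
            "You are a composites process engineer. Using the material constraints below, "
            "decide whether this telemetry is a HARMLESS fluctuation or a CRITICAL failure.\n\n"
            f"Material constraints:\n{MATERIAL_CONTEXT}\n\nTelemetry:\n{telemetry}"
        )
        body = {"contents": [{"parts": [{"text": prompt}]}]}
        resp = requests.post(url, json=body, timeout=10)
        resp.raise_for_status()
        return resp.json()["candidates"][0]["content"]["parts"][0]["text"]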

The Frontend (Next.js 16 + WebGL): I built the dashboard using Next.js 16. For the 3D visualization, I used React Three Fiber. I wrote a custom GLSL fragment shader that binds directly to the WebSocket stream, interpolating mesh colors in real-time based on the incoming temperature data.
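
The shader itself is GLSL, but the mapping it implements is easy to show in Python: normalize the incoming temperature against the cure window, then interpolate Blue --> Green --> Red (the 20 °C to 200 °C range here is an assumption):

    def thermal_color(temperature_c: float,
                      t_min: float = 20.0, t_max: float = 200.0) -> tuple:
        """Map a temperature to RGB: blue (cold) -> green (mid) -> red (hot)."""
        t = max(0.0, min(1.0, (temperature_c - t_min) / (t_max - t_min)))
        if t < 0.5:                       # blue -> green over the lower half
            k = t / 0.5
            return (0.0, k, 1.0 - k)
        k = (t - 0.5) / 0.5               # green -> red over the upper half
        return (k, 1.0 - k, 0.0)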

Challenges I ran into

The "Alert Storm" & Quotas: During testing, the simulator runs faster than the AI API rate limits allow. I hit 429 Resource Exhausted errors immediately. I had to implement a strict Debouncing and Log Dampening logic in the backend to throttle requests to Gemini and Twilio (1 alert per 5 minutes) without losing the critical "First Alert" signal.

Physics vs. LLMs: Getting an LLM to understand phase transitions was difficult. Initially, Gemini flagged any pressure drop as critical. I had to engineer the system prompt to distinguish between the Liquid Phase (T < 180 °C, Critical) and the Solid Phase (T > 180 °C, Low Risk).
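
In practice this became an explicit phase hint injected into the prompt, roughly like the sketch below (the 180 °C boundary is the one above; the wording is illustrative):

    GEL_TEMPERATURE_C = 180.0   # above this, the resin is treated as gelled/solid

    def phase_hint(temperature_c: float) -> str:
        """Build the phase-aware context that steers the model's severity call."""
        if temperature_c < GEL_TEMPERATURE_C:
            return ("Resin phase: LIQUID (pre-gelation). Any pressure or vacuum loss can "
                    "create voids and should be treated as potentially CRITICAL.")
        return ("Resin phase: SOLID (post-gelation). The laminate is consolidated, so a "
                "pressure drop alone is LOW RISK.")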

State Synchronization: Handling "Ghost Alerts" was tricky. If the simulator reset to "Normal," the frontend would sometimes still show the previous error state. I had to implement a deterministic fallback and state-clearing mechanism to ensure the UI matched the physics engine exactly.
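
The fix was to let the stream both set and clear alert state, so a "Normal" reading from the physics engine actively resets the dashboard instead of leaving the last error on screen. A simplified sketch of that reducer:

    current_alert = None      # last alert pushed to the dashboard, if any

    def reconcile_state(reading: dict) -> dict:
        """Return the state the frontend should display for this reading."""
        global current_alert
        if reading["status"] == "NORMAL":
            current_alert = None                  # clear any stale "ghost" alert
            return {"status": "NORMAL"}
        current_alert = {"status": reading["status"], "detail": reading.get("detail")}
        return current_alert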

Accomplishments that I'm proud of

Latency Reduction: I achieved an end-to-end latency of < 500ms from the simulator generating a "Vacuum Leak" to the 3D model turning yellow on the dashboard.

Resilience: The system includes a Deterministic Fallback. If the AI service goes down or hits a rate limit, hardcoded physics rules take over immediately, ensuring safety protocols are never lost.
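
The fallback is a small set of hardcoded threshold rules mirroring the cure limits quoted earlier, so classification never depends on the AI being reachable. A sketch (the rule set and setpoints are illustrative):

    CURE_SETPOINT_C = 180.0        # hold temperature of the cure cycle
    NOMINAL_PRESSURE_BAR = 6.0     # autoclave pressure during cure

    def deterministic_fallback(temperature_c: float, pressure_bar: float,
                               heating_rate_c_per_min: float) -> str:
        """Physics-only rules used whenever the Gemini call fails or is rate-limited."""
        if heating_rate_c_per_min > 3.0:
            return "CRITICAL: exothermic runaway"
        if pressure_bar < NOMINAL_PRESSURE_BAR - 0.5 and temperature_c < CURE_SETPOINT_C:
            return "CRITICAL: pressure drop during liquid phase"
        if abs(temperature_c - CURE_SETPOINT_C) > 2.0:
            return "WARNING: cure temperature out of tolerance during the hold phase"
        return "NORMAL"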

Visual Fidelity: The custom WebGL Thermal Shader is not just a visual trick; it accurately represents the thermal gradient of the part, making complex data intuitive for the operator.

What I learned

Data in Motion is King: For manufacturing, static databases are graves for data. Streaming architecture is the only way to achieve actionable safety.

Context is Everything: A sensor reading of "4 Bar" means nothing without context. By combining real-time streams with AI that "knows" the material properties (Tg, Gel Time), we turn raw data into manufacturing intelligence.

AI Needs Guardrails: Generative AI is powerful, but in an industrial setting, it must be constrained by physics. Grounding the model with specific technical documentation (Hexcel 8552 specs) was non-negotiable.

What's next for HyperCure

Edge Deployment: Moving the inference engine from the cloud to the edge (on-premise) to further reduce latency and reliance on internet connectivity.

Augmented Reality (AR): Projecting the 3D thermal overlay directly onto the physical autoclave using AR glasses for the shop floor operators.

Predictive Maintenance: Using the historical Kafka streams to predict when the vacuum pumps are likely to fail before the cure cycle even begins.
