YELLOWLITE

Inspiration

The idea came from a moment every driver knows: you're moving at 40 mph, the light turns yellow, and for a split second your brain races through an impossible calculation. Do you have enough room to stop? Enough time to clear? You make a gut call and hope.

We looked into it and found that traffic engineers have a name for the worst-case scenario: the dilemma zone, a band of distance and speed where neither stopping nor going is clearly safe. At 45 mph on a typical urban road, that zone spans roughly 150–220 feet. That's half a city block, and a large share of drivers pass through it blind. So we wanted to build something that could see it coming.

What it does

YELLOWLITE is a real-time yellow light decision engine that runs entirely in your phone's browser. Point your camera at an intersection, and the app detects the light, measures your distance to it, pulls your speed from GPS, and runs the physics, returning one of three verdicts: STOP, GO, or DILEMMA. At the end of a trip, GPT-4o analyzes every yellow light encounter and grades your driving A through F.

How we built it

The core of the app is a Monte Carlo simulation that runs 1,000 randomized trials every time a yellow light is detected. Each trial independently samples:

- Road friction (± variance for surface inconsistency)
- Driver reaction time (±0.3s based on alertness level)
- Distance estimate (±20% for perception error)
- Yellow signal duration (±0.3s for timing variance)

For each trial, two physics checks run in parallel.
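A single trial's input sampling can be sketched like this (an illustrative TypeScript sketch, not the app's exact code; the Gaussian noise model, and reading each ± band as roughly two standard deviations, are assumptions):

```typescript
// Gaussian noise via the Box-Muller transform.
function gaussian(mean: number, sd: number): number {
  const u = 1 - Math.random(); // in (0, 1], so log(u) is defined
  const v = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

interface TrialInputs {
  mu: number;        // road friction coefficient
  reactionS: number; // driver reaction time (s)
  distanceFt: number;
  yellowS: number;   // yellow signal duration (s)
}

// One randomized trial: each input is perturbed independently.
function sampleTrial(base: TrialInputs): TrialInputs {
  return {
    mu: gaussian(base.mu, 0.05),                        // surface inconsistency
    reactionS: gaussian(base.reactionS, 0.3 / 2),       // ±0.3 s band ≈ 2 sd
    distanceFt: base.distanceFt * gaussian(1, 0.2 / 2), // ±20% perception error
    yellowS: gaussian(base.yellowS, 0.3 / 2),           // ±0.3 s timing variance
  };
}
```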

Stopping distance uses grade-adjusted kinematics:

$$d_{stop} = v \cdot t_r + \frac{v^2}{2 \cdot \mu_{eff} \cdot g}$$

where $\mu_{eff} = \mu_{road} + \frac{\theta}{100}$ accounts for road grade $\theta$ — uphill assists braking, downhill opposes it.

Clearance time uses the MUTCD (Manual on Uniform Traffic Control Devices) standard yellow-timing formula:

$$t_y = t_r + \frac{v}{2a} + \frac{W + L}{v}$$

where $W$ is intersection width, $L$ is vehicle length, and $a = 10 \text{ ft/s}^2$ is the standard deceleration rate.
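Under those definitions, each check is a few lines of code (a sketch in feet and seconds; the constants match the formulas above):

```typescript
const G = 32.174; // gravity, ft/s^2
const DECEL = 10; // MUTCD standard deceleration, ft/s^2

// Grade-adjusted stopping distance: d = v*t_r + v^2 / (2 * mu_eff * g).
// gradePct > 0 means uphill, which raises mu_eff and assists braking.
function stoppingDistanceFt(
  vFtS: number, reactionS: number, mu: number, gradePct: number
): number {
  const muEff = mu + gradePct / 100;
  return vFtS * reactionS + (vFtS * vFtS) / (2 * muEff * G);
}

// MUTCD yellow-timing formula: t_y = t_r + v/(2a) + (W + L)/v.
function requiredYellowS(
  vFtS: number, reactionS: number, widthFt: number, vehicleLenFt: number
): number {
  return reactionS + vFtS / (2 * DECEL) + (widthFt + vehicleLenFt) / vFtS;
}
```

A trial then votes STOP if the stopping distance fits within the sampled distance to the line, GO if the required clearance time fits within the sampled yellow duration, and DILEMMA if neither holds.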

After 1,000 trials, the verdict is the most common outcome. Confidence is reported using the Wilson score interval at 95% — more statistically honest than a normal approximation at the tails:

$$\hat{p}_{low},\ \hat{p}_{high} = \frac{\hat{p} + \frac{z^2}{2n} \mp z\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}$$
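The interval is a direct transcription of that formula (a sketch; z = 1.96 for 95%):

```typescript
// Wilson score interval for a binomial proportion.
// Returns [low, high] bounds on the true verdict rate.
function wilson(successes: number, n: number, z = 1.96): [number, number] {
  const p = successes / n;
  const z2 = z * z;
  const center = p + z2 / (2 * n);
  const margin = z * Math.sqrt((p * (1 - p)) / n + z2 / (4 * n * n));
  const denom = 1 + z2 / n;
  return [(center - margin) / denom, (center + margin) / denom];
}
```

Unlike the normal approximation, the bounds stay inside [0, 1] even when nearly all 1,000 trials agree, which is exactly the regime a confident verdict lives in.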

The Vision Pipeline

Traffic light detection runs in two stages, entirely on-device, using TensorFlow.js:

1. COCO-SSD localizes traffic lights in the camera frame with a bounding box.
2. HSV color classification reads the active light state inside that box — splitting it into thirds and scoring each zone's hue, saturation, and brightness against tuned thresholds.

This two-stage approach eliminates most false positives. The model only analyzes color inside a confirmed traffic light bounding box, so yellow cars, signs, and street lights don't trigger verdicts.
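The zone-scoring step can be sketched as follows (the hue centers, widths, and saturation/brightness floors here are illustrative placeholders; the real classifier adapts brightness thresholds rather than using fixed values):

```typescript
type LightState = "red" | "yellow" | "green" | "unknown";

interface HSV { h: number; s: number; v: number } // h in degrees, s/v in [0, 1]

// Score one third of the housing: how strongly its mean color
// matches the expected hue, weighted by brightness.
function zoneScore(zone: HSV, hueCenter: number, hueWidth: number): number {
  const raw = Math.abs(zone.h - hueCenter);
  const dh = Math.min(raw, 360 - raw); // hue wraps around at 360°
  if (dh > hueWidth || zone.s < 0.4 || zone.v < 0.5) return 0; // unlit or wrong color
  return (1 - dh / hueWidth) * zone.v;
}

// Vertical housing layout: top third red, middle yellow, bottom green.
function classify(top: HSV, mid: HSV, bot: HSV): LightState {
  const scores: [LightState, number][] = [
    ["red", zoneScore(top, 0, 25)],
    ["yellow", zoneScore(mid, 45, 25)],
    ["green", zoneScore(bot, 130, 40)],
  ];
  scores.sort((a, b) => b[1] - a[1]);
  return scores[0][1] > 0 ? scores[0][0] : "unknown";
}
```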

Distance is estimated using the pinhole camera model:

$$d = \frac{W_{real} \cdot f}{W_{px}}$$

where $W_{real}$ is the known housing width (3.5 ft), $f$ is focal length estimated from the video resolution, and $W_{px}$ is the bounding box width in pixels.
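In code, the estimate is one line once the focal length is known (a sketch; deriving the focal length from an assumed horizontal field of view is an illustration, since the app estimates it from the video resolution):

```typescript
// Approximate focal length in pixels from frame width and an assumed
// horizontal field of view: f = (W_frame / 2) / tan(fov / 2).
function focalPx(frameWidthPx: number, hFovDeg: number): number {
  return frameWidthPx / (2 * Math.tan((hFovDeg * Math.PI) / 180 / 2));
}

// Pinhole distance: d = W_real * f / W_px, with the traffic light
// housing width W_real known to be about 3.5 ft.
function distanceFt(bboxWidthPx: number, focal: number, housingWidthFt = 3.5): number {
  return (housingWidthFt * focal) / bboxWidthPx;
}
```

At 1280 px wide with a 60° field of view, a 20-pixel bounding box works out to roughly 194 feet, comfortably inside the dilemma-zone range the app cares about.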

GPS & Trip Tracking

Speed comes from the browser's Geolocation API (position.coords.speed), converted from m/s to mph. Every yellow light encounter is logged with speed, distance, verdict, confidence, road condition, and vehicle type.
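The speed plumbing is small but has one real gotcha: the Geolocation spec reports `coords.speed` as `null` when the fix carries no velocity, so the conversion has to pass that through (a sketch; the `watchPosition` wiring is shown as a comment since it only runs in a browser):

```typescript
const MPS_TO_MPH = 2.23694;

// coords.speed is meters/second, or null when the GPS fix has no velocity.
function speedMph(speedMps: number | null): number | null {
  return speedMps === null ? null : speedMps * MPS_TO_MPH;
}

// Browser wiring (sketch): keep the latest reading in shared state.
// navigator.geolocation.watchPosition(
//   (pos) => { lastSpeedMph = speedMph(pos.coords.speed); },
//   undefined,
//   { enableHighAccuracy: true }
// );
```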

Trip arrival is detected using the Haversine formula:

$$d = 2R \arcsin\left(\sqrt{\sin^2\left(\frac{\Delta\phi}{2}\right) + \cos\phi_1\cos\phi_2\sin^2\left(\frac{\Delta\lambda}{2}\right)}\right)$$

When the driver gets within 150 meters of their destination, the trip closes automatically and GPT-4o grades the session.
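The arrival check is a direct transcription of the formula (a sketch; R is the mean Earth radius in meters):

```typescript
const EARTH_RADIUS_M = 6371000;
const ARRIVAL_RADIUS_M = 150;

// Haversine great-circle distance between two lat/lon points, in meters.
function haversineM(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const rad = (deg: number) => (deg * Math.PI) / 180;
  const dPhi = rad(lat2 - lat1);
  const dLam = rad(lon2 - lon1);
  const a =
    Math.sin(dPhi / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLam / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

const hasArrived = (distanceM: number) => distanceM <= ARRIVAL_RADIUS_M;
```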

Challenges we ran into

Getting the physics right

Early versions used a naive stopping distance formula and a rough yellow time estimate. The results felt right but weren't grounded in engineering standards. Switching to the MUTCD formula required understanding why each term exists. The $\frac{W+L}{v}$ term, for example, is easy to miss. It accounts for the fact that the vehicle's front bumper clearing the stop line isn't enough; the entire vehicle has to clear the intersection. For a truck, that's 25 feet more distance at full speed.

False positive detection

The first camera system flagged yellow on anything bright — turn signals, taxi cabs, construction signs. Every detection triggered a verdict, which destroyed trust in the app instantly.

The two-stage ML pipeline solved this, but getting the HSV thresholds right took significant iteration. Yellow light color under different sky conditions, times of day, and camera sensors varies more than expected. The final classifier uses adaptive brightness thresholds based on the mean luminance of the bounding box rather than fixed values.

Distance estimation drift

The pinhole distance estimate works well between roughly 50–300 feet. Beyond that, the bounding box becomes too small and pixel rounding causes significant error. Closer than 50 feet, the box fills most of the frame and the model overestimates distance.

The fix was a hybrid approach: fresh camera readings are blended with GPS dead-reckoning — projecting forward from the last known good reading using current speed — to smooth out jitter and fill in the gaps when detection confidence drops.
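One way to sketch that blend (the confidence-based weighting here is illustrative, not the app's exact scheme):

```typescript
interface DistanceEstimate { distanceFt: number; timestampMs: number }

// Dead-reckon forward from the last good camera reading: the vehicle
// has closed (speed * elapsed) feet since that reading was taken.
function projectFt(last: DistanceEstimate, speedFtS: number, nowMs: number): number {
  const elapsedS = (nowMs - last.timestampMs) / 1000;
  return Math.max(0, last.distanceFt - speedFtS * elapsedS);
}

// Blend a fresh camera reading with the projection, trusting the camera
// more when detection confidence is high; coast on GPS when it drops out.
function fusedDistanceFt(
  camFt: number | null, camConfidence: number,
  last: DistanceEstimate, speedFtS: number, nowMs: number
): number {
  const projected = projectFt(last, speedFtS, nowMs);
  if (camFt === null) return projected; // no detection this frame
  const w = Math.min(1, Math.max(0, camConfidence));
  return w * camFt + (1 - w) * projected;
}
```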

Sensor timing mismatches

GPS updates at roughly 1 Hz. The camera pipeline runs at 5 fps. The Monte Carlo engine needs both at the same moment. Early builds had a race condition where a verdict could be calculated with a speed reading that was 800ms stale — enough error at highway speeds to flip a STOP to a GO.

The solution was a simple last-known-value cache: GPS always writes to a shared state object, and the camera callback reads from it synchronously. The HUD also displays staleness indicators so the driver can see if GPS has dropped.
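A minimal version of that cache, including the staleness flag the HUD surfaces, might look like this (the 1.5 s staleness threshold is an illustrative assumption):

```typescript
interface GpsSample { speedMph: number; timestampMs: number }

// Last-known-value cache: the GPS callback writes, the camera callback
// reads synchronously, and staleness is tracked explicitly so a verdict
// is never silently computed from an old speed.
class GpsCache {
  private last: GpsSample | null = null;

  write(speedMph: number, timestampMs: number): void {
    this.last = { speedMph, timestampMs };
  }

  read(nowMs: number, maxAgeMs = 1500): { sample: GpsSample | null; stale: boolean } {
    if (!this.last) return { sample: null, stale: true };
    return { sample: this.last, stale: nowMs - this.last.timestampMs > maxAgeMs };
  }
}
```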

What I Learned

Building YELLOWLITE taught me that physics is the easy part. The formulas are well-established. The hard problems were:

- Making uncertainty visible rather than hiding it behind a false point estimate
- Designing a UI that communicates urgency in under one second
- Getting real sensor data — camera, GPS, microphone — to cooperate on a device you have no control over
- Understanding that "good enough for a demo" and "trustworthy enough to act on while driving" are completely different bars

The dilemma zone concept stayed with me throughout. It reframes driver error: sometimes there is no right answer. The infrastructure failed you before you even reached the light. That's worth knowing.

