OpenDisaster: Real-Time Disaster Simulation With AI Agents

A 3D, city-scale disaster simulator that turns real locations into interactive crisis scenarios with autonomous AI agents, analytics, and replays.


The Problem

Emergency training tools are either too abstract to feel real or too complex to use in fast-moving scenarios. Most systems:

  • don't model real neighborhoods,
  • lack believable human behavior and feedback loops.

As a result, planners and responders have to guess how people behave, how damage compounds, and how to evaluate outcomes under pressure.


The Solution

OpenDisaster turns any real-world location into a live, controllable disaster simulation with AI-driven agents, realistic physics, and actionable analytics.

Real-World Scene Reconstruction

  • OpenStreetMap data for buildings, roads, parks, water, and trees
  • OpenTopo elevation for terrain height and slope
  • Google Maps satellite imagery projected onto ground and rooftops
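
For context, fetching an OSM layer boils down to one Overpass query per category. A minimal TypeScript sketch, assuming a public Overpass endpoint and an illustrative bounding-box size:

    // Sketch: fetch building footprints around a lat/lon from the Overpass API.
    // The endpoint and bounding-box size are illustrative, not the project's exact values.
    interface OverpassElement {
      type: string;
      id: number;
      geometry?: { lat: number; lon: number }[];
      tags?: Record<string, string>;
    }

    async function fetchBuildings(lat: number, lon: number, radiusDeg = 0.005): Promise<OverpassElement[]> {
      const bbox = [lat - radiusDeg, lon - radiusDeg, lat + radiusDeg, lon + radiusDeg].join(",");
      // "out geom" makes Overpass inline way geometry so footprints can be extruded directly.
      const query = `[out:json][timeout:25];way["building"](${bbox});out geom;`;
      const res = await fetch("https://overpass-api.de/api/interpreter", {
        method: "POST",
        body: new URLSearchParams({ data: query }),
      });
      if (!res.ok) throw new Error(`Overpass error: ${res.status}`);
      const json = await res.json();
      return json.elements as OverpassElement[];
    }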

Multi-Disaster Engine

  • Tornado — EF-scale wind field, debris physics, building damage and collapse
  • Earthquake — Magnitude-based shaking, structural damage modeling
  • Flood — Shallow-water simulation with rising depth and affected radius
  • Fire — Stochastic spread with smoke, embers, and building ignition
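
As a rough illustration of the stochastic fire model, a single spread step might look like the sketch below; the probability falloff and field names are illustrative, not the engine's actual tuning:

    // Sketch: one stochastic fire-spread step over nearby buildings.
    interface Building { x: number; z: number; burning: boolean; ignitionChance: number }

    function stepFireSpread(buildings: Building[], spreadRadius: number, dt: number): void {
      const burning = buildings.filter(b => b.burning);
      for (const b of buildings) {
        if (b.burning) continue;
        for (const src of burning) {
          const d = Math.hypot(b.x - src.x, b.z - src.z);
          if (d > spreadRadius) continue;
          // Ignition probability falls off with distance and scales with the timestep.
          const p = b.ignitionChance * (1 - d / spreadRadius) * dt;
          if (Math.random() < p) { b.burning = true; break; }
        }
      }
    }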

AI-Driven Agents

  • 8 named agents with distinct personalities and memory
  • VLM perception: each agent's first-person POV is captured and sent to a vision-language model via WebSocket
  • VLM returns movement decisions (walk, run, flee); agents act autonomously
  • Graceful degradation: with no API key, agents fall back to autonomous wandering without VLM perception
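
A minimal sketch of the per-agent perception loop, assuming a WebSocket relay to the VLM proxy and an illustrative message shape:

    // Sketch: capture each agent's POV roughly once per second (matching the
    // writeup) and apply whatever movement decision comes back.
    type AgentAction = "walk" | "run" | "flee" | "idle";

    function startPerception(ws: WebSocket, agentId: string, canvas: HTMLCanvasElement,
                             apply: (action: AgentAction) => void): void {
      ws.addEventListener("message", (ev) => {
        const msg = JSON.parse(ev.data);
        if (msg.agentId === agentId) apply(msg.action as AgentAction);
      });
      setInterval(() => {
        // Capture the agent's first-person POV as a JPEG and ship it for inference.
        const frame = canvas.toDataURL("image/jpeg", 0.6);
        ws.send(JSON.stringify({ agentId, frame }));
      }, 1000);
    }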

Analytics + Replay

  • Damage events and agent actions recorded over time
  • Replay viewer with VLM decision logs and agent POV streams
  • Snapshot gallery of agent perspectives during the simulation
  • Post-simulation statistics with heatmap overlays
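
The replay system hinges on a timestamped event log; a minimal sketch with illustrative field names:

    // Sketch: the kind of log the replay viewer can scrub through.
    interface SimEvent {
      t: number;                         // seconds since simulation start
      kind: "DAMAGE" | "AGENT_ACTION" | "VLM_DECISION";
      agentId?: string;
      detail: Record<string, unknown>;   // e.g. { action: "flee", reason: "fire ahead" }
    }

    class ReplayLog {
      private events: SimEvent[] = [];
      record(e: SimEvent) { this.events.push(e); }
      // Return everything up to time t, for deterministic playback.
      upTo(t: number): SimEvent[] { return this.events.filter(e => e.t <= t); }
    }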

Interactive Controls

  • Scenario selection UI (tornado / earthquake / flood / fire)
  • Per-scenario configuration panels (EF scale, magnitude, fire size, flood depth)
  • On-screen spawn/stop controls and keyboard shortcuts
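
In TypeScript, the per-scenario parameters map naturally onto a discriminated union; the sketch below uses illustrative field names:

    type ScenarioConfig =
      | { kind: "tornado"; efScale: 0 | 1 | 2 | 3 | 4 | 5 }
      | { kind: "earthquake"; magnitude: number }          // e.g. 4.0 to 9.0
      | { kind: "fire"; fireSize: number }                 // initial ignition radius, meters
      | { kind: "flood"; depth: number };                  // target water depth, meters

    function startScenario(cfg: ScenarioConfig): void {
      switch (cfg.kind) {
        case "tornado":    /* build EF-scale wind field */ break;
        case "earthquake": /* seed magnitude-based shaking */ break;
        case "fire":       /* place ignition points */ break;
        case "flood":      /* begin raising water level */ break;
      }
    }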

How It Works

  1. User enters an address or coordinates and selects an area size
  2. The app fetches OSM building/road/park data and USGS elevation
  3. A 3D scene is built with extruded buildings, terrain, trees, and water
  4. User selects and configures a disaster scenario
  5. Disaster physics run per frame; agents perceive their surroundings via first-person camera captures
  6. Frames are sent to a VLM for perception-based decision making
  7. Agent actions are applied (movement, collision avoidance, danger zone avoidance)
  8. Outcomes are logged for replay and post-simulation analysis
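
Step 7's danger-zone avoidance can be sketched as a simple radial repulsion applied to each agent's velocity; the falloff below is illustrative:

    interface Vec2 { x: number; z: number }
    interface DangerZone { center: Vec2; radius: number }

    function avoidDanger(pos: Vec2, vel: Vec2, zones: DangerZone[], strength = 2): Vec2 {
      let steered = { ...vel };
      for (const zone of zones) {
        const dx = pos.x - zone.center.x;
        const dz = pos.z - zone.center.z;
        const d = Math.hypot(dx, dz);
        if (d > zone.radius || d === 0) continue;
        // Push radially outward, harder the deeper the agent is inside the zone.
        const push = strength * (1 - d / zone.radius);
        steered = { x: steered.x + (dx / d) * push, z: steered.z + (dz / d) * push };
      }
      return steered;
    }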

Architecture

Backend

  • Bun server for fast local runtime and hot reload
  • Overpass API pipeline for categorized OSM layers
  • USGS elevation API for terrain heightmaps
  • Satellite imagery proxy (/api/satellite)
  • VLM proxy for agent perception (Featherless AI)
  • Audio narration via ElevenLabs TTS
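
A minimal sketch of the Bun server with the /api/satellite proxy route; the upstream imagery URL and query parameters are placeholders, not the project's actual values:

    Bun.serve({
      port: 3000,
      async fetch(req) {
        const url = new URL(req.url);
        if (url.pathname === "/api/satellite") {
          const lat = url.searchParams.get("lat");
          const lon = url.searchParams.get("lon");
          // Proxying keeps the imagery API key on the server side.
          const upstream = `https://example-tiles.invalid/sat?lat=${lat}&lon=${lon}&key=${process.env.MAPS_KEY}`;
          const res = await fetch(upstream);
          return new Response(res.body, { headers: { "Content-Type": "image/jpeg" } });
        }
        return new Response("Not found", { status: 404 });
      },
    });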

Frontend

  • Three.js for 3D rendering
  • Vanilla HTML/CSS/TypeScript UI (no framework)
  • Terrain + building mesh generation from GeoJSON
  • Dynamic materials for satellite imagery projection onto rooftops
  • Particle effects via three.quarks (fire, debris)
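
Building meshes come from extruding GeoJSON footprints; a minimal Three.js sketch, assuming footprints already projected to local meters and an illustrative height fallback:

    import * as THREE from "three";

    function buildingMesh(footprint: [number, number][], height = 10): THREE.Mesh {
      const shape = new THREE.Shape();
      footprint.forEach(([x, z], i) =>
        i === 0 ? shape.moveTo(x, z) : shape.lineTo(x, z)
      );
      const geo = new THREE.ExtrudeGeometry(shape, { depth: height, bevelEnabled: false });
      geo.rotateX(-Math.PI / 2); // extrude along +Y so the building stands upright
      return new THREE.Mesh(geo, new THREE.MeshStandardMaterial({ color: 0xcccccc }));
    }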

ECS Core

  • bitecs v0.4 with Structure-of-Arrays TypedArrays
  • Fixed-timestep update loop (1/60s)
  • Event bus for disaster events (FIRE_SPREAD, AGENT_DEATH)
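
The fixed-timestep loop and event bus can be sketched independently of bitecs specifics:

    const FIXED_DT = 1 / 60;

    type DisasterEvent = "FIRE_SPREAD" | "AGENT_DEATH";
    const listeners = new Map<DisasterEvent, ((payload: unknown) => void)[]>();

    function on(ev: DisasterEvent, fn: (payload: unknown) => void) {
      listeners.set(ev, [...(listeners.get(ev) ?? []), fn]);
    }
    function emit(ev: DisasterEvent, payload: unknown) {
      for (const fn of listeners.get(ev) ?? []) fn(payload);
    }

    let accumulator = 0;
    let last = performance.now();
    function frame(now: number) {
      accumulator += (now - last) / 1000;
      last = now;
      // Run as many fixed steps as the elapsed real time allows; render once per frame.
      while (accumulator >= FIXED_DT) {
        // world.step(FIXED_DT); // ECS systems run here
        accumulator -= FIXED_DT;
      }
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);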

Featherless AI

  • Featherless AI is the core intelligence behind our agents. We use their OpenAI-compatible API to run google/gemma-3-27b-it as a vision-language model (VLM) -- every second, each agent's first-person POV is captured as a screenshot and sent to Featherless for visual understanding. The model describes what the agent sees and flags dangers like fire, smoke, or collapsing structures. These observations directly drive agent behavior: agents flee, seek shelter, or help others based on what they actually see through Featherless inference.

  • We also use Featherless a second time during replay generation -- the LLM generates first-person dialogue for each agent based on their recorded observations and actions, producing natural spoken lines that bring the simulation to life.

  • To handle real-time perception for 8 simultaneous agents, we distribute VLM calls across up to 4 Featherless API keys with a pooled concurrency system.
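
A minimal sketch of that key pool, with an illustrative per-key concurrency cap:

    class KeyPool {
      private inFlight: Map<string, number>;
      constructor(keys: string[], private maxPerKey = 2) {
        this.inFlight = new Map(keys.map(k => [k, 0]));
      }
      // Pick the least-loaded key that still has headroom, or null if saturated.
      acquire(): string | null {
        let best: string | null = null;
        for (const [key, n] of this.inFlight) {
          if (n < this.maxPerKey && (best === null || n < this.inFlight.get(best)!)) best = key;
        }
        if (best) this.inFlight.set(best, this.inFlight.get(best)! + 1);
        return best;
      }
      release(key: string) {
        this.inFlight.set(key, Math.max(0, this.inFlight.get(key)! - 1));
      }
    }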

ElevenLabs

  • ElevenLabs gives our agents a voice. After a simulation completes, the replay system converts each agent's Featherless-generated dialogue into speech using ElevenLabs' eleven_v3 TTS model. Each agent is assigned a consistent voice from a pool of 8 distinct ElevenLabs voices (4 male, 4 female), so they sound the same across replays.

  • The key feature is emotion-aware voice generation: when Featherless flags a DANGER observation, we dynamically shift the ElevenLabs voice settings -- dropping stability to 0 and cranking up style intensity -- and inject audio tags like [SCARED] and [breathing heavily] to produce genuinely panicked vocal performances. Normal observations get calm, natural delivery. The result is a cinematic replay where you hear agents go from casual conversation to terrified screaming as disaster unfolds around them.
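
A sketch of the danger-aware TTS call against the ElevenLabs REST endpoint; treat the exact settings values and request shape as an approximation of the current API:

    async function speak(text: string, voiceId: string, danger: boolean): Promise<ArrayBuffer> {
      const body = {
        text: danger ? `[SCARED] [breathing heavily] ${text}` : text,
        model_id: "eleven_v3",
        voice_settings: danger
          ? { stability: 0, style: 1 }      // unstable + intense = panicked delivery
          : { stability: 0.7, style: 0.2 }, // calm, natural delivery
      };
      const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
        method: "POST",
        headers: { "xi-api-key": process.env.ELEVENLABS_KEY!, "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      return res.arrayBuffer();
    }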

Challenges

  1. Rendering large city meshes while maintaining frame rate
  2. Balancing tornado debris particle count vs. performance
  3. Mapping satellite imagery cleanly onto rooftops and ground
  4. Agent collision avoidance with dynamic obstacles
  5. Coordinating VLM perception latency with real-time simulation
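
Challenge 5 is commonly handled by decoupling inference from the simulation clock; one way to do that is a latest-frame-only queue per agent, sketched here:

    // Keep only the newest frame per agent; stale frames are dropped rather
    // than queued, so slow VLM calls never back up the 60 Hz simulation.
    class LatestFrameQueue {
      private latest = new Map<string, string>(); // agentId -> most recent frame
      private busy = new Set<string>();

      submit(agentId: string, frame: string, infer: (f: string) => Promise<void>) {
        this.latest.set(agentId, frame); // overwrite: only the newest frame survives
        if (this.busy.has(agentId)) return;
        this.busy.add(agentId);
        const run = async () => {
          const f = this.latest.get(agentId);
          this.latest.delete(agentId);
          if (f) await infer(f);
          if (this.latest.has(agentId)) await run(); // a newer frame arrived meanwhile
          else this.busy.delete(agentId);
        };
        void run();
      }
    }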

Key Features

  • Real-world locations reconstructed from OSM + elevation + satellite data
  • Four disaster types with configurable parameters
  • Autonomous AI agents with VLM-powered perception
  • Replay system with POV video and VLM decision logs
  • Post-simulation analytics and heatmap overlays

Tech Stack

  • Runtime: Bun
  • 3D Engine: Three.js, WebGPU
  • ECS: bitecs
  • Data Sources: OpenStreetMap, OpenTopo, Google Maps Satellite
  • AI: Featherless AI (VLM), ElevenLabs (TTS)
  • Language: TypeScript

Created By

  • Chris Chang
  • Theo Chapman
  • Alex Jerpelea
  • Anirudh Sridharan

Built With

  • bitecs (ECS)
  • Bun
  • ElevenLabs
  • Featherless AI (gemma-3-27b-it VLM)
  • OpenStreetMap / Overpass API
  • Three.js
  • three.quarks
  • TypeScript
  • USGS Elevation API
  • WebSockets