Inspiration

Every emergency manager's nightmare: a hurricane is bearing down, and your carefully crafted messages aren't reaching the people who need them most.

In a crisis, communication saves lives. Yet, how do you prepare for the unpredictable human element? How do you know if your evacuation orders will be followed by an elderly person with limited mobility, a low-income family who can't afford to miss work, or a recent immigrant who doesn't speak the language?

Traditional tabletop exercises are static and can't replicate this complex human tapestry. They are a blunt instrument in a world that requires surgical precision.

We asked ourselves: What if you could simulate an entire community's response to a crisis in real-time? What if you could see how your decisions as an emergency manager ripple through different demographics, and have conversations with AI-powered personas to understand why they chose to stay or go?

That's when Emergent was born.

What it does

Emergent is an AI-powered crisis simulation platform that lets emergency managers test their plans against a dynamic, virtual community of intelligent AI personas.

Core Features

  1. Dynamic, AI-Powered Scenarios

    • Generative Events: Utilizes Google Gemini to generate realistic disaster scenarios and narrative injects, creating a unique exercise every time.
    • Structured Timeline: Events unfold across a 13-phase timeline, from initial warning to long-term recovery, mirroring the arc of a real-world crisis.
  2. Deep Persona-Based Modeling

    • 50 Unique Personas: The simulation is populated by 50 distinct personas, each generated by Google's Agent Development Kit (ADK) with a unique background, income, living situation, resources, and constraints.
    • Realistic Behavior: Personas react dynamically to events based on their archetype. A retired homeowner in a flood zone behaves differently than a student in a downtown apartment.
  3. AI-Powered Communication & Reporting

    • Situational Chatbot: Engage with a Gemini-powered AI assistant for situational context at key points in the simulation, suggested next actions, and spotlights on concerning trends.
    • Comprehensive After-Action Reports: Instantly generate detailed reports post-simulation, including an executive summary, deep-dive analytics, and an interactive timeline replay to dissect every decision.
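To give a flavor of the deep-dive analytics an after-action report can surface, here is a minimal sketch of one such metric, evacuation rate by archetype. The field names and sample data are hypothetical, not the platform's actual report format:

```python
from collections import Counter

# Hypothetical per-persona outcomes from one finished simulation run.
outcomes = [
    {"archetype": "retired homeowner", "evacuated": True},
    {"archetype": "retired homeowner", "evacuated": False},
    {"archetype": "downtown student", "evacuated": True},
    {"archetype": "low-income family", "evacuated": False},
]

def evacuation_rates(outcomes: list) -> dict:
    # Group decisions by archetype and compute the share who evacuated.
    totals, evacuated = Counter(), Counter()
    for o in outcomes:
        totals[o["archetype"]] += 1
        evacuated[o["archetype"]] += int(o["evacuated"])
    return {a: evacuated[a] / totals[a] for a in totals}

rates = evacuation_rates(outcomes)
```

The same grouping generalizes to any persona attribute (income, mobility, language), which is what makes per-demographic breakdowns cheap to compute once each persona's decisions are logged.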

How we built it

Architecture Overview

  ┌─────────────────────────────────────────────────────────┐
  │              Frontend (Next.js 14, React)               │
  │  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐ │
  │  │React     │  │Next.js   │  │TypeScript│  │Tailwind  │ │
  │  │Leaflet   │  │Pages     │  │          │  │CSS       │ │
  │  └──────────┘  └──────────┘  └──────────┘  └──────────┘ │
  └─────────────────────────────────────────────────────────┘
                                │
                                ▼
  ┌─────────────────────────────────────────────────────────┐
  │                   Backend API (Python)                  │
  │  ┌──────────┐  ┌──────────┐  ┌──────────┐ ┌───────────┐ │
  │  │FastAPI   │  │Pydantic  │  │Uvicorn   │ │ADK        │ │
  │  │Endpoints │  │Models    │  │Server    │ │Integration│ │
  │  └──────────┘  └──────────┘  └──────────┘ └───────────┘ │
  └─────────────────────────────────────────────────────────┘
                                │
                                ▼
  ┌─────────────────────────────────────────────────────────┐
  │                  AI & Services Layer                    │
  │  ┌──────────────┐  ┌────────────────┐ ┌───────────────┐ │
  │  │Google ADK    │  │Google Gemini   │ │ElevenLabs API │ │
  │  │- Persona Gen │  │- Scenario Gen  │ │- Voice Gen    │ │
  │  │- Parallel    │  │- Chat Logic    │ │               │ │
  │  │  Agents      │  │                │ │               │ │
  │  └──────────────┘  └────────────────┘ └───────────────┘ │
  └─────────────────────────────────────────────────────────┘


Technical Implementation

  • Frontend

    • Next.js 14 & TypeScript: A modern, type-safe foundation for a responsive and scalable UI.
    • React Leaflet: Provided the interactive GIS map, allowing for dynamic visualization of personas, events, and infrastructure.
    • Tailwind CSS & shadcn/ui: A utility-first CSS framework combined with beautifully designed components for a clean, intuitive, and professional interface.
  • Backend & AI Pipeline Architecture

    • Python & FastAPI: A high-performance backend to serve the simulation data and manage the AI agents.
    • Google Agent Development Kit (ADK): The core of our simulation. We used the ADK to programmatically generate 50 LlmAgent instances.
    • Persona Generation Pipeline: Each persona is created with a detailed prompt that includes their archetype (e.g., "low-income, high-risk") and the entire 13-phase hurricane scenario. This forces the agent to generate a full, coherent narrative of its reactions over the course of the disaster.
    • Parallel Processing: All 50 agents are run simultaneously using the ADK's ParallelAgent, allowing us to simulate an entire community's response with incredible efficiency.
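We can't reproduce the full ADK pipeline here, but the fan-out pattern behind ParallelAgent can be sketched with stdlib asyncio. The `persona_step` stub and its fields are hypothetical; in the real system each call prompts a Gemini-backed LlmAgent:

```python
import asyncio
import random

async def persona_step(persona_id: int, phase: str) -> dict:
    # Stand-in for a Gemini-backed agent call; here we just simulate latency.
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return {"persona": persona_id, "phase": phase, "action": "evacuate"}

async def run_phase(phase: str, n_personas: int = 50) -> list:
    # Fan out every persona's call at once instead of looping turn by turn,
    # so wall-clock time is roughly one call, not fifty.
    return list(await asyncio.gather(
        *(persona_step(i, phase) for i in range(n_personas))
    ))

results = asyncio.run(run_phase("initial warning"))
```

The latency win is the whole point: with sequential turn-by-turn generation, total time scales with the number of personas; with concurrent fan-out it scales with the slowest single response.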

Challenges we ran into

  1. Optimizing the Agent Layer

     Our original design generated persona responses turn by turn, which introduced too much latency before results came back. To fix this, we adopted the ADK's ParallelAgent and batched multiple simulation "steps" into a single generation pass, cutting out repeated round trips to the model.

  2. Setting Up the FastAPI Backend with the ADK

     While building the FastAPI layer that connects our frontend to the ADK agents, we hit numerous issues caused by sparse documentation and bugs in the ADK package itself. With thorough research and perseverance, we resolved them and ended up with a performant backend that interfaces cleanly with our frontend.

  3. Creating Authentic Personas

     To give emergency managers the most valuable insights, the simulation needed a genuinely diverse population. We researched and tuned weights and probabilities for different characteristics, augmented by location, to generate the most useful set of personas.
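The weighted-characteristics idea can be sketched with stdlib `random.choices`. The attributes and weights below are hypothetical placeholders; the real weights were researched and location-adjusted:

```python
import random

# Hypothetical characteristic weights; the real values were researched and
# tuned per location to mirror the community being simulated.
ATTRIBUTES = {
    "income": (["low", "middle", "high"], [0.40, 0.45, 0.15]),
    "mobility": (["full", "limited"], [0.80, 0.20]),
    "vehicle_access": (["yes", "no"], [0.70, 0.30]),
}

def sample_persona(rng: random.Random) -> dict:
    # Draw each characteristic from its weighted distribution.
    return {
        attr: rng.choices(values, weights=weights, k=1)[0]
        for attr, (values, weights) in ATTRIBUTES.items()
    }

rng = random.Random(7)  # seeded so an exercise can be reproduced exactly
personas = [sample_persona(rng) for _ in range(50)]
```

Each sampled profile then seeds the prompt for one LlmAgent, which is how the population stays statistically diverse without hand-writing fifty backstories.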

Accomplishments that we're proud of

  1. From Static Plan to Living Simulation in Seconds

     We've transformed the static, text-based emergency plan into a living, breathing simulation. What would normally be a resource-intensive, multi-hour exercise can now be run, analyzed, and re-run in minutes.

  2. Giving a Voice to the Vulnerable

     Our proudest accomplishment is not just the technology itself, but its purpose. Emergent gives a voice to the most vulnerable members of a community, allowing planners to see the crisis through their eyes and build more empathetic, effective response strategies.

  3. A Beautiful, Intuitive, and Powerful UX

     We successfully translated a mountain of complex simulation data into an interface that is not only powerful but also clean, intuitive, and genuinely engaging to use. The interactive map and timeline make understanding the data feel less like work and more like discovery.

What we learned

  • Technical Insights

    • Structured Data is Key: For complex AI tasks, enforcing a strict data schema (like our Pydantic models) is more effective than relying on clever prompting alone.
    • The Power of Parallelism: Google's ADK and its ParallelAgent were game-changers, allowing us to scale our simulation in a way that would have been impossible otherwise.
    • Frontend Performance Matters: Even with a powerful backend, the user's perception of performance is dictated by the frontend. Efficient state management and data fetching are critical.
  • Domain Insights

    • Human Behavior is Nuanced: The simulation revealed surprising and non-obvious behaviors, highlighting how factors like distrust in authorities, economic pressure, or past trauma can dramatically influence a person's decisions.
    • One-Size-Fits-All Fails: The biggest takeaway is that generic, one-size-fits-all communication strategies are doomed to fail. Effective communication must be tailored to the specific needs and circumstances of the community.
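The "structured data is key" insight above can be sketched with a minimal validator. The project itself enforced this with Pydantic models; plain-dict checking and hypothetical field names are used here for brevity:

```python
import json

# Hypothetical required fields for one persona's per-phase response; the
# project enforced a schema like this with Pydantic models instead.
REQUIRED_FIELDS = {"phase": str, "action": str, "reason": str}

def validate_step(raw: str) -> dict:
    # Parse the model's JSON reply and reject missing or mistyped fields,
    # rather than hoping the prompt alone keeps the output well-formed.
    step = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(step.get(field), expected_type):
            raise ValueError(f"missing or malformed field: {field!r}")
    return step

step = validate_step(
    '{"phase": "landfall", "action": "shelter in place", "reason": "no vehicle"}'
)
```

Rejecting malformed replies at the boundary means a single confused agent can't corrupt the simulation state, which matters far more at 50 concurrent agents than at one.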

What's next for Emergent

Immediate Roadmap (Next 3 Months)

Enhanced AI Capabilities:

  • Custom Persona Upload: Allow managers to upload their own demographic data to create even more realistic, location-specific personas.
  • Census Data Integration: Feed real-world census data directly into the model to generate statistically accurate community simulations that reflect actual population distributions, income levels, and demographic compositions.
  • Multilingual Communication: Expand language support through ElevenLabs integration, enabling personas to respond based on their native language and preferred communication modality (text, voice, visual aids), creating a more authentic representation of diverse communities.

Professional Validation:

  • Partner with emergency management professionals to review simulation accuracy, refine persona behaviors, and ensure the platform meets real-world operational standards.

Long-term Vision (Next Year)

  • Agent-to-Agent Communication: Implement persona-to-persona interactions, allowing community members to influence each other's decisions through social networks, family dynamics, and neighborhood relationships—mirroring how information and behavior actually spread during crises.
  • Multi-Disaster Simulation: Expand beyond hurricanes to include wildfires, earthquakes, floods, public health crises, and man-made disasters, each with disaster-specific behavioral models and response patterns.
  • The Standard for Emergency Training: Our ultimate goal is to make Emergent the global standard for emergency preparedness training, making our communities safer and more resilient through data-driven, human-centered crisis simulation.
