Inspiration

Emergencies often go unnoticed or unreported until it's too late; seconds can make the difference between life and death. Whether it's a car crash at a busy intersection, someone collapsing from a heart attack, or a fire starting in a crowded space, timely response is critical. With the increasing presence of surveillance and personal video streams, we saw an opportunity to leverage artificial intelligence to automate emergency detection and dramatically reduce emergency response times. This inspired the creation of TED.ai (Total Emergency Detection AI).


What it does

TED.ai is an AI-powered emergency detection system that analyzes live or remote video feeds to identify critical incidents like fires, dangerous activity, car crashes, and medical emergencies in real time. By combining computer vision with intelligent alerting, it can automatically notify first responders to reduce response time and potentially save lives. It's designed to be fast, scalable, and deployable in public spaces, vehicles, or smart surveillance systems.

When an emergency is detected, TED.ai automatically triggers alerts to the appropriate first responders using configurable mechanisms such as:

  • Text messages
  • Email
  • Webhooks to dispatch systems or third-party apps

Or, most importantly, through our integrated dashboard!
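As a rough illustration of this fan-out, here is a minimal sketch of a configurable alert dispatcher. The endpoint URLs, payload fields, helper names, and environment variables are hypothetical, not TED.ai's actual interface:

```python
# Minimal sketch of a configurable alert dispatcher (all names hypothetical).
# Assumes SMTP credentials are provided via environment variables.
import os
import smtplib
from email.message import EmailMessage

import requests


def send_webhook(url: str, incident: dict) -> None:
    # POST the incident as JSON to a dispatch system or third-party app.
    requests.post(url, json=incident, timeout=5)


def send_email(to_addr: str, incident: dict) -> None:
    # Send a plain-text summary of the incident over SMTP.
    msg = EmailMessage()
    msg["Subject"] = f"TED.ai alert: {incident['type']}"
    msg["From"] = os.environ["ALERT_FROM"]
    msg["To"] = to_addr
    msg.set_content(f"{incident['type']} detected on camera {incident['camera_id']}")
    with smtplib.SMTP(os.environ["SMTP_HOST"]) as smtp:
        smtp.send_message(msg)


def dispatch(incident: dict, config: dict) -> None:
    # Fan the alert out to every channel enabled in the deployment config.
    for url in config.get("webhooks", []):
        send_webhook(url, incident)
    for addr in config.get("emails", []):
        send_email(addr, incident)


dispatch(
    {"type": "fire", "camera_id": "cam-12", "confidence": 0.91},
    {"webhooks": ["https://dispatch.example.com/ted"], "emails": ["ops@example.com"]},
)
```

Keeping each channel behind a small function like this is what makes the mechanisms configurable: a deployment just lists the webhooks and addresses it wants.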

The best part about TED.ai is that it works with any pre-existing camera system. All TED.ai needs is a video feed and a simple server. That means traffic light cameras, security cameras, and even your personal phone can be part of the TED.ai network!
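To make that concrete, a camera-side client could be as small as the sketch below, which grabs frames with OpenCV and POSTs them to an ingest endpoint. The server address, route, and camera_id parameter are assumptions for illustration:

```python
# Hypothetical camera-side client: read frames from any local camera
# (webcam, dashcam, RTSP stream) and push them to a TED.ai ingest server.
import time

import cv2
import requests

INGEST_URL = "http://ted-server.local:5000/ingest"  # illustrative address

cap = cv2.VideoCapture(0)  # 0 = default camera; an RTSP URL also works
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-encode each frame to keep the payload small over the wire.
        _, jpeg = cv2.imencode(".jpg", frame)
        requests.post(
            INGEST_URL,
            params={"camera_id": "phone-01"},
            data=jpeg.tobytes(),
            headers={"Content-Type": "image/jpeg"},
            timeout=2,
        )
        time.sleep(0.2)  # ~5 fps; emergency detection rarely needs full frame rate
finally:
    cap.release()
```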


How we built it

We built TED.ai as a multi-component pipeline:

  • Multi-Modal Backend Pipeline: Developed in Python using Flask, the primary backend server ingests video stream data from multiple camera feeds and distributes it to specialized processing servers. Each server runs a dedicated model, such as one for fire detection or medical emergencies, allowing simultaneous and efficient event detection (a minimal sketch of this fan-out appears after this list).
  • Computer Vision Models: In each distributed server, we fine-tuned and leveraged object detection and action recognition models (e.g., YOLOv5, Google's MobileNet SSD, and PoseNet) to identify visual patterns associated with emergencies. We ran some models with GPU-accelerated inference (via Torch/ONNX) to maintain high performance under real-time constraints. We chose MobileNet SSD in particular for the efficiency and portability of single-shot detection (see the inference sketch after this list).
  • Temporal Analysis: To prioritize urgent scenarios like medical emergencies, we implemented a split processing pipeline. This ensures efficient resource allocation: medical events require immediate detection, whereas incidents like fires or car crashes are less time-sensitive and can tolerate a slight processing delay.
  • Alert System: In addition to the dashboard that first responders use, we expose configurable endpoints that allow integration with SMS, email, or other emergency management platforms.
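To give a feel for the architecture, here is a hedged sketch of the ingest and fan-out server described above. The service URLs, route names, and priority ordering are illustrative, not our exact code:

```python
# Sketch of the primary ingest server: receive frames from camera clients
# and fan them out to specialized detector services (names illustrative).
import requests
from flask import Flask, request

app = Flask(__name__)

# One dedicated service per emergency type; medical is listed first so it
# is forwarded before the less time-sensitive detectors.
DETECTOR_SERVICES = {
    "medical": "http://medical-detector:8001/detect",
    "fire": "http://fire-detector:8002/detect",
    "crash": "http://crash-detector:8003/detect",
}


@app.route("/ingest", methods=["POST"])
def ingest():
    camera_id = request.args.get("camera_id", "unknown")
    frame = request.get_data()  # JPEG bytes from the camera client
    for name, url in DETECTOR_SERVICES.items():
        try:
            requests.post(url, params={"camera_id": camera_id},
                          data=frame, timeout=1)
        except requests.RequestException:
            pass  # a slow or down detector must not block the whole feed
    return {"status": "ok"}


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Each specialized server then runs its own model. A minimal detector-side inference loop with ONNX Runtime might look like this; the model file, input size, and preprocessing are assumptions rather than our exact setup:

```python
# Hypothetical detector-side inference with GPU-accelerated ONNX Runtime.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "fire_detector.onnx",  # assumed model file
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name


def detect(jpeg_bytes: bytes) -> np.ndarray:
    # Decode, resize, and normalize the frame to the model's input shape.
    frame = cv2.imdecode(np.frombuffer(jpeg_bytes, np.uint8), cv2.IMREAD_COLOR)
    blob = cv2.resize(frame, (640, 640)).astype(np.float32) / 255.0
    blob = blob.transpose(2, 0, 1)[None]  # HWC -> NCHW, batch of one
    return session.run(None, {input_name: blob})[0]
```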

Challenges we ran into

  • False Positives: Emergency-like motions (e.g., sitting down quickly) sometimes triggered alerts. We had to fine-tune thresholds and add temporal filtering (see the sketch after this list).
  • Model Latency: Real-time processing at scale required optimization; we experimented with batching and frame-sampling strategies, as described in the temporal analysis item above.
  • Limited Training Data: Some edge cases (e.g., seizures) had scarce publicly available data, which made generalization harder.
  • Alert Fatigue: Balancing sensitivity with relevance was tricky; we had to ensure the system stays actionable without becoming noisy.
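As an example of that temporal filtering, a simple debounce that only raises an alert once a detection persists across most of a sliding window will suppress momentary emergency-like motions. The window size and threshold below are illustrative:

```python
# Illustrative temporal filter: only alert once a detection has held for
# most of a sliding window, filtering out momentary emergency-like motions
# (e.g., someone sitting down quickly).
from collections import deque


class TemporalFilter:
    def __init__(self, window: int = 10, required: int = 8):
        self.history = deque(maxlen=window)  # most recent per-frame results
        self.required = required             # positives needed to alert

    def update(self, detected: bool) -> bool:
        # Returns True only when enough recent frames agree.
        self.history.append(detected)
        return sum(self.history) >= self.required


f = TemporalFilter(window=10, required=8)
for frame_detected in [True, False, True] + [True] * 8:
    if f.update(frame_detected):
        print("alert: detection persisted across the window")
        break
```

Tuning window and required per emergency type is one way to trade detection speed against false positives.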

Accomplishments that we're proud of

  • Built a functioning prototype that can detect a wide range of emergency scenarios in real time
  • Leveraged multiple ML models in a coherent and efficient pipeline
  • Created an efficient distributed server architecture and pipeline
  • Developed a responsive alert system that works with minimal configuration
  • Successfully simulated real-life emergencies and verified accurate detection

What we learned

  • Detecting certain emergencies is hard, especially when the frames fed into the model aren't high enough quality.
  • A good distributed server architecture matters: it keeps any one server from being overwhelmed and spreads the load evenly.
  • Building systems for real-world emergencies means prioritizing reliability, not just performance, given the stakes involved when these emergencies occur.
  • Integrating multi-modal detection—such as motion, object, and fire recognition—is challenging due to overlapping signals that can interfere with one another.

What's next for TED.ai

  • Expand training datasets to improve accuracy across diverse environments and cultures
  • Integrate audio detection for screams, alarms, or explosions
  • Build a mobile SDK to allow integration with phone or dashcam apps
  • Work with companies to integrate with security cameras or other camera feeds
  • Add explainable AI components to clarify why an emergency was flagged
  • Partner with public safety organizations for pilot testing and real-world validation
  • Improve privacy features like anonymized person detection and local-only processing

Built With

  • gemini
  • mobilenet
  • next.js
  • tensorflow