Inspiration

The idea for Rescue Assistant came from a simple but uncomfortable realization: in an emergency, most apps expect you to think clearly, type accurately, and navigate menus, exactly when you’re least able to do so. We were inspired by real-life situations where people needed help immediately but were overwhelmed, panicked, or physically unable to interact with their phones in conventional ways. We wanted to build something that reduces that friction to the bare minimum: press, speak, and get help.

What it does

Rescue Assistant is a voice-first emergency assistant designed for high-stress situations. A user presses and holds the microphone, describes what’s happening in their own words, and the app transcribes their speech using real cloud-based speech recognition. The transcript is then analyzed by an AI model that responds with calm, concise, and actionable guidance. The app also stores emergency-related data locally and allows escalation through an SOS call to a saved emergency contact.
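As a rough illustration, the press-and-hold interaction can be wired up with a plain Flutter GestureDetector. This is a minimal sketch: the `startRecording` and `stopAndTranscribe` callbacks are hypothetical hooks into the app's audio layer, not its actual API.

```dart
import 'package:flutter/material.dart';

/// Press-and-hold microphone button: recording runs only while the
/// finger stays down, mirroring familiar voice-message UX.
/// `startRecording` and `stopAndTranscribe` are hypothetical hooks
/// into the audio/transcription layer.
class HoldToTalkButton extends StatelessWidget {
  const HoldToTalkButton({
    super.key,
    required this.startRecording,
    required this.stopAndTranscribe,
  });

  final VoidCallback startRecording;
  final VoidCallback stopAndTranscribe;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      onLongPressStart: (_) => startRecording(),   // finger down: start capture
      onLongPressEnd: (_) => stopAndTranscribe(),  // finger up: send audio for transcription
      child: const CircleAvatar(
        radius: 48,
        child: Icon(Icons.mic, size: 40),
      ),
    );
  }
}
```

Keeping the gesture surface large and the interaction single-step is what makes the flow usable under stress.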

Rescue Assistant is designed to reduce friction in high-stress moments by enabling hands-free interaction and delivering AI-generated responses through a natural voice interface.
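For the SOS escalation mentioned above, a minimal sketch might use the url_launcher plugin to open the dialer with the saved contact's number (the plugin choice and function name are illustrative; error handling is omitted):

```dart
import 'package:url_launcher/url_launcher.dart';

/// Opens the phone dialer with the saved emergency contact's number.
/// In the real app, `contactNumber` would come from the locally
/// stored emergency contact.
Future<bool> triggerSosCall(String contactNumber) async {
  final uri = Uri(scheme: 'tel', path: contactNumber);
  if (await canLaunchUrl(uri)) {
    return launchUrl(uri);
  }
  return false;
}
```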

How we built it

We built Rescue Assistant as a Flutter mobile application with a strong focus on reliability and simplicity. Audio is recorded directly on the device and processed using Google Cloud Speech-to-Text to ensure accurate, real-world transcription. The resulting text is sent to an AI reasoning model to generate context-aware emergency guidance. Local persistence is handled using Hive, allowing user profiles, emergency contacts, and interaction history to remain available even across sessions. The interaction model intentionally mirrors familiar “press-and-hold” voice messaging patterns to make it intuitive under stress.
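A simplified version of the transcription call looks roughly like the sketch below. It assumes an API key and 16 kHz LINEAR16 audio from the recorder; the production code handles authentication, codecs, and errors more carefully.

```dart
import 'dart:convert';
import 'dart:io';
import 'package:http/http.dart' as http;

/// Sends a recorded audio file to Google Cloud Speech-to-Text and
/// returns the top transcript, or null if nothing was recognized.
Future<String?> transcribe(File audioFile, String apiKey) async {
  final audioBytes = await audioFile.readAsBytes();

  final response = await http.post(
    Uri.parse('https://speech.googleapis.com/v1/speech:recognize?key=$apiKey'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({
      'config': {
        'encoding': 'LINEAR16',    // raw 16-bit PCM from the recorder (assumed)
        'sampleRateHertz': 16000,  // assumed recorder sample rate
        'languageCode': 'en-US',
      },
      'audio': {'content': base64Encode(audioBytes)},
    }),
  );

  if (response.statusCode != 200) return null;
  final results = jsonDecode(response.body)['results'] as List?;
  if (results == null || results.isEmpty) return null;
  return results.first['alternatives'][0]['transcript'] as String;
}
```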

To complete the conversational experience, responses are converted back into natural-sounding speech using ElevenLabs, allowing users to receive guidance audibly rather than relying only on text.
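The text-to-speech step boils down to a single ElevenLabs request. Here's a minimal sketch, where `voiceId` and the model choice are placeholders for whatever is configured in the ElevenLabs dashboard:

```dart
import 'dart:convert';
import 'dart:typed_data';
import 'package:http/http.dart' as http;

/// Converts the assistant's reply into audio via ElevenLabs and
/// returns the raw MP3 bytes, or null on failure.
Future<Uint8List?> speak(String text, String apiKey, String voiceId) async {
  final response = await http.post(
    Uri.parse('https://api.elevenlabs.io/v1/text-to-speech/$voiceId'),
    headers: {
      'xi-api-key': apiKey,
      'Content-Type': 'application/json',
    },
    body: jsonEncode({
      'text': text,
      'model_id': 'eleven_multilingual_v2', // model choice is an assumption
    }),
  );
  return response.statusCode == 200 ? response.bodyBytes : null;
}
```

The returned MP3 bytes can then be handed to any Flutter audio playback plugin so the guidance is spoken aloud.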

Challenges we ran into

One of the biggest challenges was ensuring the voice pipeline was fully real and not simulated. This required careful handling of audio codecs, permissions, and cloud authentication. Integrating multiple services under time pressure also exposed SDK and compatibility issues that had to be resolved quickly. Balancing technical correctness with a clean, understandable user experience for emergency scenarios was another constant challenge.

Accomplishments that we're proud of

We are proud that Rescue Assistant goes beyond a conceptual demo. It uses real speech recognition, real AI reasoning, and real local persistence: no stubs or placeholders. A judge can pick up the app, speak to it naturally, and get meaningful results. Achieving an end-to-end, production-aligned pipeline within a hackathon timeframe is a major accomplishment.

What we learned

This project reinforced how critical simplicity and reliability are when building for emergencies. We learned a lot about mobile audio processing, cloud-based AI integration, and the importance of designing for users under stress rather than ideal conditions. Most importantly, it highlighted how AI can be most impactful when it quietly supports humans instead of overwhelming them.

What's next for Rescue Assistant

Future plans include improving offline resilience, adding multilingual support, expanding SOS options beyond phone calls, and tailoring responses to specific regions and emergency types. The long-term goal is to evolve Rescue Assistant into a dependable companion that people can trust when they need help the most.

Built With

Flutter, Dart, Google Cloud Speech-to-Text, ElevenLabs, Hive
