Inspiration

Millions of people worldwide face moments when communication isn’t just important—it’s a matter of survival. Whether due to disability, trauma, or immediate danger, such as domestic abuse or abduction, many individuals are unable to speak or interact openly when they need help the most. Yet, most communication tools available today require visible interaction—typing, speaking, tapping—which can be unsafe or impossible in critical situations. That’s where the inspiration for WhispersLink was born. I asked myself: What if there was a way to speak without words… to call for help without making a sound?

Driven by empathy and guided by research, I set out to create a solution that empowers silent communication in moments of crisis. WhispersLink is more than an app—it’s a lifeline for those whose voices are too often unheard.

What it does

WhispersLink is a life-saving mobile app designed to help people in danger, or those who are non-verbal, communicate silently and discreetly. It enables users to send emergency alerts through subtle micro-gestures, custom touch patterns, or even eye movement, without drawing attention. The app operates through a hidden interface on smartphones and can also connect to wearable devices for faster, more covert activation. Whether someone is facing domestic abuse, a medical emergency, or is unable to speak, WhispersLink offers a safe and silent way to call for help when speaking up isn't an option.
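As an illustration only, here is a minimal TypeScript sketch of how a covert touch pattern could work: a burst of taps on a hidden region of the screen, all landing inside a short window, fires a silent alert. The names (onCovertTap, sendEmergencyAlert) and the thresholds are hypothetical, not WhispersLink's actual API.

```typescript
// Hypothetical covert-trigger sketch: five taps within two seconds fire
// a silent alert. Thresholds and function names are illustrative only.
const TAP_COUNT = 5;    // taps required to trigger
const WINDOW_MS = 2000; // all taps must land inside this window

let tapTimes: number[] = [];

function sendEmergencyAlert(): void {
  // In the real app this would call the backend; here we just log.
  console.log('Silent emergency alert dispatched');
}

export function onCovertTap(now: number = Date.now()): void {
  // Keep only taps that are still inside the detection window.
  tapTimes = tapTimes.filter((t) => now - t < WINDOW_MS);
  tapTimes.push(now);

  if (tapTimes.length >= TAP_COUNT) {
    tapTimes = []; // reset so one gesture fires exactly one alert
    sendEmergencyAlert();
  }
}
```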

How we built it

We built WhispersLink using a modular architecture that combines React Native, Google's Agent Development Kit (ADK), and Firebase to create a real-time, AI-powered emergency communication tool for non-verbal individuals and vulnerable users.

✳️ Frontend

Developed with React Native for cross-platform support on iOS and Android. Integrated Bluetooth Low Energy (BLE) for biometric sensor data such as heart rate, temperature, and eye movement. Used the camera and gesture recognition to detect gaze direction and possible distress.
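As a concrete example of the BLE path, here is a hedged sketch using the react-native-ble-plx library and the standard Bluetooth Heart Rate profile (service 0x180D, characteristic 0x2A37). Permission handling and error recovery are elided, and watchHeartRate is an illustrative name rather than our exact implementation.

```typescript
// Sketch: subscribe to a BLE heart-rate monitor with react-native-ble-plx.
import { BleManager, Device } from 'react-native-ble-plx';
import { Buffer } from 'buffer'; // React Native needs the buffer polyfill

const HEART_RATE_SERVICE = '0000180d-0000-1000-8000-00805f9b34fb';
const HEART_RATE_MEASUREMENT = '00002a37-0000-1000-8000-00805f9b34fb';

const manager = new BleManager();

export function watchHeartRate(onBpm: (bpm: number) => void): void {
  // Scan only for devices advertising the Heart Rate service.
  manager.startDeviceScan([HEART_RATE_SERVICE], null, async (error, device) => {
    if (error || !device) return;
    manager.stopDeviceScan();

    const connected: Device = await device.connect();
    await connected.discoverAllServicesAndCharacteristics();

    connected.monitorCharacteristicForService(
      HEART_RATE_SERVICE,
      HEART_RATE_MEASUREMENT,
      (err, characteristic) => {
        if (err || !characteristic?.value) return;
        const bytes = Buffer.from(characteristic.value, 'base64');
        // The flags byte's lowest bit selects 8- vs 16-bit heart-rate format.
        const bpm = bytes[0] & 0x01 ? bytes.readUInt16LE(1) : bytes[1];
        onBpm(bpm);
      }
    );
  });
}
```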

🧠 AI Integration

Leveraged the Google Agent Development Kit to create a conversational agent that interprets context from the user (including sensor data and behavioral cues) and generates meaningful emergency messages using Google Gemini or PaLM-based generative models. The agent operates either in the cloud or locally (depending on signal strength), ensuring uninterrupted communication even during network failures. A hedged sketch of the message-generation step appears after this section.

☁️ Backend

A lightweight Node.js + Express backend handles emergency trigger events, notification dispatching, and context logging. We used the Firebase Admin SDK for real-time alerts, Firestore for storing user profiles, and Cloud Messaging for sending push notifications; a sketch of the trigger endpoint also follows below.

🔐 Security & Privacy

Sensitive keys (e.g., service accounts) are stored in environment variables and kept out of version control. All biometric and emergency data are encrypted in transit and at rest. (We did accidentally push a service-account key to GitHub early on; see Challenges below.)

🧪 Testing & Deployment

Used the Xcode Simulator and Android Emulator for iterative development. Final builds would have been deployed to TestFlight and Google Play Internal Testing for field testing with trusted users; however, due to time constraints and scope, we were unable to accomplish this.
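To make the message-generation step concrete, here is a minimal sketch using the @google/generative-ai Node client. In the app this logic runs inside the ADK agent; the function name, sensor fields, and prompt wording below are illustrative assumptions.

```typescript
// Sketch: turn raw sensor context into a short emergency message via Gemini.
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

interface SensorContext {
  heartRateBpm: number;
  gaze: string;      // e.g. 'sustained-left' from the gesture recognizer
  location?: string; // optional, if the user has shared it
}

export async function buildEmergencyMessage(ctx: SensorContext): Promise<string> {
  // Summarize the readings so a responder gets one actionable sentence.
  const prompt =
    `Compose a one-sentence emergency alert for a non-verbal user. ` +
    `Heart rate: ${ctx.heartRateBpm} bpm. Gaze signal: ${ctx.gaze}. ` +
    `Location: ${ctx.location ?? 'unknown'}. Be factual and urgent.`;

  const result = await model.generateContent(prompt);
  return result.response.text();
}
```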
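And here is a hedged sketch of the backend trigger path: an Express endpoint that logs the event to Firestore and pushes a notification through Cloud Messaging. The route shape, field names, and collection names are illustrative, not our production schema.

```typescript
// Sketch: emergency-trigger endpoint with Express + the Firebase Admin SDK.
import express from 'express';
import admin from 'firebase-admin';

admin.initializeApp(); // credentials come from GOOGLE_APPLICATION_CREDENTIALS

const app = express();
app.use(express.json());

app.post('/emergency', async (req, res) => {
  const { userId, message, deviceToken } = req.body;

  // Log the event for the user's emergency history.
  await admin.firestore().collection('alerts').add({
    userId,
    message,
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
  });

  // Push the alert to the trusted contact's device in real time.
  await admin.messaging().send({
    token: deviceToken,
    notification: { title: 'WhispersLink alert', body: message },
  });

  res.status(201).json({ ok: true });
});

app.listen(3000);
```

The frontend trigger (for example, the tap-pattern detector shown earlier) only has to POST to this endpoint, which keeps the on-device logic small and covert.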

Challenges we encountered

One of the biggest challenges we faced was testing with real devices, particularly integrating hardware such as cameras, biometric sensors, and phones. Simulating real-world emergency scenarios, such as detecting eye gaze, capturing temperature data, or simulating distress signals, was difficult without consistent access to physical devices.

We also underestimated the project's scope. As we explored the potential of the Agent Development Kit and the power of generative AI, the project's ambitions grew beyond what we could fully implement within the hackathon timeframe. This led to tough decisions about which features to prioritize. On top of that, we accidentally pushed our service-account key to GitHub, and scrubbing it from the repository's history proved harder than expected (more on that under What we learned).

In the end, we couldn’t include everything we envisioned in our final submission, but we’re proud of the functional core we were able to build and the foundation we've laid for future development.

Accomplishments that we're proud of

Despite the challenges, we're incredibly proud of what we accomplished during this hackathon:

✅ Built a working prototype of WhispersLink — a voice-free, intelligent emergency alert system that can assist users in distress using AI and biometric signals.

🤖 Successfully integrated Google’s Agent Development Kit, allowing us to create a context-aware agent capable of interpreting sensor input and generating real-time, empathetic emergency messages.

📱 Developed a cross-platform React Native app that communicates with biometric data sources and camera inputs, paving the way for accessible emergency communication.

🔐 Handled sensitive data responsibly: after an early slip with a leaked key, we adopted secure key-management practices and ensured that biometric data is encrypted and protected.

🚨 Enabled real-time emergency triggers through a functional backend and Firebase integration — proving the concept that an AI agent can assist non-verbal users in urgent situations.

💡 Pushed beyond a basic chatbot — we created a system that understands context, processes real-world input, and acts proactively.

What we learned

🤖 How to build with Google's Agent Development Kit (ADK) — We learned how to design, deploy, and fine-tune an AI agent that integrates generative capabilities and responds to real-world inputs.

📲 The complexity of real-time hardware integration — Working with cameras, BLE sensors, and device APIs taught us how nuanced and delicate hardware-software interaction can be, especially across iOS and Android.

🧩 The importance of simplifying scope early — We learned that ambition needs to be balanced with execution. Setting realistic, achievable goals upfront is key, especially when working with experimental tools and tight timelines.

🔐 Best practices for security and secrets management — After accidentally pushing a sensitive key, we quickly learned how to use .gitignore, .env, and Git history cleanup tools like BFG and git filter-repo to protect user data and credentials.
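For anyone facing the same situation, this is roughly the recovery path we learned. The leaked file name below is a placeholder, and rotating the exposed key matters more than rewriting history.

```bash
# 1. Keep secrets out of future commits.
echo "service-account.json" >> .gitignore
echo ".env" >> .gitignore

# 2. Rewrite history so no commit still contains the leaked file
#    (git filter-repo is a separate install: pip install git-filter-repo).
git filter-repo --invert-paths --path service-account.json

# 3. filter-repo drops the origin remote as a safety measure; re-add it
#    and force-push the rewritten history.
git remote add origin <repo-url>
git push --force origin main

# 4. Most importantly: revoke and rotate the exposed key in the cloud
#    console. Rewriting Git history alone does not un-leak a credential.
```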

🌐 Cross-platform deployment and debugging skills — Building and testing a full-stack mobile app (with AI, backend, and sensor data) across multiple platforms and environments was a crash course in system-level thinking.

What's next for WhispersLink

📦 On-device AI for offline emergencies

We'll integrate lightweight TensorFlow Lite models so WhispersLink can detect emergencies even without an internet connection — critical for remote or disaster-stricken areas.

🧬 Advanced biometric signal processing

We're expanding beyond heart rate and gaze to include temperature, pupil dilation, and motion patterns, enabling more accurate, passive distress detection.

🌐 Multilingual and accessible agent support

Our AI agent will support multiple languages and regional dialects, and include text-to-speech, large-font, and visual-only interfaces for differently-abled users.

🚑 Live emergency response integration

We plan to connect WhispersLink to verified responders or emergency dispatch systems (e.g., via Twilio, RapidSOS, or national 911 APIs) to enable real-world help with a single gesture or gaze.

🔐 HIPAA-ready security & data anonymization

We aim to make WhispersLink compliant with health-data regulations while empowering users to maintain their privacy and control.

🤝 Partnerships with schools, hospitals & care centers

We'll explore pilot programs with institutions that serve vulnerable individuals, particularly people with non-verbal conditions, autistic children, and the elderly.

WhispersLink is more than a project — it’s the beginning of a life-saving communication platform powered by AI. We’re excited to continue building it beyond the hackathon.

Built With

React Native, Node.js, Express, Firebase (Firestore, Cloud Messaging), Google Agent Development Kit (ADK), Gemini, Bluetooth Low Energy (BLE)