TL;DR
We believe emergency services are broken. In 99% of countries, you can’t text 999/112 (in the UK you have to pre-register, in Germany you can only use fax??), you may not know the right number, you might not share the language, or you may be in danger if you speak. SOS Bridge lets you reach any emergency line worldwide without making a sound. All you have to do is tap pre-set buttons and type; we compress the details, call emergency services for you and stream a clear voice message to them in real time using Eleven Labs. When it’s the worst day of your life, you should only have to press one button.
Quick aside
Hey, we are Tomas and Cat, recent CS grads now working as software engineers who love doing hackathons together! We can promise you that nothing we've built today is hard-coded <3 <3 This project was built in 36 hours as part of the ElevenLabs × LFH Build Weekend!
Why we built it
Picture this: you’re hiding in a wardrobe while someone breaks into your flat. Your hands are shaking, your mouth is dry, and the last thing you can risk is noise. You try to text 999 - but the UK requires pre-registration. In most EU countries you’d be out of luck entirely, and in Germany the best you could do is… a fax machine.
We built SOS Bridge for every moment like that:
Silent communication in danger – Speak and you’re exposed. Type and stay hidden.
Language barriers – You might not speak the local language. We make sure your message still gets across.
Stress barriers – Panic or anxiety can choke a voice. Typing (or tapping icons) is easier.
Disabilities – Deaf, hard of hearing, speech-impaired users deserve the same response time as anyone else.
Wrong number confusion – Every country has its own code; sometimes each service has its own line. We look it up so you don’t have to.
How SOS Bridge works
- Open the web-app and start entering simple details.
- Silent chat. You type; we open a connection to the relevant emergency service.
- Natural-speech relay. Eleven Labs turns your typed message into lifelike speech and pipes it down a live Twilio voice call to 999/112 (or the local equivalent).
- Two-way communication. Dispatcher replies are transcribed and shown back to you as chat so you never have to speak out loud.
What we managed to build during the hackathon
- A working web UI with a simple, focused design
- A live Twilio phone connection that simulates calling emergency services (mocked with our own phone number)
- Two-way text communication between the user and the responder
- Real-time speech playback using Eleven Labs (reads out user messages over the call)
What still needs to be implemented
- Logic to automatically route the call to the correct emergency service depending on country.
- OpenAI integration to summarise key details and translate messages in different languages.
- A more flexible, interruptible flow - right now, the call behaves like a turn-based interaction. In a real situation, we’d want the user to be able to type something urgent at any time and have that message played immediately, even if the responder is speaking.
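One way to sketch that interruptible flow (plain TypeScript; the names are ours and this is not the deployed implementation): urgent messages jump the playback queue so the call loop speaks them before anything queued earlier.

```typescript
// Sketch of an interruptible playback queue: urgent messages jump the line.
interface OutgoingMessage {
  text: string;
  urgent: boolean;
}

class PlaybackQueue {
  private queue: OutgoingMessage[] = [];

  push(msg: OutgoingMessage): void {
    if (msg.urgent) {
      // Urgent messages go to the front, ahead of everything already queued.
      this.queue.unshift(msg);
    } else {
      this.queue.push(msg);
    }
  }

  // The call loop pops the next message to speak; an urgent push between
  // pops is therefore played before older, non-urgent messages.
  next(): OutgoingMessage | undefined {
    return this.queue.shift();
  }

  get pending(): number {
    return this.queue.length;
  }
}
```

The missing (and harder) half is cutting off audio that is already playing, which is what Conversational AI would have given us for free.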
Challenges we ran into
One of the biggest challenges was setting up the three-way communication between the user, Eleven Labs, and the responder.
Initially, we tried using Eleven Labs' Conversational AI, hoping it could handle a conversation with the emergency agent (via voice) and the person in distress (via text) at the same time. This turned out to be a niche use case: no matter how many workarounds or patches we tried, we could not get both "calls" to target the same agent. On reflection, we should have noticed that the system was built for simultaneous voice and text communication, but with only one recipient!
Eventually, we pivoted: instead of using Conversational AI, we used Eleven Labs' standard TTS to generate audio files and then streamed them into a live Twilio call. This worked, but it came at the cost of flexibility. We had to manually manage timing, queues, and playback, and lost the interruption handling and natural responsiveness that Conversational AI provides.
Technologies used
We used Bolt.new to quickly scaffold our app with our preferred stack:
- Next.js + TypeScript for the frontend
- Tailwind CSS for styling
- Twilio to handle phone calls and stream audio
- Eleven Labs to generate realistic voice messages from text
- Ngrok to tunnel public traffic to our local servers
Note:
The system can only call authorised phone numbers (in line with UK law), so we used our personal phone numbers to simulate calling emergency services (we don't actually want to call them while testing, of course!). That's why the public URL doesn't initiate a call, otherwise you'd be spamming our personal phones :D However, you can run it yourself locally by filling in the .env file with your own phone number!
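For reference, a local .env might look something like this. The variable names here are illustrative guesses at the shape, so match them to whatever the code actually reads (the numbers below are from the UK's reserved fictional range, not real lines):

```
# Illustrative variable names; match them to what the code actually reads.
TWILIO_ACCOUNT_SID=ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWILIO_AUTH_TOKEN=your_auth_token
TWILIO_FROM_NUMBER=+447700900000
EMERGENCY_NUMBER=+447700900001   # your own phone, NOT 999/112
ELEVENLABS_API_KEY=your_api_key
```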
Built With
- bolt.new
- elevenlabs
- next.js
- tailwind
- twilio
- typescript

