Inspiration
In a world that never stops updating, our attention spans never get a break. We scroll endlessly, consume headlines without absorbing them, and drown in information that never feels human. So we asked a simple, almost rebellious question:
“What if news didn’t need to be read at all — what if it could simply speak to you?”
That question became T-Flash, a calm, voice-first news companion that doesn’t demand your eyes, only your curiosity. Instead of doomscrolling, you listen. One tap, and T-Flash turns today’s top headlines into smooth, podcast-style updates that fit into your day, whether you’re walking to class, driving to work, or winding down at night.
What it does
T-Flash is an AI-powered audio news companion that delivers the world’s updates in a voice you can trust. Here’s what happens behind a single tap:
- Fetches real-time headlines from NewsAPI based on your selected category.
- Summarizes stories with Google Gemini, transforming long articles into crisp, natural-sounding insights.
- Generates lifelike narration through ElevenLabs' neural text-to-speech engine.
- Streams audio instantly from Supabase Storage: no buffering, no clutter.
The result? A fully automated pipeline that turns text headlines into humanlike audio news within seconds, all accessible from a clean native iOS app interface.
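The four steps above can be sketched as one composed async pipeline. This is an illustrative outline, not the project's actual code: each stage is injected so the real NewsAPI, Gemini, ElevenLabs, and Supabase calls can be plugged in, and all names here (`runFlashPipeline`, `stages`, etc.) are hypothetical.

```javascript
// Hypothetical sketch of the T-Flash pipeline as composed async stages.
// Each stage is injected, so real API calls (or stubs for testing)
// can be swapped in without changing the orchestration logic.
async function runFlashPipeline(category, stages) {
  const headlines = await stages.fetchHeadlines(category);   // NewsAPI
  const summaries = await Promise.all(
    headlines.map((h) => stages.summarize(h))                // Gemini
  );
  const audioClips = await Promise.all(
    summaries.map((s) => stages.narrate(s))                  // ElevenLabs
  );
  return stages.publish(category, audioClips);               // Supabase
}
```

Keeping the stages pluggable mirrors how n8n wires nodes together: each node does one job and passes its output downstream.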
How we built it
Automated Pipeline with n8n:
At the core of T-Flash lies an n8n-powered automation workflow that orchestrates the entire process — from trigger to audio output. The pipeline begins when a user selects or subscribes to a news category (e.g., tech, politics, sports). An n8n Webhook Trigger captures this request and instantly calls the NewsAPI, fetching the latest relevant headlines.
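The headline fetch the webhook kicks off can be sketched as a call to NewsAPI's `top-headlines` endpoint. This is a minimal sketch of what an n8n HTTP Request node would do, not the project's workflow itself; the `pageSize` value and field selection are assumptions.

```javascript
// Build the NewsAPI top-headlines URL for a category.
// The apiKey is supplied by the caller; pageSize of 5 is an assumption.
function buildHeadlinesUrl(category, apiKey, pageSize = 5) {
  const params = new URLSearchParams({
    category,                 // e.g. "technology", "sports"
    pageSize: String(pageSize),
    apiKey,
  });
  return `https://newsapi.org/v2/top-headlines?${params}`;
}

// Fetch and trim the response to the fields the pipeline needs.
async function fetchHeadlines(category, apiKey) {
  const res = await fetch(buildHeadlinesUrl(category, apiKey));
  const data = await res.json();
  return data.articles.map((a) => ({ title: a.title, url: a.url }));
}
```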
We leveraged Google Gemini not just for summarization but for contextual understanding: each article is parsed for key insights, tone, and sentiment, letting the system identify content categories dynamically and tailor user feeds.
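The summarization step can be sketched as a call to the Gemini `generateContent` REST API. The model name (`gemini-1.5-flash`) and the prompt wording are assumptions for illustration; the real workflow runs this inside an n8n node.

```javascript
// Build a Gemini generateContent request body asking for a
// spoken-style summary. The prompt wording is an assumption.
function buildSummaryRequest(articleText) {
  return {
    contents: [{
      parts: [{
        text: "Summarize this news article in 2-3 short, " +
              "natural-sounding sentences suitable for audio " +
              "narration:\n\n" + articleText,
      }],
    }],
  };
}

// Call the Gemini REST API; model name is an assumption.
async function summarize(articleText, apiKey) {
  const url = "https://generativelanguage.googleapis.com/v1beta/models/" +
              `gemini-1.5-flash:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSummaryRequest(articleText)),
  });
  const data = await res.json();
  return data.candidates[0].content.parts[0].text;
}
```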
Once the summaries are ready, ElevenLabs TTS converts them into studio-quality speech using neural voice synthesis. Its expressive tone and pacing make each flash feel like a brief news podcast rather than an automated voice. We optimized latency and file compression for smooth, real-time streaming on iOS.
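The narration step can be sketched against ElevenLabs' text-to-speech endpoint, which takes the summary text plus a voice ID and returns raw MP3 bytes. The model ID, voice settings, and environment variable name below are assumptions, not the project's actual configuration.

```javascript
// Build the fetch options for an ElevenLabs TTS call.
// Model ID and voice settings are illustrative assumptions.
function buildTtsRequest(text) {
  return {
    headers: {
      "xi-api-key": process.env.ELEVENLABS_API_KEY, // assumed env var
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text,
      model_id: "eleven_multilingual_v2",           // assumed model
      voice_settings: { stability: 0.5, similarity_boost: 0.75 },
    }),
  };
}

// POST the summary to the voice endpoint; the response is MP3 audio.
async function synthesize(text, voiceId) {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    { method: "POST", ...buildTtsRequest(text) }
  );
  return Buffer.from(await res.arrayBuffer());
}
```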
The resulting audio files are securely uploaded to Supabase Storage, where they can be streamed instantly in the Flutter-based app for both iOS and macOS. We built an elegant, minimal UI: one tap to play news flashes, clean, distraction-free, and dark-mode ready. The app fetches new audio directly from Supabase as soon as the n8n pipeline completes.
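Publishing a clip can be sketched with the supabase-js v2 Storage client: upload the MP3 bytes, then hand the app a public URL to stream. The bucket name `flashes` and the path scheme are assumptions; the client is created elsewhere and passed in.

```javascript
// Hypothetical path scheme: one folder per category, timestamped files.
function clipPath(category, timestampMs) {
  return `${category}/${timestampMs}.mp3`;
}

// Upload a generated clip and return the URL the app streams from.
// `supabase` is an already-created supabase-js v2 client; bucket name
// "flashes" is an assumption.
async function publishClip(supabase, category, mp3Buffer) {
  const path = clipPath(category, Date.now());
  const { error } = await supabase.storage
    .from("flashes")
    .upload(path, mp3Buffer, { contentType: "audio/mpeg", upsert: true });
  if (error) throw error;
  const { data } = supabase.storage.from("flashes").getPublicUrl(path);
  return data.publicUrl;
}
```

Serving a plain public URL keeps the client simple: the app just points its audio player at the file, with no extra streaming layer in between.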
Additionally, n8n schedules automated newsletters and audio digest emails, delivering category-based updates to users who subscribe, extending T-Flash beyond the app and into the inbox.
This entire pipeline runs autonomously, end to end: no manual refresh, no bottlenecks, just real-time audio news generation.
Challenges we ran into
- API authentication errors: juggling multiple keys (Gemini, ElevenLabs, NewsAPI) under tight rate limits.
- Latency optimization: reducing text-to-speech turnaround time to under 6 seconds per story.
- Voice-UX design: rewriting summaries so they sound good when spoken, not just when read.
- Accessibility balance: making the UI intuitive for all users, including those relying on voice output.
Accomplishments that we're proud of
Built a fully automated AI pipeline in one weekend while learning a new technology (n8n workflows): news input → AI summary → audio output, with no manual intervention.
Turned raw headlines into a voice experience, pushing the boundary of how people consume news beyond text.
Created something accessible: T-Flash isn't just for readers; it's for drivers, multitaskers, and visually impaired users who rely on sound.
Proved an idea can become an experience: instead of building a typical news app, we built a conversation between AI and the user.
What we learned
- Building responsive AI workflows with automation tools like n8n
- Integrating multiple APIs (LLM + news + voice) into a seamless pipeline
- Crafting summaries that are clear when spoken, not just readable
- Realizing how powerful audio-based UX can be for accessibility and engagement
What's next for T-Flash
- Personalized voice profiles and accents
- Multi-language news delivery
- Scheduled “Morning Flash” and “Evening Flash” digests
- Mobile widget for one-tap, hands-free playback
- Expanding newsletter into fully AI-personalized digests
Built With
- elevenlabs
- gemini
- javascript
- json
- n8n
- newsapi
- next.js
- supabase
