Stream

Technology, Information and Internet

Boulder, CO 18,081 followers

Stream powers Chat, AI Moderation, Activity Feeds, Video & Audio for billions of global end-users.

About us

Stream helps apps build real-time experiences that scale. Our chat, moderation, video, audio, and activity feed APIs and SDKs are powered by a global edge network and enterprise-grade infrastructure. Our platform empowers developers with the flexibility and scalability they need to easily build rich conversations and engaging communities.

Website
https://getstream.io
Industry
Technology, Information and Internet
Company size
51-200 employees
Headquarters
Boulder, CO
Type
Privately Held
Founded
2015
Specialties
Activity Streams, Newsfeeds, Cloud Hosting, and Big Data

Updates

  • Stream reposted this

    View organization page for Anam

    6,471 followers

    Anam is now integrated with Stream's Vision Agents. Vision Agents is Stream's open-source framework for building multimodal AI agents: agents that hear, see, and respond in real time, with orchestration, state management, and low-latency transport handled for you. Drop Anam in as the face layer and the agent joins the call with a live generated face that reacts to what it sees and hears, powered by Cara-3.

    What you can build:
    - A tutor reading student body language and eye contact
    - A sales coach catching the micro-signals a rep misses in the moment
    - An onboarding agent that sees confusion before it escalates

    Thanks to Neevash Ramdial and Brooke S. at Stream, and Sebastiaan Van Leuven and Anna Buckley at Anam.

    Docs: https://lnkd.in/eskyBf-9

  • View organization page for Stream

    New Partnership: Stream × Anam

    We’re partnering with Anam to bring real-time, human-like avatars to AI agents built with Vision Agents. With just two lines of Python, your agent can join a live Stream video call and react in real time.

    What makes Anam a match for Vision Agents:
    - Create bespoke avatars from a photo or prompt
    - Every frame is generated live from the agent’s voice
    - Ultra-low latency for natural pauses and turn-taking
    - Realism validated by independent benchmarks
    - Native lip sync across 50+ languages

    When agents feel present, entirely new product experiences become possible:
    - Coaches who listen to your tone of voice, not just what you say
    - Mock interviewers who stay believable for full sessions
    - Tutors who hold real conversations
    - Patient intake that feels less clinical
    - Daily AI companions with a consistent, familiar face

    Start Building:
    🔗 Anam: https://lnkd.in/embqqtKz
    🔗 Integration: https://lnkd.in/epYx4R53
    🔗 Cookbook: https://lnkd.in/eskyBf-9

  • View organization page for Stream

    The oldest debate in pizza, and now there's a voice agent willing to settle it. 🍍

    Max Kahan built a real-time voice agent using Inworld AI's Realtime API, running on the Vision Agents framework. Ask the question, get a (strongly held) opinion back in real time.

    It's a small demo with a bigger point: spinning up a production-ready, low-latency voice agent no longer takes weeks of plumbing. A few API calls, and you've got a real conversational experience.

    If you're exploring voice agents, this is a clean place to start:
    → Quickstart: https://lnkd.in/ea6FWsCh
    → Example code: https://lnkd.in/eVR7na8M

  • View organization page for Stream

    Creators are tired of building on rented land. Algorithm shifts, declining reach, unpredictable monetization—it’s a fragile foundation for a business. That’s exactly what Popup set out to fix.

    Stream Video helped Popup build a platform where creators:
    → Own their audience data
    → Monetize directly
    → Host branded live events at scale

    But here’s the part that stands out: they went from zero to a production-ready livestreaming product in just a few days.

    The result? Global live events with zero quality complaints. That’s when you know the infrastructure is doing its job.

    If you’re building anything with livestream video, this is worth a read: https://lnkd.in/ehJAcDSq

  • View organization page for Stream

    Building voice AI, but don't want to send your data to a cloud TTS provider?

    Check out our deep dive into the 6 best on-device, open-source text-to-speech models you can run privately — on laptops, mobile devices, even Raspberry Pi.

    Here's the lineup:
    1️⃣ VibeVoice (Microsoft) — multi-speaker, long-form audio, ~300ms first-chunk latency
    2️⃣ Qwen3-TTS (Alibaba) — voice cloning, design, and customization in 10 languages
    3️⃣ Neu TTS (Neuphonic) — enterprise-ready, on-device, with 50+ AI voices
    4️⃣ Pocket TTS (Kyutai) — 100M params, CPU-only, ~200ms latency, built into Vision Agents
    5️⃣ TADA TTS (Hume AI) — ~2x faster than most HuggingFace TTS models, great for long-form
    6️⃣ Kitten TTS (KittenML) — under 25MB, runs anywhere, perfect for IoT and wearables

    Each model is benchmarked against real-world criteria: naturalness, latency, privacy, language support, and customization. We also included working code samples integrating each into our open-source Vision Agents SDK.

    Whether you're building a healthcare app, an AI podcast, or a customer service agent, there's an on-device TTS model for your use case.

    📖 Read the full guide → https://lnkd.in/eF8YJhsp
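    The latency figures quoted for these models refer to time-to-first-audio-chunk. A minimal, self-contained sketch of how that metric can be measured for any streaming TTS interface; `fake_tts_stream` below is a stand-in generator for illustration, not one of the models above:

    ```python
    import time

    def first_chunk_latency(stream_fn):
        """Seconds from the request until a streaming TTS yields its first audio chunk."""
        start = time.perf_counter()
        for _chunk in stream_fn():
            return time.perf_counter() - start
        raise RuntimeError("stream produced no audio")

    def fake_tts_stream():
        """Stand-in for a real streaming TTS call: sleep briefly, then yield one chunk."""
        time.sleep(0.05)          # simulate ~50ms of model warm-up before first audio
        yield b"\x00" * 3200      # one 100ms chunk of 16-bit, 16kHz mono PCM

    latency = first_chunk_latency(fake_tts_stream)
    ```

    Swapping `fake_tts_stream` for a real model's streaming call gives a directly comparable first-chunk number on your own hardware.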

  • View organization page for Stream

    Most dating apps don't fail at matching. They fail at everything after.

    The apps that actually stick around invest in the boring stuff:
    ❤️ Chat that doesn't lag
    ❤️ Video and voice (to stop hiding behind curated photos)
    ❤️ Moderation that makes users feel safe enough to stay
    ❤️ Matching that's thoughtful instead of endless

    Swipes are easy. Getting someone to come back the next day is the hard part.

    If you're building in this space, worth a read: https://lnkd.in/gXfxwyTi

  • Stream reposted this

    🔧 We're working with Stream to bring Tencent RTC to Vision Agents — an open-source framework for building real-time voice & vision AI agents.

    Early access is now open for developers who want to test low-latency edge transport in their AI pipelines. It's an early access build — we're actively improving it and want your feedback.

    Try it now 👉🏻 https://lnkd.in/guzhkDEX

  • View organization page for Stream

    Shift workers shouldn’t need WhatsApp to do their jobs.

    Deputy built messaging directly into their platform—with privacy, scheduling, and scale in mind. Think:
    → No personal phone numbers
    → Message only people on shift
    → Auto-grouping by role + location

    Not another Slack clone—something purpose-built for shift teams.

    Full customer story: https://lnkd.in/e-4xgcyt

  • View organization page for Stream

    We just released Vision Agents v0.5.0 👀

    What’s new:
    → Faster TTS with lower latency (Deepgram via WebSockets)
    → Better stability with improved memory + connection management
    → Anam avatar support for real-time, synchronized video agents
    → LocalEdge for running agents directly on your machine
    → Helm charts for Kubernetes deployment
    → Expanded plugins (AssemblyAI, HuggingFace, Anthropic, AWS, more)

    Plus new examples for things like:
    → real-time coaching agents
    → video moderation pipelines
    → meeting assistants with speaker awareness

    Read the full release → https://lnkd.in/ekd5G4et
