Kopi: Smart AI-Powered Digital Twin for Everyone
Inspiration
We’ve all had that moment when a key teammate goes on vacation — or leaves the company — and suddenly no one knows how their framework works or where that crucial dataset lives. Work slows, meetings stall, and valuable time is lost.
That’s what inspired Kopi. We imagined a world where everyone has their own AI-powered digital twin — a version of themselves that remembers their work, understands their expertise, and can even connect with other people’s twins to collaborate and communicate.
With Kopi, your digital twin can join meetings on your behalf, explain your ideas, and retrieve the exact company knowledge someone needs to move forward. It’s like having every teammate available, all the time — sharing expertise, context, and continuity, even when they’re offline.
What It Does
Kopi creates an AI-driven digital twin that joins meetings, recalls past work, and responds intelligently using real company knowledge. Each twin can interact with others — sharing context and keeping projects moving even when people aren’t around.
Main capabilities
- Joins meetings automatically and listens like a human
- Answers questions using company data via RAG retrieval
- Summarizes conversations and automates follow-ups
- Connects with other digital twins for seamless collaboration
How We Built It
We built Kopi using modern real-time AI and communication tools; a rough wiring sketch follows the list below.
- Voice + Video: LiveKit Agents for interactive meeting participation
- Speech Recognition: AssemblyAI Universal Streaming STT
- LLM Intelligence: OpenAI GPT-4o for reasoning and response generation
- Text-to-Speech: Cartesia Sonic for natural voice output
- Knowledge Retrieval: Elasticsearch + VoyageAI embeddings for semantic RAG search
- Turn Detection: Multilingual turn detection with VAD to sense when a speaker has finished talking
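As an illustration of how these pieces fit together, here is a minimal sketch of a LiveKit Agents voice pipeline using the plugins named above. The exact plugin imports, constructor arguments, and instruction text are assumptions based on the public LiveKit Agents quickstart rather than our production code.

```python
from livekit import agents
from livekit.agents import Agent, AgentSession
from livekit.plugins import assemblyai, cartesia, openai, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel


async def entrypoint(ctx: agents.JobContext):
    # Connect the agent to the meeting room it was dispatched to.
    await ctx.connect()

    # Wire STT -> LLM -> TTS, plus VAD and multilingual turn detection.
    session = AgentSession(
        stt=assemblyai.STT(),
        llm=openai.LLM(model="gpt-4o"),
        tts=cartesia.TTS(),
        vad=silero.VAD.load(),
        turn_detection=MultilingualModel(),
    )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are Kopi, a digital twin joining this meeting."),
    )


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```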
When someone speaks in a meeting, Kopi performs a real-time RAG query, retrieves relevant information, and generates a context-aware reply — all in seconds.
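The retrieval step looks roughly like the sketch below: embed the question with VoyageAI, then run a kNN search in Elasticsearch. The index name, field names, and embedding model here are illustrative assumptions, not our actual schema.

```python
import os

import voyageai
from elasticsearch import Elasticsearch

# Illustrative names; the real index layout differs.
INDEX = "company-knowledge"
EMBEDDING_FIELD = "embedding"

vo = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])
es = Elasticsearch(os.environ["ELASTIC_URL"], api_key=os.environ["ELASTIC_API_KEY"])


def retrieve_context(question: str, k: int = 5) -> list[str]:
    """Embed the question and fetch the k most relevant knowledge chunks."""
    query_vector = vo.embed([question], model="voyage-3", input_type="query").embeddings[0]
    resp = es.search(
        index=INDEX,
        knn={
            "field": EMBEDDING_FIELD,
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 10 * k,
        },
    )
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]
```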
Challenges We Ran Into
- Making sure RAG retrieval completes before generating responses (see the sketch after this list)
- Balancing real-time voice flow with API latency
- Managing PyTorch dependencies for turn detection in Docker
- Getting the model to consistently use retrieved context rather than general knowledge
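The first and last of these came down to ordering and prompting: block reply generation until retrieval has returned, then constrain the model to the retrieved chunks. A simplified sketch, reusing the illustrative `retrieve_context` above, with `generate_reply` standing in as a hypothetical wrapper around the GPT-4o call:

```python
import asyncio


async def answer(question: str) -> str:
    # Await the knowledge lookup so generation can never start on an
    # empty context, which was the race we kept hitting early on.
    chunks = await asyncio.wait_for(
        asyncio.to_thread(retrieve_context, question),
        timeout=2.0,  # keep the voice loop responsive if retrieval is slow
    )

    # Steer the model toward retrieved context instead of general knowledge.
    prompt = (
        "Answer using ONLY the retrieved context below. "
        "If the context is insufficient, say so.\n\n"
        + "\n---\n".join(chunks)
        + f"\n\nQuestion: {question}"
    )
    return await generate_reply(prompt)  # hypothetical GPT-4o wrapper
```

The timeout reflects the latency balance: in practice a fallback path would handle the `TimeoutError` so a slow retrieval does not leave dead air in the call.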
Accomplishments That We're Proud Of
- Built a fully interactive AI meeting participant using LiveKit
- Integrated speech, reasoning, and knowledge retrieval in real time
- Designed scalable AI twins that can communicate and share context
- Created a foundation for context-aware, voice-responsive AI collaboration
What We Learned
We learned how to combine real-time communication with contextual AI reasoning — turning static knowledge bases into dynamic, conversational assistants. We also explored how AI twins can cooperate, exchanging data and decisions to enable smarter teamwork across departments and organizations.
What's Next for Kopi
Kopi’s future lies in universal accessibility — being wherever you are.
- Slack / Discord / Zoom: Kopi joins and responds directly in chats or calls
- Terminal Integration: Ask technical or project questions instantly
- Education: Digital twins for students, teachers, and staff to manage learning, grading, and tutoring
- Healthcare & Enterprise: Twins that assist professionals by surfacing key context at critical moments
Future features
- Emotion-adaptive voices for more natural interactions
- Multi-twin collaboration networks across companies
- Calendar-aware scheduling and real-time action execution
- Long-term personalized memory that evolves over time
Wherever you go, Kopi goes with you — your digital twin, always learning, always available.
Built With
- assemblyai
- claude
- deepgram
- drive
- elasticsearch
- elevenlabs
- figma
- fishaudio
- gmail
- googlesdk
- hedra
- letta
- livekit
- meet
- nextjs
- postgresql
- python
- voyageai
