Matcha Vibe

Inspiration

Sen no Rikyū, the tea master who perfected the Japanese tea ceremony, captured its philosophy in the phrase 一期一会 (ichi-go ichi-e): one time, one meeting.

Life is about seizing every moment. Business pitches, lectures, and personal conversations are all fleeting—so make them count.

The Problem: Remote meetings lack presence and depth—listeners get distracted, engagement is low, and audience feedback is hard to track. If you cannot read your audience, your meetings fall short.

Inspired by the Japanese tea ceremony philosophy, we set out to enhance real-time engagement in virtual meetings:

The Solution: AI-driven avatars and immersive environments that make online meetings not just as good as in-person, but better. Matcha and meditation apps like Otsuka ekkomi have already brought premium meditative experiences into the digital realm. We asked ourselves: how can we take this further and bring meaningful, premium interactions online? Focusing on the educational and healthcare benefits of AI companionship and engagement metrics, we set out to make virtual meetings more dynamic, engaging, and memorable. The result is a set of real-time meeting analytics tools and a responsive, interactive experience we call Matcha Vibe: meetings that match your vibe.

Everyone holds Zoom meetings. We turn them into personalized experiences tailored to real-time participant feedback.

Why This Matters

  • Most communication is carried by non-verbal cues, which are hard to track online. We capture real-time engagement metrics to analyze each meeting participant’s mood, attention, and interaction levels, allowing businesses and educators to adjust dynamically.
  • Seamless group experiences using Zoom’s API—turning meetings into engaging, responsive environments that feel as immersive as real-world interactions.

What It Does

Real-Time Speech & Gesture Analysis – The AI detects sentiment, engagement levels, and facial expressions to understand participant reactions.
Personalized AI Presence – The avatar adapts dynamically to the meeting flow based on live analytics (sketched below).
From Mindfulness to Business Pitches – Whether it's a tea ceremony guiding relaxation or an AI-driven sales coach gauging audience attention, the system supports personalized interactions for any situation.
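
A minimal sketch of how live analytics could steer the avatar. The `EngagementSnapshot` fields, the thresholds, and the `choose_avatar_state` helper are hypothetical names we use for illustration, not the production implementation:

```python
from dataclasses import dataclass

# Hypothetical per-participant snapshot produced by the analytics layer.
@dataclass
class EngagementSnapshot:
    participant_id: str
    sentiment: float       # -1.0 (negative) .. 1.0 (positive)
    attention: float       # 0.0 (distracted) .. 1.0 (focused)
    speaking_ratio: float  # fraction of recent time spent speaking

def choose_avatar_state(snapshots: list[EngagementSnapshot]) -> str:
    """Map room-level engagement to a coarse avatar behavior (illustrative thresholds)."""
    if not snapshots:
        return "idle"
    avg_attention = sum(s.attention for s in snapshots) / len(snapshots)
    avg_sentiment = sum(s.sentiment for s in snapshots) / len(snapshots)
    if avg_attention < 0.4:
        return "re-engage"   # e.g. prompt a question or change the background
    if avg_sentiment < -0.3:
        return "calm"        # e.g. slow the pacing, guide a breathing exercise
    return "encourage"       # keep the current flow, reinforce positive momentum

# Example: two participants, both drifting off.
room = [
    EngagementSnapshot("alice", sentiment=0.2, attention=0.5, speaking_ratio=0.6),
    EngagementSnapshot("bob", sentiment=-0.1, attention=0.2, speaking_ratio=0.1),
]
print(choose_avatar_state(room))  # -> "re-engage"
```

In the app, the chosen state feeds the avatar's behavior and the background generator rather than just printing a label.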

By integrating AI-driven adaptability, Matcha Vibe transforms how we engage in virtual meetings, making online collaboration more immersive and impactful.


How We Built It

  • Leveraging Zoom’s API, we integrated real-time analytics to track user engagement, expressions, and speech sentiment.
  • AI-powered avatars were developed on Groq for fast inference, ensuring smooth and intelligent avatar behavior.
  • Dynamic backgrounds were created with Luma AI, adapting the meeting atmosphere in real time.
  • Facial expression tracking was implemented to recognize stress levels and engagement, providing immediate visual or verbal responses.
  • Groq’s multimodal capabilities powered seamless text, vision, and audio processing, making the AI avatar highly responsive across different forms of input.
  • Gemini 2.0 Flash handled sentiment analysis to interpret real-time user feedback (see the pipeline sketch after this list).
  • ElevenLabs provided AI-driven conversational speech and text-to-speech for seamless user interactions.
  • Luma AI transformed text, images, and video to create immersive meeting environments and interactive backgrounds that react to real-time participant metrics.
  • OpenAI APIs enhanced AI-generated conversation, sentiment-driven avatar responses, and dynamic adaptability.
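
To show how these pieces connect, here is a hedged sketch of one turn of the loop using the publicly available Python SDKs for Gemini (google-generativeai), Groq, and ElevenLabs. The model IDs, prompts, and `voice_id` are placeholders, and the real system also streams the audio back into Zoom and updates the Luma AI background, which is omitted here:

```python
import os

import google.generativeai as genai       # pip install google-generativeai
from groq import Groq                      # pip install groq
from elevenlabs.client import ElevenLabs   # pip install elevenlabs

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])
tts_client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

def run_turn(transcript_chunk: str) -> bytes:
    """One pass: sentiment (Gemini) -> avatar reply (Groq) -> speech (ElevenLabs)."""
    # 1. Sentiment analysis with Gemini 2.0 Flash.
    sentiment_model = genai.GenerativeModel("gemini-2.0-flash")
    sentiment = sentiment_model.generate_content(
        "Classify the speaker's sentiment as positive, neutral, or negative:\n"
        + transcript_chunk
    ).text.strip()

    # 2. Avatar response generated on Groq for low-latency inference (placeholder model).
    reply = groq_client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {"role": "system",
             "content": f"You are a calm meeting companion. Audience sentiment: {sentiment}."},
            {"role": "user", "content": transcript_chunk},
        ],
    ).choices[0].message.content

    # 3. Text-to-speech with ElevenLabs; join the streamed audio chunks.
    audio_chunks = tts_client.text_to_speech.convert(
        voice_id="YOUR_VOICE_ID",
        text=reply,
        model_id="eleven_multilingual_v2",
    )
    return b"".join(audio_chunks)
```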

Challenges We Ran Into

  • Balancing AI responsiveness with natural human interaction—we refined avatar expressions to feel intuitive rather than robotic.
  • Processing real-time engagement metrics efficiently—Groq allowed us to minimize latency while running AI analytics.
  • Ensuring accessibility and inclusivity—different cultures have different stress triggers and relaxation cues, requiring adaptive feedback mechanisms.
  • Optimizing multimodal AI coordination—seamless real-time text, vision, and audio processing presented computational challenges.
  • Integrating multiple AI models (Gemini, ElevenLabs, Groq) into a single pipeline while maintaining efficiency and reducing response lag.
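
On the last two points, the main lever was overlapping independent model calls instead of chaining them serially. A minimal sketch of that pattern, with placeholder coroutines standing in for the real Gemini and vision calls (illustrative, not our exact code):

```python
import asyncio
import time

# Placeholder coroutines simulating provider latency (sentiment, facial analysis).
async def analyze_sentiment(text: str) -> str:
    await asyncio.sleep(0.30)
    return "neutral"

async def analyze_expressions(frame_id: int) -> str:
    await asyncio.sleep(0.25)
    return "attentive"

async def one_pipeline_tick(text: str, frame_id: int) -> dict:
    # Sentiment and vision do not depend on each other, so overlap them;
    # only the avatar response (not shown) needs both results.
    sentiment, expression = await asyncio.gather(
        analyze_sentiment(text),
        analyze_expressions(frame_id),
    )
    return {"sentiment": sentiment, "expression": expression}

start = time.perf_counter()
print(asyncio.run(one_pipeline_tick("Thanks everyone for joining.", frame_id=42)))
print(f"elapsed: {time.perf_counter() - start:.2f}s")  # ~0.30s instead of ~0.55s serially
```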

Accomplishments That We're Proud Of

  • Created a fully interactive AI-driven avatar that dynamically reacts to user speech, emotions, and gestures in Zoom meetings.
  • Successfully integrated real-time sentiment and behavior analysis to adjust meeting atmosphere and enhance engagement.
  • Built an AI presence that goes beyond text and voice responses, setting a new standard for digital interaction.
  • Utilized Groq’s multimodal processing to enhance responsiveness across multiple sensory inputs.
  • Implemented Gemini’s sentiment analysis to personalize meeting engagement.
  • Leveraged ElevenLabs AI voice processing to make virtual meetings more interactive and human-like.
  • Enhanced real-time media interactions using Luma AI for seamless content adaptation.

By enhancing collaboration with AI-driven insights, Matcha Vibe exemplifies how Zoom’s tools can push the boundaries of digital interaction and connect people more effectively.


What’s Next for Matcha Vibe?

Expanding AI Emotion & Gesture Recognition – Detect stress, disengagement, and excitement with even greater accuracy.
Integrating Voice Analysis for Deeper Insights – Use pitch and pacing analysis to understand user confidence and engagement (see the sketch after this list).
Customizable AI Avatars for Different Use Cases – Expand beyond tea ceremonies to education, business, and therapy.
AI Coaching & Learning Tools – Guide users through public speaking, sales pitches, and mindfulness exercises using AI-driven feedback.
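
For the voice-analysis item above, one plausible starting point (an assumption on our part, not something in the current build) is extracting pitch and pacing features with the open-source librosa library:

```python
import librosa   # pip install librosa
import numpy as np

def voice_features(wav_path: str) -> dict:
    """Rough pitch and pacing features for a short speech clip (illustrative only)."""
    y, sr = librosa.load(wav_path, sr=16000, mono=True)

    # Fundamental frequency (pitch) track via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    return {
        "pitch_mean_hz": float(np.nanmean(f0)),     # overall pitch level
        "pitch_variance": float(np.nanvar(f0)),     # monotone vs. expressive delivery
        "voiced_ratio": float(np.mean(voiced_flag)) # crude pacing proxy: more pauses -> lower
    }

# A flat pitch track plus a low voiced ratio might flag hesitation or low confidence;
# mapping these features to "confidence" would need calibration against real meetings.
```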

Matcha Vibe is redefining virtual engagement, blending AI-driven analytics, personalized avatars, and real-time interactions to create truly immersive digital experiences. From tea ceremonies to boardrooms, we make online meetings more human.

Built With

elevenlabs, gemini, groq, luma-ai, openai, zoom
