Inspiration

We wanted something that fits naturally into daily routines instead of adding “another screen” to manage. The mirror is the one place we already stand in front of every morning, so we asked: what if the mirror could quietly become your command center—without needing touch? That became EchoGlass: a normal-looking mirror that wakes into a dashboard, controlled by voice through Alexa, and enhanced with an AI-powered virtual try-on experience.

What it does

EchoGlass looks like a regular mirror, but behind the glass it can display:

  1. To-do list (add/remove/complete by voice)
  2. Schedule / reminders
  3. Live weather
  4. YouTube playback (hands-free viewing on the mirror)

It also has virtual try-on:

  1. Paste a clothing product link
  2. Take your photo
  3. Gemini generates an image of you wearing the outfit and shows it on the mirror
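
The first step of that pipeline is pulling a usable garment image out of the pasted link. A minimal sketch of that extraction, assuming the storefront exposes an Open Graph `og:image` meta tag (the function name and regexes here are illustrative, not our exact production code):

```typescript
// Pull the main product image URL out of a shopping page's HTML.
// Most storefronts expose the hero shot via the `og:image` meta tag,
// so we check that first and fall back to the first absolute <img> src.
function extractProductImage(html: string): string | null {
  // Prefer the Open Graph image: it is usually the clean product shot.
  const og = html.match(
    /<meta[^>]+property=["']og:image["'][^>]+content=["']([^"']+)["']/i
  );
  if (og) return og[1];

  // Fallback: first absolute image URL found on the page.
  const img = html.match(/<img[^>]+src=["'](https?:\/\/[^"']+)["']/i);
  return img ? img[1] : null;
}
```

The extracted URL and the user photo are then handed to Gemini for the composite.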

How we built it

  1. On-device display (BeagleY AI): The BeagleY AI board runs the system locally and drives the display behind the glass. It opens the dashboard in a browser, so the mirror acts as a dedicated kiosk screen.

  2. Frontend: A Next.js dashboard hosted on Vercel for fast UI iteration and easy deployment.

  3. Backend & data: Supabase stores todos and logs so the mirror UI stays in sync.

  4. Voice integration: An Alexa Skill (AWS Lambda) maps spoken intents (like “add milk to my list”) into secure API calls to our Vercel endpoints.

  5. Virtual try-on pipeline: We fetch the product image from the link, then pass it (plus the user photo) to Gemini to generate a try-on result.
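
The voice-to-API glue in step 4 can be sketched as a small pure helper on the Lambda side: it maps an Alexa intent and its slot value onto the HTTP request we send to the dashboard API. The endpoint URL, header name, and payload shape below are illustrative assumptions, not the exact production values:

```typescript
// Shape of the request the Lambda sends to the Vercel API.
interface TodoRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildTodoRequest(
  intent: "AddTodoIntent" | "CompleteTodoIntent" | "RemoveTodoIntent",
  item: string,
  sharedSecret: string
): TodoRequest {
  // Map Alexa intent names onto the dashboard's todo actions.
  const action = {
    AddTodoIntent: "add",
    CompleteTodoIntent: "complete",
    RemoveTodoIntent: "remove",
  }[intent];

  return {
    url: "https://echoglass.vercel.app/api/todos", // hypothetical endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The shared secret lets the API reject calls that didn't come from our Lambda.
      "x-echoglass-secret": sharedSecret,
    },
    body: JSON.stringify({ action, item }),
  };
}
```

Keeping this mapping pure (no network calls inside) made it easy to unit-test the intent routing separately from the actual fetch.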

Challenges we faced

  1. Alexa intent routing: Alexa often prefers built-in behaviors (like reminders), so we designed unique utterances and kept sessions open with reprompts.

  2. API security: We protected endpoints using a shared secret header and rate limiting to avoid random public requests.
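
The shared-secret check itself is a few lines on the API side. A minimal sketch, assuming Node's built-in `crypto` module (the header handling and function name are illustrative); hashing both values first gives `timingSafeEqual` equal-length buffers even when the presented secret has a different length than the real one:

```typescript
import { timingSafeEqual, createHash } from "crypto";

// Constant-time comparison of the presented header value against the
// real shared secret, so attackers can't probe the secret byte-by-byte
// via response timing.
function isAuthorized(
  presented: string | undefined,
  secret: string
): boolean {
  if (!presented) return false;
  // Hash both sides so the buffers are always the same length (32 bytes),
  // which timingSafeEqual requires.
  const a = createHash("sha256").update(presented).digest();
  const b = createHash("sha256").update(secret).digest();
  return timingSafeEqual(a, b);
}
```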

What we learned

EchoGlass taught us how to connect voice → cloud → database → on-device UI in a way that feels instant. We learned a lot about Alexa NLU design, secure webhook-style APIs, and building AI workflows where reliability matters as much as the “wow” factor.

Accomplishments that we're proud of

  1. Hands-free control that actually works: Alexa voice commands reliably add/remove/complete to-dos and trigger mirror actions, so the mirror doesn’t need to be a touchscreen.

  2. End-to-end sync (voice → cloud → mirror): Spoken commands flow from Alexa Skill → AWS Lambda → secured Vercel API → Supabase → the mirror dashboard, updating the UI in near real time.

  3. Virtual try-on from a shopping link: We created a pipeline that takes a product URL, pulls the clothing image, captures a user photo, and uses Gemini to generate a try-on preview shown directly on the mirror.

What's next for EchoGlass

  1. Better virtual try-on quality: Add pose/segmentation alignment (MediaPipe), improve garment extraction from product pages, and support multi-angle try-on or “fit” views.

  2. Deeper Alexa experience: More natural multi-turn conversations (e.g., “what time?” follow-ups), better error recovery, and clearer prompts to avoid built-in intent conflicts.

  3. Video calling built into the mirror: Add one-tap/voice-start video calls (e.g., Jitsi) so the mirror becomes a communication hub.
