Inspiration
Live conversations rely heavily on unspoken signals like tone of voice, facial expressions, gestures, and timing. For many autistic adults, interpreting these cues in real time can be overwhelming and cognitively demanding, especially when signals conflict with spoken words.
Most existing tools address social communication either before interactions (through training) or after (through analysis), leaving the hardest moment unsupported: the conversation itself. We wanted to explore how immersive technology and AI could provide real-time awareness without interrupting the interaction or directing behavior.
What it does
ClearCue is a real-time social cue awareness app built on the Raven Resonance smart glasses platform. It uses a cloud-based multimodal AI model to analyze audio and visual input during live conversations.
The system detects patterns such as emotional tone shifts, facial expression changes, and conversational pacing, and surfaces these cues in a calm, non-instructional way. ClearCue does not coach behavior, suggest responses, or evaluate performance. Users decide whether to engage with the cues, adjust their behavior, or ignore them entirely.
ClearCue also supports optional, temporary post-conversation summaries that users can review or clear at any time.
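To make the cue model concrete, here is a minimal Python sketch of how cue events and a clearable post-conversation summary might be represented. The names (`CueType`, `CueEvent`, `ConversationSummary`) and fields are illustrative assumptions, not ClearCue's actual data model.

```python
# Illustrative sketch only: descriptive cue events plus a temporary summary
# the user can clear at any time. Names and fields are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto


class CueType(Enum):
    TONE_SHIFT = auto()         # emotional tone of voice changed
    EXPRESSION_CHANGE = auto()  # facial expression changed
    PACING = auto()             # conversational pacing (long pause, overlap)


@dataclass
class CueEvent:
    """A single observed cue: descriptive only, never a suggested response."""
    cue_type: CueType
    description: str            # e.g. "tone became quieter", not "speak up"
    timestamp: datetime = field(default_factory=datetime.now)


@dataclass
class ConversationSummary:
    """Optional, temporary post-conversation record the user can review or clear."""
    events: list[CueEvent] = field(default_factory=list)

    def clear(self) -> None:
        self.events.clear()
```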
How we built it
We designed ClearCue as a user-facing app running on smart glasses, using real-time audio and video streams as input to a cloud-based AI model. The model processes multimodal signals to infer conversational context, which is then translated into subtle visual indicators on the glasses.
The system was designed with a strong emphasis on low cognitive load, hands-free interaction, and user control. We intentionally avoided instructional or prescriptive outputs, focusing instead on awareness and clarity.
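As a rough illustration of this pipeline and the non-instructional constraint, the sketch below sends one video frame and a short transcript snippet to a multimodal model and asks for a single neutral, descriptive sentence. The prompt wording, the `gpt-4o` model choice, and the `analyze_moment` helper are assumptions added for illustration; only the use of Python and the OpenAI API reflects our actual stack.

```python
# Hedged sketch of the analysis step: one video frame plus a short audio
# transcript go to a multimodal model, which returns a brief, descriptive
# (non-instructional) cue. Prompt and model choice are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You observe a live conversation. Describe any shift in emotional tone, "
    "facial expression, or pacing in one short, neutral sentence. "
    "Never give advice, instructions, or evaluations of the user."
)


def analyze_moment(frame_jpeg: bytes, transcript_snippet: str) -> str:
    """Return a short descriptive cue for one moment of the conversation."""
    frame_b64 = base64.b64encode(frame_jpeg).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": f"Transcript: {transcript_snippet}"},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"},
                    },
                ],
            },
        ],
        max_tokens=40,
    )
    return (response.choices[0].message.content or "").strip()
```

On the glasses, the returned sentence is rendered as a subtle visual indicator, not presented as text the user is expected to act on.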
Challenges we ran into
One of our main challenges was building on the Raven Resonance platform without prior experience or access to physical hardware. Our team had never used Raven before, and we did not have smart glasses available during the hackathon.
To work around this, we relied on laptop-based simulations to prototype and validate the core experience. This required us to think carefully about how real-time audio and video input would translate to a wearable context, and to design UI and interaction patterns that would remain calm, glanceable, and non-intrusive once deployed on actual hardware.
Working without hardware also meant we had to be intentional about latency assumptions, user attention, and visual density—designing for the constraints of smart glasses even while testing in a simulated environment.
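The sketch below gives a rough sense of that laptop simulation, assuming OpenCV for webcam capture: the webcam stands in for the glasses camera, and a single line of overlay text stands in for the glanceable indicator. It reuses the hypothetical `analyze_moment` helper from the earlier sketch, and the three-second sampling interval is an assumed latency budget, not a measured one.

```python
# Laptop simulation sketch: webcam as a stand-in for the glasses camera,
# one low-contrast overlay line as a stand-in for the glanceable HUD cue.
# analyze_moment() is the hypothetical helper from the earlier sketch.
import time

import cv2

cap = cv2.VideoCapture(0)
last_cue, last_sample = "", 0.0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    now = time.time()
    if now - last_sample > 3.0:  # sample sparingly to respect latency and cost
        ok_jpg, jpg = cv2.imencode(".jpg", frame)
        if ok_jpg:
            # Empty transcript is a placeholder; a real run would pass speech-to-text output.
            last_cue = analyze_moment(jpg.tobytes(), transcript_snippet="")
        last_sample = now

    # One short, low-contrast line as a proxy for a calm, glanceable cue.
    cv2.putText(frame, last_cue, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                0.6, (200, 200, 200), 1, cv2.LINE_AA)
    cv2.imshow("ClearCue simulation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```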
Accomplishments that we’re proud of
- Designing a real-time, non-instructional assistive system for live conversation
- Building a working pipeline that analyzes audio and visual input in real time
- Centering autistic user autonomy and cognitive load in every design decision
- Creating a product concept that bridges technical feasibility with ethical UX
What we learned
We learned that accessibility is not just about adding features; it's about deciding what not to show. Reducing ambiguity without introducing new pressure requires restraint, clarity, and deep respect for user agency.
We also learned how important it is to design AI systems around real-world use, not just model performance.
What’s next for ClearCue
Next steps include user testing with autistic adults, refining cue presentation based on feedback, and exploring personalization options. We’re also interested in expanding ClearCue’s accessibility framework to other neurodivergent use cases while maintaining the same core principles of autonomy, clarity, and trust.
Built With
- openai-api
- python

