Inspiration
Struggling with mental health sucks. As much as we want to pretend it doesn't affect us and that we're completely fine, the feeling of breaking inside is harder to hide than we think. There are dark days spent lying in bed, where nothing seems to go right and the deafening silence of loneliness fills every corner, slowly swallowing you whole. In those moments everything feels lost, and more than anything we just need to be heard. That's what sparked ClarityVR: the desire to create a space where people can voice their struggles without fear of judgment and know they're not alone. We wanted to use technology to bridge that silence, transforming it into connection, comfort, and understanding. This project was born out of the idea that sometimes what matters most isn't fixing everything at once, but simply having someone, or something, that listens when you can't bring yourself to reach out.
What it does
Our project creates a supportive companion in moments when people feel most isolated. Inside a calming VR environment, users can speak freely through their avatar, and the system listens—turning their voice into text, understanding the emotional weight behind it, and responding with empathetic advice that feels conversational rather than robotic. If someone just needs to vent, it listens and replies. If they’re looking for guidance, it offers supportive, therapist-style responses. And if they ever feel ready to take the next step, the system can connect them with a list of top-rated therapists in their area. At its heart, the project is about lowering the barrier to being heard. It doesn’t try to replace real human connection but instead fills the silence during those difficult moments—reminding people that their voice matters and that support is always just one conversation away.
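As a sketch of how that last step could work, here is a minimal version of a therapist-lookup endpoint in Node.js/Express. The Google Places Text Search API as the data source, the route name, and the rating cutoff are illustrative assumptions, not necessarily what we shipped:

```javascript
// Hypothetical therapist-lookup endpoint. Assumes Node 18+ (global fetch)
// and the Google Places Text Search API as the data source.
const express = require("express");
const app = express();

app.get("/therapists", async (req, res) => {
  const location = req.query.location; // e.g. "Boston, MA"
  const url =
    "https://maps.googleapis.com/maps/api/place/textsearch/json" +
    `?query=${encodeURIComponent(`therapist near ${location}`)}` +
    `&key=${process.env.GOOGLE_PLACES_API_KEY}`;

  const data = await (await fetch(url)).json();

  // Keep only highly rated results and return just the essentials.
  const topRated = (data.results || [])
    .filter((place) => place.rating >= 4.5)
    .map((place) => ({
      name: place.name,
      rating: place.rating,
      address: place.formatted_address,
    }));

  res.json(topRated);
});

app.listen(3000);
```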
How we built it
We built the system in two main parts: the backend and the VR front-end.

On the backend, we developed a Node.js/Express server that acts as the hub for all the core features. It handles speech-to-text so users can speak naturally, therapist-style advice generation using an LLM to provide empathetic and conversational responses, and text-to-speech to return those responses in a voice that feels personal. In addition, we created a separate API that surfaces highly rated local therapists so that, if users feel ready, they can take the step from digital support to professional help.

On the front-end, we built a VR environment in Unity with C#, designed to immerse users in a calming, supportive space. Here, they interact through avatars, explore the environment, and experience the conversation in a way that feels safe and human. The VR side communicates with the backend in real time, bridging spoken input and generated responses seamlessly.

Together, the two sides create an experience where technology feels less like a tool and more like a companion: a system that can listen, respond, and gently guide people toward connection and support.
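To make the backend flow concrete, here is a condensed sketch of what the core route could look like. The route name, model choices, system prompt, and environment variable names are illustrative assumptions rather than our exact code:

```javascript
// Sketch of the speech-to-text -> LLM -> text-to-speech round trip.
// Assumes Node 18+ (global fetch), the official openai SDK, and multer
// for the audio upload; names and models here are illustrative.
const express = require("express");
const multer = require("multer");
const fs = require("fs");
const OpenAI = require("openai");
const { toFile } = require("openai");

const app = express();
const upload = multer({ dest: "uploads/" });
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post("/converse", upload.single("audio"), async (req, res) => {
  // 1. Speech-to-text: transcribe the user's recording with Whisper.
  const transcription = await openai.audio.transcriptions.create({
    file: await toFile(fs.createReadStream(req.file.path), "speech.wav"),
    model: "whisper-1",
  });

  // 2. Therapist-style advice: ask the LLM for an empathetic reply.
  const chat = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content:
          "You are a warm, supportive listener. Respond with empathy, " +
          "conversationally rather than clinically.",
      },
      { role: "user", content: transcription.text },
    ],
  });
  const reply = chat.choices[0].message.content;

  // 3. Text-to-speech: voice the reply with ElevenLabs and send the audio back.
  const tts = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${process.env.ELEVENLABS_VOICE_ID}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text: reply }),
    }
  );
  res.set("Content-Type", "audio/mpeg").send(Buffer.from(await tts.arrayBuffer()));
});

app.listen(3000);
```

In a setup like this, the Unity client would POST its recorded clip to this route and play back the returned audio inside the headset.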
Challenges we ran into
Most of our issues were on the front end, especially since this was our front-end team's first time working with Unity. We started off struggling just to get the environment running: it was buggy and often wouldn't build. We also ran into persistent problems with the microphone, which would sometimes break the entire application. Beyond these bigger hurdles, plenty of small barriers along the way made the whole experience tricky.
Accomplishments that we're proud of
- Angela: Proud of stepping into backend development and learning how to piece together different technologies. She gained experience with both ElevenLabs and OpenAI APIs and is especially proud that she was able to integrate them successfully to power the core features of the app.
- Jonathan: Proud of taking ownership of designing and building custom API calls that tied the project together. He is also proud that he could support his teammates whenever they ran into difficult bugs, helping solve problems quickly and making sure the team stayed on track.
- Kashish: Proud of her work on the Unity front end, where she built models and refined the user experience inside the VR environment. She is especially proud of ensuring that the visual and interactive pieces worked smoothly and looked polished within the headset.
- Daksh: Proud of tackling some of the trickiest Unity challenges, especially microphone integration and connecting the app to the backend. He is also proud of overcoming constant environment issues during development, which allowed the whole VR experience to function as intended.
What we learned
- Angela: Learned how to work with backend technologies and gained confidence in integrating third-party APIs like ElevenLabs and OpenAI, seeing how different services can be combined to create a working product.
- Jonathan: Learned more about designing API calls and handling server logic, as well as the importance of collaboration when stepping in to debug issues and help teammates solve problems efficiently.
- Kashish: Learned a lot about Unity development, especially how to bring models into a VR environment and make them functional, while also gaining experience in refining the user interface for smoother interactions.
- Daksh: Learned how to connect Unity to a backend service through microphone integration, and also gained a deeper understanding of troubleshooting and problem-solving in a challenging VR development environment.
Built With
- c#
- elevenlabs
- express.js
- node.js
- openai
- unity
