Our Program: SageVR

Envisioning yourself in a positive future can be difficult, especially in the depths of despair. But what if, in your darkest hours, you could not only envision a positive future self, but actually look at that future you and talk with them? Powered by cutting-edge conversational AI and hosted in an immersive VR setting, SageVR lets you converse with an aged-up version of yourself in a tranquil environment, absorb your own wisdom, and lose yourself in a brighter future; through all of that, you come to feel that everything will be alright. SageVR is a proof of concept for a technology poised to transform the therapeutic landscape, and a telescope into the vast potential of VR in mindfulness and therapy.

Inspiration

“Individuals who cannot imagine themselves behaving quite differently than they are currently behaving are likely to become trapped in their current behavioral course. Unable to cognitively counter their worries over being unemployed, alone, depressed, or engaged in crime in a believable, self-relevant way, these adolescents may be less motivated to avoid delinquent activity and to take the directive action necessary to prevent their feared selves from being realized” — from “Possible Selves and Delinquency” by Daphna Oyserman and Hazel Rose Markus, published in the Journal of Personality and Social Psychology (1990).

The idea that cultivating a positive outlook on our future selves can have a positive impact on our present well-being inspired us to develop SageVR, a unique, multimodal approach to improving well-being by allowing individuals to communicate with their “future selves.” These virtual avatars offer positive reinforcements and encouraging messages that help individuals regain clarity and insight into their long-term aspirations.

See “References” for a full list of our sources.

How we built it

Model: we created our model of the “future you” in five steps:

  1. We took photos of the user’s face from multiple angles.
  2. We aged up each photo of the user’s face using FaceApp.
  3. We constructed a 3D model of the user’s aged-up face using PolyCam.
  4. We imported the face model into Blender and added it to the model of a body.
  5. We imported the full-body model into Unity, where we integrated it into our environment.

Scenery: we created the forest scene by importing various assets in Unity and deploying the scene to the headset.

ChatGPT: we constructed our ChatGPT prompts to optimize fidelity in these three areas:

  1. Adherence to Character: ensure that the AI speaks as if it is an older version of the user rather than as if it is merely a confidant or therapist.
  2. Naturalness of Speech: ensure the AI mimics natural speech as much as technological limitations allow.
  3. Conversationality: ensure a degree of continuity across exchanges between the user and the AI.
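As a concrete illustration of these three goals, a system prompt can be assembled from a persona. The function name and persona fields below are our own placeholders for this sketch, not the exact prompt we used:

```python
def build_system_prompt(name: str, age_now: int, years_ahead: int) -> str:
    """Assemble a system prompt that keeps the AI in character as the
    user's future self, sounding natural, and aware of the conversation."""
    future_age = age_now + years_ahead
    return (
        f"You are {name} at age {future_age}, speaking to your "
        f"{age_now}-year-old self. "                                  # adherence to character
        "Speak in the first person, never as a therapist or outside observer. "
        "Use warm, natural, conversational language with short sentences. "  # naturalness of speech
        "Remember and refer back to earlier parts of the conversation."      # conversationality
    )

prompt = build_system_prompt("Alex", 20, 40)
```

Passing this string as the `system` message anchors every subsequent reply in the "future self" persona.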

ChatGPT API: We used the GPT-3.5 Turbo model and engineered our prompts so that it converses with the user as the older version of themselves.
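A minimal sketch of such a call, assuming the `openai` Python package's v1 client; `ask_future_self` and the pluggable `client` parameter are illustrative names for this sketch, not part of our codebase:

```python
def ask_future_self(history, user_text, client=None, model="gpt-3.5-turbo"):
    """Append the user's message, call the chat completion endpoint,
    and record the assistant's reply so later turns keep context."""
    if client is None:
        # Requires the openai package and an OPENAI_API_KEY in the environment.
        from openai import OpenAI
        client = OpenAI()
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

Keeping the growing `history` list and resending it each turn is what gives the model its sense of conversational continuity.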

Speech-To-Text: For speech-to-text we used Meta’s Voice SDK. While a button is pressed and held, the microphone records the user’s speech; when the OnFullTranscription event fires, the transcribed string is sent to the ChatGPT API.
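The hand-off can be sketched in Python for clarity (the real handler is a C# callback registered with the Voice SDK; `send_to_chatgpt` here is a stand-in for our API call):

```python
def make_transcription_handler(send_to_chatgpt):
    """Return a callback to run when a full transcription arrives,
    mirroring the Voice SDK's OnFullTranscription event."""
    def on_full_transcription(transcript: str):
        transcript = transcript.strip()
        if transcript:  # ignore empty or silent recordings
            return send_to_chatgpt(transcript)
    return on_full_transcription
```

The event-driven shape matters: the API call happens only once a complete utterance has been transcribed, not on every audio frame.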

Integration: Integration involved combining the Unity environment, the 3D avatar, and the speech-to-text functionality with the ChatGPT API.

Challenges we ran into

  1. ChatGPT:

Our utilization of ChatGPT is constrained by the current limitations of natural language processing. ChatGPT’s responses are varied and natural-sounding, yet still formulaic to an extent. There are edge cases where the model goes out of character, and because of token limitations, conversational continuity is not guaranteed.
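One common mitigation for the token limitation, sketched here as a possible approach rather than what we shipped, is a sliding window over the message history that drops the oldest exchanges while always preserving the system prompt:

```python
def trim_history(history, max_messages=20):
    """Keep the system prompt plus the most recent messages so each request
    stays within the model's context window. A fuller implementation would
    count actual tokens (e.g. with tiktoken); message count is a rough proxy."""
    system, rest = history[:1], history[1:]
    return system + rest[-(max_messages - 1):]
```

This trades long-range memory for guaranteed request validity: the avatar forgets the oldest turns but never loses its persona.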

  2. Collaboration: Unity is not conducive to the collaboration tools we are familiar with, such as Git. In response to this limitation, we split the group into distinct teams (scenery, 3D modeling, speech-to-text, ChatGPT, and text-to-speech) and implemented each component incrementally. In the end, integrating all these pieces proved challenging.

  3. Text-to-Speech:

We planned to integrate text-to-speech into the program so that the user could hear the avatar speaking, but our unfamiliarity with the Meta Quest hindered our implementation of this idea. In the face of this setback, we decided to sideline this functionality and display the text instead. We believe this is an acceptable alternative that more experienced developers could remedy.

  4. Using AI to age up face models:

We originally planned to use Stable Diffusion (SD) to age up each image of the face. SD can take in an image and modify it according to a text prompt, which would have provided greater customization potential for the user’s “future self.” The technology exists to fine-tune SD models for this purpose; however, none of our team members were proficient enough with AI to make use of it. Given that FaceApp was a viable alternative for aging up the user’s photos, we decided to sideline the SD idea for the hackathon and revisit it, if necessary, in the future.

Accomplishments that we're proud of

Integrating each aspect of our project proved the most difficult challenge, and pulling it off is what we are proudest of: we successfully connected all three components — the scene, speech-to-text, and ChatGPT — into a single working experience.

What we learned

We learned firsthand the current limits of natural language processing: ChatGPT’s responses are varied and natural-sounding, yet still formulaic to an extent, and keeping it in character across a conversation takes deliberate prompt engineering.

What's next for SageVR

We suggest several approaches to improving SageVR:

  1. Fine-tuning ChatGPT, or even creating our own GPT model, to interact with the user more realistically and personally
  2. Creating more customization options for the user, such as different appearances and personalities for the “future self” and different environments in which to interact with them
  3. Improving distributability and ease of access for our product
  4. Adding real-time audio output and connecting the 3D model to it via technologies like NVIDIA Omniverse

References

Oyserman, Daphna, and Hazel Rose Markus. “Possible Selves and Delinquency.” Journal of Personality and Social Psychology, vol. 59, no. 1, 1990, pp. 112–25, https://doi.org/10.1037//0022-3514.59.1.112. Accessed 9 Apr. 2023.

van Gelder, Jean-Louis, et al. “Interaction with the Future Self in Virtual Reality Reduces Self-Defeating Behavior in a Sample of Convicted Offenders.” Scientific Reports, vol. 12, no. 1, Feb. 2022, https://doi.org/10.1038/s41598-022-06305-5. Accessed 18 Apr. 2022.

Disclaimer

The content provided by SageVR is for informational purposes only and is not intended to serve as professional advice. SageVR makes no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability with respect to the content provided.

Any reliance you place on such information is therefore strictly at your own risk. In no event will SageVR be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this website and its content.

SageVR reserves the right to modify the content at any time without notice. The use of any information or materials on this website is entirely at your own risk, for which SageVR shall not be liable. It shall be your own responsibility to ensure that any products, services, or information available through this service meet your specific requirements.

By using this service and its contents, you acknowledge that you have read this liability disclaimer and agree to its terms and conditions.
