Inspiration

A Dreamer’s Decision was inspired by the liminal mental state that exists between dreaming and waking, where thoughts feel real and fear feels physical. Many people experience this state but rarely have the language or space to explore it safely. We wanted to transform that moment into an interactive story that reframes fear as something responsive rather than something that simply happens to you. The project draws from personal experiences with vivid dreams and the idea that awareness can change how the mind reacts under stress.

What it does

A Dreamer’s Decision is a narrative-driven VR experience that places the player inside a dreamlike environment that reacts to their attention and presence. It begins with a stereoscopic, chroma-keyed video featuring a monologue on dreams, shot using the Canon RF Dual Fisheye lens and a green screen. In Unreal, we brought this video into a material via a Media Texture of the source footage (which arrives as a side-by-side video), adjusted the UVs, and fed each half into the corresponding eye to achieve the desired 3D effect.
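For anyone curious what that per-eye UV adjustment looks like, here is a minimal sketch of the math in plain C++ (in the actual project this lives in material nodes; the struct and function names here are purely illustrative, and the eye index would come from the material's stereo input):

```cpp
#include <cstdio>

// Minimal sketch of the per-eye UV remap we built in the material.
// The media texture is side-by-side: the left-eye frame lives in
// U in [0, 0.5), the right-eye frame in U in [0.5, 1.0).
struct FUV { float U, V; };

FUV RemapSideBySideUV(FUV MeshUV, int EyeIndex /* 0 = left, 1 = right */)
{
    // Squeeze the mesh's 0..1 U range into the correct half of the texture.
    return FUV{ MeshUV.U * 0.5f + (EyeIndex == 1 ? 0.5f : 0.0f), MeshUV.V };
}

int main()
{
    const FUV Center{0.5f, 0.5f};
    const FUV L = RemapSideBySideUV(Center, 0);
    const FUV R = RemapSideBySideUV(Center, 1);
    std::printf("left eye -> (%.2f, %.2f), right eye -> (%.2f, %.2f)\n",
                L.U, L.V, R.U, R.V);
    return 0;
}
```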

The player is then placed into a full 3D environment of a dark bedroom. We used the Meta XR plugin's hand-tracking capabilities to attach a spotlight to the user's left hand, allowing them to see around the dark room. The user eventually sees a ghostly figure emerge from the darkness. This figure was inspired by sleep paralysis and the dark figure that people often report seeing during the phenomenon. As the figure approaches, it is drawn toward the user's camera, so even if they move, it always seems to be inching closer and closer. Eventually, the alarm clock goes off.
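The "always inching closer" behavior boils down to re-aiming the figure at the headset every frame. A hedged sketch of how that could look as an Unreal actor (class and property names here are illustrative, not our exact project code):

```cpp
// GhostFigure.h -- illustrative sketch of an apparition that chases the HMD.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Camera/PlayerCameraManager.h"
#include "Kismet/GameplayStatics.h"
#include "GhostFigure.generated.h"

UCLASS()
class AGhostFigure : public AActor
{
    GENERATED_BODY()

public:
    AGhostFigure() { PrimaryActorTick.bCanEverTick = true; }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // The player camera manager tracks the HMD pose on Quest, so the
        // figure re-targets the headset each frame: wherever the player
        // moves, it keeps creeping toward them.
        if (APlayerCameraManager* Cam =
                UGameplayStatics::GetPlayerCameraManager(this, 0))
        {
            const FVector Target = Cam->GetCameraLocation();
            const FVector Step = FMath::VInterpConstantTo(
                GetActorLocation(), Target, DeltaSeconds, ApproachSpeed);
            SetActorLocation(Step);

            // Keep the figure facing the player as it advances.
            SetActorRotation((Target - Step).Rotation());
        }
    }

    // Units per second; slow enough to feel like creeping.
    UPROPERTY(EditAnywhere)
    float ApproachSpeed = 25.f;
};
```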

At this point you are brought to the final section, which features the ghostly apparition in a lighter color, and you are able to speak with it about dreams. For this section we place the user inside a 360° video we captured using the Insta360 X5 camera. We learned the process of stitching the footage in Insta360 Studio and edited it into one coherent clip using DaVinci Resolve.

The conversational AI was made possible by the Convai plugin, which we used to write a story for the ghost; the LLM we chose as the backend was a Gemini Flash model. The ghost was given a backstory that made it into a sort of ethereal therapist. It begins by asking a random question about dreams, and the user then engages in a natural conversation with the apparition. This can go on as long as the user likes, so they can really get into a deep, existential conversation about dreams and perhaps learn something about themselves.
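To be clear, Convai characters are configured through the plugin and its dashboard (backstory, voice, backend model) rather than hand-written code; the standalone sketch below only mirrors the shape of what we gave the ghost, with made-up example text, an "ethereal therapist" backstory plus a pool of opening questions with one picked at random:

```cpp
#include <array>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>

// Illustrative only: this models the character design, not the Convai API.
struct GhostCharacter
{
    std::string Backstory;
    std::array<std::string, 3> OpeningQuestions;

    // The ghost opens each session with a random question about dreams.
    const std::string& PickOpeningQuestion() const
    {
        return OpeningQuestions[std::rand() % OpeningQuestions.size()];
    }
};

int main()
{
    std::srand(static_cast<unsigned>(std::time(nullptr)));

    const GhostCharacter Ghost{
        "You are a gentle apparition who guides dreamers through their fears.",
        {"What is the most vivid dream you remember?",
         "Do your dreams ever repeat themselves?",
         "What does waking up feel like to you?"}};

    std::cout << Ghost.PickOpeningQuestion() << "\n";
    return 0;
}
```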

How we built it

We used Unreal Engine 5.5 to create the main experience on Quest 3 with the Meta XR plugin, which let us use passthrough and hand tracking in our experience. We also used the Styly platform to create an ad hoc experience, sort of an appetizer, built from the 3D assets we created for the bedroom segment. To capture the footage we used the Canon R6 with the RF Dual Fisheye stereo lens, along with the Insta360 X5. We used DaVinci Resolve to edit the clips together and even uploaded the output video to the DeoVR platform. Finally, we used Convai to create the conversational, Gemini-powered AI ghost.

Specifically, the Styly platform pleasantly surprised our team. It proved highly intuitive and convenient for creating web-based AR experiences; we were able to learn it quickly during the short hackathon, and it let people without dedicated VR hardware still explore the core atmosphere and narrative ideas in AR.

We also created custom music for each section using FL Studio (Fruity Loops).

Challenges we ran into

We were remotely connecting to a PC in NY that had Unreal Engine set up, using a MacBook and Parsec to build the app on that machine. We knew this would be a challenge because each on-headset test had to be packaged, uploaded to the cloud, downloaded, and loaded onto our headset. This was an arduous and time-consuming process, especially for the LLM integration, since we needed to verify the speech-recording functions on the headset. Eventually Chris returned to NY early (Saturday evening) to avoid the storm and got it working overnight before the 11am deadline, since being next to the computer let him test properly without those large time sinks.

Another challenge we ran into was creating the material in Unreal Engine that takes the side-by-side green-screen footage from the Canon and turns it into a chroma-keyed, stereoscopic element we could place in 3D space. The mentors Woody and Alex were super helpful with these problems, and with their help we got it working.
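The keying step itself comes down to dropping a pixel's opacity as its color approaches the key green, which is the same idea behind Unreal's built-in Chroma Key Alpha material function. A hedged sketch of that logic in plain C++, with illustrative threshold names:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct FLinearColorRGB { float R, G, B; };

// Alpha is 0 (transparent) when the pixel is within Threshold of the key
// color, ramping to 1 (opaque) over an additional Softness band.
float ChromaKeyAlpha(const FLinearColorRGB& Pixel,
                     const FLinearColorRGB& KeyGreen,
                     float Threshold, float Softness)
{
    const float DR = Pixel.R - KeyGreen.R;
    const float DG = Pixel.G - KeyGreen.G;
    const float DB = Pixel.B - KeyGreen.B;
    const float Dist = std::sqrt(DR * DR + DG * DG + DB * DB);

    return std::clamp((Dist - Threshold) / Softness, 0.0f, 1.0f);
}

int main()
{
    const FLinearColorRGB Key{0.0f, 0.8f, 0.0f};
    std::printf("green-screen pixel alpha: %.2f\n",
                ChromaKeyAlpha({0.05f, 0.78f, 0.04f}, Key, 0.1f, 0.2f));
    std::printf("skin-tone pixel alpha:    %.2f\n",
                ChromaKeyAlpha({0.90f, 0.60f, 0.50f}, Key, 0.1f, 0.2f));
    return 0;
}
```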

Accomplishments that we’re proud of

We are proud of creating an experience that communicates a complete narrative without explicit instructions. The project successfully uses interaction and atmosphere to guide the player through a story that feels personal and immersive. Achieving a sense of progression purely through environmental response was a major milestone and validated our storytelling approach.

What we learned

Never use Parsec for a hackathon again. In all seriousness, though, we learned about the stitching process for the Insta360 camera, we learned about UV editing and chroma keying in Unreal Engine, and we learned about the Styly platform.

Built With

Unreal Engine 5.5, Meta XR plugin, Convai (Gemini Flash), STYLY, Insta360 Studio, DaVinci Resolve, FL Studio, Parsec, DeoVR
