Inspiration

We wanted to create a collaborative tool that lets multiple people take part in the creative process of design. We wanted to make communication a visual experience, not just a written one.

What it does

Our platform allows screenwriters to generate scenes in mixed reality and convert them into a script. Changes to the script are reflected back into a replayable MR experience.

How we built it

We used Meshy.ai to generate custom 3D models from voice commands, and LLMs to convert scenes into scripts and vice versa. Our platform runs on Unity and the Meta Quest.

Challenges we ran into

Describing a scene as a set of animations or data points took real imagination: we had to settle on a representation that both the LLM and the MR runtime could interpret.
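For illustration only, one way such a scene representation might look. The class and field names here are hypothetical, a minimal Python sketch rather than the project's actual Unity code:

```python
from dataclasses import dataclass, field


@dataclass
class Keyframe:
    """One sampled data point of an object's motion."""
    t: float                          # seconds since scene start
    pos: tuple[float, float, float]   # world-space position


@dataclass
class SceneObject:
    """A tracked MR object and its movement history."""
    name: str
    keyframes: list[Keyframe] = field(default_factory=list)


def to_scene_description(objects: list[SceneObject]) -> str:
    """Serialize tracked objects into prose an LLM could turn into script text."""
    parts = []
    for obj in objects:
        path = " -> ".join(str(k.pos) for k in obj.keyframes)
        parts.append(f"{obj.name} moves along {path}")
    return "; ".join(parts)
```

The same structure can be parsed back out of an edited script to replay the scene in the headset, which is what makes the round trip possible.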

Accomplishments that we're proud of

We got the end-to-end process working: converting movement into scripts, modifying those scripts, and loading the changes back into the headset in real time.

What we learned

Using REST APIs in Unity for multiple services allowed us to create experiences outside the Unity editor. For example, we combined Hugging Face's speech-to-text models with a request to Meshy to create custom 3D models, and we converted the history of moving objects into text descriptions via a REST call to ChatGPT.
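Roughly, the chain looks like this. The endpoint URLs, payload fields, and helper names below are our own illustration (the project itself made these calls from Unity), so treat this as a sketch of the pattern, not the real API schemas:

```python
import json
import urllib.request

# Hypothetical endpoints -- the real Meshy / OpenAI URLs and schemas may differ.
MESHY_URL = "https://api.meshy.ai/v2/text-to-3d"
CHAT_URL = "https://api.openai.com/v1/chat/completions"


def build_meshy_request(prompt: str) -> dict:
    """Payload asking Meshy to generate a 3D model from transcribed speech."""
    return {"mode": "preview", "prompt": prompt}


def movement_history_to_prompt(history: list[dict]) -> str:
    """Flatten a log of (object, position, time) samples into text for the LLM."""
    lines = [f"t={e['t']:.1f}s: {e['object']} at {e['pos']}" for e in history]
    return ("Convert this object movement log into a screenplay scene:\n"
            + "\n".join(lines))


def post_json(url: str, payload: dict, api_key: str) -> dict:
    """Minimal REST POST helper (the Unity analogue would use UnityWebRequest)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Speech-to-text output becomes the Meshy prompt.
    print(build_meshy_request("a red vintage telephone"))
```

Each service only needs JSON over HTTP, which is why the pipeline could live entirely outside the editor.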

What's next for snAIder

Multiplayer for collaboration. Implementing more complicated scenes involving many more objects. Expanding the range of objects that can be created and reducing latency. Training a custom LLM to generate more precise descriptions and movements.

Built With

Unity, Meta Quest, Meshy.ai, Hugging Face, ChatGPT
