Inspiration

When the members of our group started ideating around the extremely powerful new MR abilities of the Meta Quest platform, we saw an opportunity to create a learning tool that is truly open and unlimited in its potential: generating assets, information and interactions live, using Passthrough, Spatial Anchors, the Interaction SDK, the Voice SDK, and AI integration with ChatGPT and DALL-E. It builds on a well-known memory technique that creates new pathways in the user's brain and gives the memory something to 'hang on to'. We've all seen variations of spatial memory apps, but for the first time we can create a truly living tool, unlimited in its learning potential.

What it does

Our brain remembers images and locations using spatial memory, which creates vivid, map-like mental images. In contrast, remembering numbers or facts relies on declarative memory, which is more abstract and less tied to visuals. By building associations between these two types of memory, such as using a familiar location to remember a list of items, we can make abstract information easier to recall by anchoring it to vivid spatial cues.

Mnemo relies on a connection to AI, which generates the key data and images the user learns from. The interface is conversational: using the Voice SDK, the user tells the system what information they want to remember. The system then generates data and images that the user places in their environment using Passthrough AR and spatial anchors. As the user places each image, the data connected to it is read aloud. When the user has placed all images and heard all texts, they retrace the exact path they took the first time; as they approach each image, its text is read to them again. This sequence creates a new mnemonic pathway in the user's brain, giving their memories something to hang on to.
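The generation step can be sketched roughly as follows. This is an illustrative Python sketch, not the actual implementation (the app itself is built in Unity); the function names and the JSON schema (`fact`, `image_prompt`) are assumptions chosen for the example:

```python
import json

def build_memory_prompt(topic: str, item_count: int = 5) -> str:
    """Ask the LLM for facts plus one vivid image prompt per fact,
    in strict JSON so the response can be parsed reliably."""
    return (
        f"List {item_count} key facts about '{topic}'. "
        "Respond ONLY with a JSON array of objects, each with "
        "'fact' (one sentence, read aloud to the user) and "
        "'image_prompt' (a vivid, concrete scene for an image model "
        "that acts as a visual memory trigger for the fact)."
    )

def parse_memory_items(raw_response: str) -> list:
    """Extract (fact, image_prompt) pairs from the LLM response,
    tolerating surrounding prose by slicing out the JSON array."""
    start, end = raw_response.find("["), raw_response.rfind("]") + 1
    items = json.loads(raw_response[start:end])
    return [i for i in items if "fact" in i and "image_prompt" in i]

# A canned response, shaped like what the system might receive:
sample = """Here you go:
[{"fact": "The mitochondrion produces ATP.",
  "image_prompt": "A glowing power plant shaped like a bean"}]"""
items = parse_memory_items(sample)
```

Each `image_prompt` would then be sent to the image model, and each `fact` attached to the resulting image for text-to-speech playback at its spatial anchor.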

How we built it

Our team consists of two programmers, Saad and Alex, and two designers, Klas and Miko. We began by sitting down and challenging the concept from the standpoints of user experience, technical feasibility and actual connection to the memory technique. We found that, using the tools available to us, we could not only build a working prototype, but one that would actually work with any data. The programming team set up the Unity project, integrated the Meta toolset and connected the necessary APIs, such as ChatGPT and DALL-E, while the design team worked on user experience design, UI design and AI prompt design, at the same time preparing the presentation of the concept.

Challenges we ran into

Our biggest challenge was parsing the response from ChatGPT: making it understand the knowledge and translate it into an image generation prompt that would include the visual triggers needed in the image. A lot of the toolset we were given to build this is brand new, and some UX features we would have liked to include proved impossible to implement, even though we had access to the Meta mentors during development. Overall, though, we were all pleasantly surprised by how well these powerful new technologies worked.
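A common way to tame unreliable LLM output is to validate the parse and re-prompt on failure. The sketch below (Python for brevity; the actual app is Unity-based, and all names here are illustrative assumptions) shows one such parse-and-retry loop:

```python
import json

def try_parse_items(raw: str):
    """Extract a JSON array from model output; return None on any
    failure so the caller can re-prompt instead of crashing."""
    start, end = raw.find("["), raw.rfind("]") + 1
    if start == -1 or end == 0:
        return None
    try:
        items = json.loads(raw[start:end])
    except json.JSONDecodeError:
        return None
    if all("fact" in i and "image_prompt" in i for i in items):
        return items
    return None

def generate_with_retry(ask_model, prompt: str, max_tries: int = 3):
    """Re-ask the model, appending a format reminder each time,
    until the output parses or the retry budget runs out."""
    for _ in range(max_tries):
        items = try_parse_items(ask_model(prompt))
        if items is not None:
            return items
        prompt += "\nRespond ONLY with the JSON array, no extra text."
    raise ValueError("Model never produced parseable output")
```

Here `ask_model` stands in for whatever function sends the prompt to the chat API; the key idea is simply that malformed responses trigger a stricter re-prompt rather than an error the user sees.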

Accomplishments that we're proud of

Mnemo is, to our knowledge, the first application of its kind. While there have been countless attempts at creating a learning tool based on the method of loci, they have been limited in the information they can use and the topics they can cover. Using generative AI and LLMs, we have created a truly limitless tool, and we did it in 48 hours. That a team of four individuals from different parts of the world, who had never worked together before, could start developing like a well-oiled machine, listening to each other and keeping the common goal in view, feels like a fantastic achievement.

What we learned

The possibility of using AI both as part of the user experience and to generate the data is immensely powerful. We set out to build a prototype, but with AI we created a universal learning tool using real data and an infinite library of engaging content. The way the Voice SDK complements UX design in VR also works extremely well: being able to communicate directly with the application opens up a whole new range of possibilities. That said, the ongoing challenge is learning to speak to AI. A badly formulated prompt can give a totally different result than intended, so a lot of trial and error, as well as a deep knowledge of each AI system employed, is key. These are early days.

What's next for Mnemo

Mnemo is already a powerful universal tool. The next step is integrating it with the way students work today, letting them connect Mnemo to their lesson notes through Google Docs, or even use the headset camera to scan texts and documents directly. We also want to save learned lessons on the headset, including the anchor points and the information retrieved from the AI. We are also looking closely at the next generation of lightweight wearables, such as smart glasses, which would be a perfect match for this technology.

Team Best Laid Plans was created for this event, and none of the team members knew each other beforehand. Nevertheless, this hackathon has forged a strong team that feels like old friends who have worked together all their lives. Mnemo is certainly only the first of our best laid plans.
