Inspiration
Nearly two years ago, I started a chess program at a retirement home to share my passion for the game. One resident, Linda McGregor, taught me something I'll never forget, and it wasn't about chess but about life: her memories are genuinely her most prized possessions. I realized how fragile that part of her identity is, especially as her husband suffers from memory loss. People losing their memories don't just lose names and dates… they lose themselves. So our team built Memory Lane: a way for everyday people to easily preserve the stories that shape who we are.
What it does
Memory Lane is a web app that helps people preserve, relive, and share their most cherished memories as explorable 3D spaces. Users upload videos of meaningful objects and spaces (like a childhood bedroom or a grandparent's kitchen), and we use Gaussian splatting to reconstruct immersive environments. Users can also share stories in natural language; a Gemini-powered AI agent understands them and can resurface them as reminders later. Family and friends can be invited to comment, adding collective memories to each scene.
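One way the story-to-reminder step could look, sketched with the google-generativeai SDK; the prompt wording, JSON schema, and model name below are our own illustrative choices, not the exact ones used in the app:

```python
# Hypothetical sketch: ask Gemini whether a shared story implies a future
# reminder. Prompt text, response schema, and model name are illustrative.
import json

REMINDER_PROMPT = (
    "Extract any future reminder from this memory the user shared. "
    "Reply as JSON with keys 'remind' (bool) and 'summary' (string).\n\n"
    "Story: {story}"
)

def build_prompt(story: str) -> str:
    """Fill the reminder-extraction prompt with the user's story."""
    return REMINDER_PROMPT.format(story=story)

def extract_reminder(story: str, api_key: str) -> dict:
    """Send the story to Gemini and parse its JSON verdict (network call)."""
    import google.generativeai as genai  # deferred: needs an API key at runtime

    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(build_prompt(story))
    return json.loads(response.text)
```

The agent can then store any `summary` it extracts and surface it to the user on a later visit.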
How we built it and Challenges we faced
Our initial plan was to train Gaussian splats on a home server using our own pipeline. However, on a consumer-grade GPU it would take 30+ hours to process a single 2–3 minute video, which was highly impractical within a hackathon timeframe and budget. To overcome this, we pivoted to Vid2Scene, which rendered faster but came with trade-offs in quality and customization. Unfortunately, Vid2Scene doesn't offer a public API, so we built a custom wrapper service that exposes one endpoint: when a user submits an .mp4 video, the service launches an automated browser session (via Selenium), uploads the file, downloads the resulting .ply point cloud, and returns it to the user. This hacky solution let us automate a tool that was never designed for API access.

Additionally, none of us had prior experience with Twelve Labs, so we had to onboard quickly and experiment with the software under time pressure, attending their workshops and learning from mentors. The gsplat.js package was also incredibly painful to work with: we wanted an AR experience, but ran into many challenges accurately tracking the phone's position and integrating that tracking with gsplat's custom rendering components.
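The wrapper idea above can be sketched as a small Flask service driving a headless Chrome session. The URL and CSS selectors below are placeholders rather than Vid2Scene's real markup, and the Selenium and Flask imports are deferred so the file reads as documentation too:

```python
# Sketch of the wrapper service: an HTTP endpoint that drives a headless
# browser through a Vid2Scene-style web UI. URL and selectors are placeholders.
import tempfile
import time
from pathlib import Path

VID2SCENE_URL = "https://vid2scene.example/upload"  # placeholder, not the real URL

def find_ply(download_dir: str):
    """Return the first .ply file in download_dir, or None if none exists yet."""
    for path in sorted(Path(download_dir).glob("*.ply")):
        return path
    return None

def convert_video(mp4_path: str, download_dir: str, timeout_s: int = 1800):
    """Upload an .mp4 through the site's form and wait for the .ply download."""
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    options.add_experimental_option(
        "prefs", {"download.default_directory": download_dir}
    )
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(VID2SCENE_URL)
        # <input type=file> elements accept a local path via send_keys.
        driver.find_element(By.CSS_SELECTOR, "input[type=file]").send_keys(mp4_path)
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        # Processing takes minutes; poll the download directory for the result.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if (ply := find_ply(download_dir)) is not None:
                return ply
            time.sleep(5)
        raise TimeoutError("no .ply appeared before the timeout")
    finally:
        driver.quit()

def create_app():
    """Flask app exposing the single conversion endpoint."""
    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.post("/convert")
    def convert():
        with tempfile.TemporaryDirectory() as tmp:
            video = Path(tmp) / "input.mp4"
            request.files["video"].save(video)
            ply = convert_video(str(video), tmp)
            return send_file(ply, as_attachment=True, download_name=ply.name)

    return app
```

Polling the download directory (rather than scraping the page for a completion banner) keeps the automation robust to small UI changes, at the cost of a fixed poll interval.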
Accomplishments that we're proud of
None of us had met in person before this hackathon. We came together from different backgrounds, met just days before on Discord, and quickly aligned on our strengths, weaknesses, and shared vision. As the late nights passed, we maintained a (relatively) positive spirit and real teamwork, something we’ll carry beyond the project.
What we learned
This weekend was full of firsts for our team. We dove headfirst into WebXR, Gaussian splatting, and Twelve Labs, all of which were completely new to us and came with steep learning curves.
What's next for Memory Lane
We see Memory Lane as the beginning of a much bigger vision. We want to make the experience even more immersive by adding VR support, so users can truly step back into the spaces that matter most. We also plan to transition to Luma AI (or similar APIs), which we initially avoided due to its high cost, for faster, higher-quality photorealistic scene generation, removing our dependency on tools like Vid2Scene and giving us more control over the rendering pipeline. We would also love to let users click on individual objects in a room to reveal stories, audio clips, or descriptions tied to that specific item. For older users, we want to add a calm, simplified walkthrough mode with voice narration and minimal required interaction, and to refine the UI to make it more visually appealing.
Built With
- css
- flask
- gemini
- javascript
- next.js
- python
- react
- tailwind
- three.js
- twelvelabs
- typescript
- webxr