Inspiration
When people are asked who they are, they often reach for a name, a title, a label. But identity isn’t made of facts; it’s the colors you gravitate toward, the corners you return to, the objects you keep, the world you build around you.
We built Déjà View around that idea: if your room is part of your identity, then the things you discover online shouldn’t stay trapped in a “saved” folder. They should step into your space, be felt, noticed, and chosen.
What it does
Déjà View turns online inspiration into a living virtual room.
Scan your space: Record a 45–90s video (or LiDAR scan) → we generate a 3D room.
Collect inspiration: Save Pinterest posts to a folder → Déjà View pulls them in.
Bring it into your room: We identify the item, generate a realistic 2D render, convert it into a 3D object, and place it naturally in your space.
Decide with context: Tap the object to favourite, add to cart, or dislike.
Shop instantly: We generate tags and surface real matching products on Shopify.
Resonance Test: Items drift daily for 3 days; if you never interact, they disappear.
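For the curious, here is a minimal sketch of how that decay rule can be expressed. The interface and field names are illustrative, not our actual schema:

```typescript
// Sketch of the Resonance Test decay rule (names are illustrative).
interface PlacedItem {
  id: string;
  placedAt: Date;         // when the object entered the room
  lastInteraction?: Date; // favourite / cart / tap — undefined if never touched
}

const RESONANCE_WINDOW_DAYS = 3;

/** How far an item has drifted (0 = anchored, 1 = about to vanish). */
function driftFactor(item: PlacedItem, now: Date = new Date()): number {
  if (item.lastInteraction) return 0; // any interaction anchors the object
  const ageDays =
    (now.getTime() - item.placedAt.getTime()) / (1000 * 60 * 60 * 24);
  return Math.min(ageDays / RESONANCE_WINDOW_DAYS, 1);
}

/** Items that pass the 3-day window untouched are removed from the room. */
function sweepRoom(items: PlacedItem[], now: Date = new Date()): PlacedItem[] {
  return items.filter((item) => driftFactor(item, now) < 1);
}
```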
How we built it
We built a multi-step pipeline that connects discovery → creation → placement → purchase:
Next.js + TypeScript for the web app
Tailwind + custom tokens + Framer + GSAP for a clean, animated UX
Three.js for the 3D room viewer and object rendering
Clerk for authentication
MongoDB Atlas for app state + metadata
Cloudflare R2 for storing generated assets (2D images + 3D models)
Pinterest API for ingestion → Gemini API to identify the object + generate tags/description
Nano Banana Pro to generate a realistic 2D product render
Trellis to convert the 2D render into a 3D object
Gemini again to infer a natural placement in the room
Shopify Storefront API to display similar real products and enable checkout
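To make the handoffs concrete, here is a hedged sketch of the chain in TypeScript. The step functions are placeholder stubs standing in for the real API clients (each wraps an HTTP call in the app), and the names are ours for illustration:

```typescript
interface Pin { imageUrl: string }
interface ItemInfo { name: string; tags: string[]; description: string }
interface Placement { x: number; y: number; z: number; rotationY: number }

// Stubs standing in for the real clients.
const identifyItem = async (_pin: Pin): Promise<ItemInfo> =>
  ({ name: "", tags: [], description: "" });                        // Gemini
const renderProduct = async (_info: ItemInfo): Promise<string> => ""; // Nano Banana Pro → 2D render URL
const liftTo3D = async (_imageUrl: string): Promise<string> => "";    // Trellis → 3D asset URL
const inferPlacement = async (_roomId: string, _info: ItemInfo): Promise<Placement> =>
  ({ x: 0, y: 0, z: 0, rotationY: 0 });                             // Gemini again

// One pin's journey: identify → render → lift to 3D → place.
async function ingestPin(roomId: string, pin: Pin) {
  const info = await identifyItem(pin);
  const render2d = await renderProduct(info);
  const model3d = await liftTo3D(render2d);
  const placement = await inferPlacement(roomId, info);
  return { info, model3d, placement };
}
```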
Challenges we ran into
Video/LiDAR → 3D room conversion: turning casual room capture into a usable interior space was harder than expected.
Accurate 3D model generation from Pinterest images: chaining Pinterest → AI prompt → 2D render → 3D conversion meant every step needed guardrails and fallbacks.
Placement that “makes sense”: it’s not enough to place an object somewhere valid; it needs to feel natural in context (floor vs surface, near walls, not clipping, not floating). See the sketch after this list.
Reliability across many APIs: this project is basically a relay race; if one handoff fails, everything fails, so we built the pipeline to be resilient.
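On the placement point, here is a minimal sketch of the kind of guardrails we mean, assuming simple axis-aligned bounding boxes; the function names and tolerances are illustrative, not our exact heuristics:

```typescript
// Illustrative placement guardrails: reject floating and clipping candidates.
interface Box { min: [number, number, number]; max: [number, number, number] }

function intersects(a: Box, b: Box): boolean {
  // AABBs overlap iff they overlap on every axis.
  return a.min.every((v, i) => v <= b.max[i]) &&
         b.min.every((v, i) => v <= a.max[i]);
}

function isNaturalPlacement(
  candidate: Box,
  floorY: number,
  existing: Box[],
  needsFloorContact: boolean // e.g. a chair, vs. a vase that sits on surfaces
): boolean {
  const epsilon = 0.02; // 2 cm tolerance
  // Not floating: floor-bound items must actually touch the floor.
  if (needsFloorContact && Math.abs(candidate.min[1] - floorY) > epsilon) {
    return false;
  }
  // Not clipping: reject overlap with furniture already in the room.
  return existing.every((box) => !intersects(candidate, box));
}
```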
Accomplishments that we're proud of
We got the full pipeline working: Pinterest → Gemini → Nano Banana (2D) → Trellis (3D) → placed in-room.
We built a polished, modern experience with micro-interactions and motion that makes the product feel production-ready.
We connected the identity experience to a real outcome through the Shopify purchase flow.
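To illustrate that last step, here is a hedged sketch of the product lookup against the Storefront API's GraphQL endpoint; the store domain, API version, and env var name are placeholders:

```typescript
// Sketch of surfacing matching products via the Shopify Storefront API.
const ENDPOINT = "https://your-store.myshopify.com/api/2024-04/graphql.json"; // placeholder

async function findMatchingProducts(tags: string[]) {
  const query = /* GraphQL */ `
    query ($search: String!) {
      products(first: 5, query: $search) {
        edges {
          node {
            title
            handle
            priceRange { minVariantPrice { amount currencyCode } }
          }
        }
      }
    }`;
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Shopify-Storefront-Access-Token": process.env.SHOPIFY_TOKEN!, // placeholder env var
    },
    // Tags generated by Gemini become the product search string.
    body: JSON.stringify({ query, variables: { search: tags.join(" ") } }),
  });
  const { data } = await res.json();
  return data.products.edges.map((e: { node: unknown }) => e.node);
}
```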
What we learned
Building “multi-step, multi-API” systems is less about any single model and more about designing clean interfaces between steps, adding fallbacks, and pushing through integration friction (a minimal fallback wrapper is sketched below).
Identity changes over time. We learned that good design helps people make choices on purpose, so what they keep actually reflects who they are.
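A minimal example of what we mean by a fallback, written as a generic step wrapper (the names are ours):

```typescript
// Wrap a pipeline step with retries and an optional fallback, so one
// flaky handoff doesn't sink the whole relay.
async function withFallback<T>(
  step: () => Promise<T>,
  fallback?: () => Promise<T>,
  retries = 2
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
    }
  }
  if (fallback) return fallback(); // degrade gracefully
  throw lastError;                 // out of options
}
```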
What's next for Déjà View
Multi-room stitching: connect multiple scans into one continuous apartment/home.
Aesthetic understanding: recommend items based on the room’s style and the user’s patterns.
Expand beyond interiors: clothing and wearable identity using the same “discover → visualize → choose” loop.
VR mode: step into your evolving space and curate it in full immersion.

