Inspiration

We wanted to build a world where digital objects aren’t just placed at a GPS coordinate but shaped by multi-dimensional spatial data—orientation, scale, context, and time. A world where anyone can prompt a 3D idea, anchor it instantly into their environment, and have others discover it without scanning or setup. A world-sized 3D UGC canvas powered by AI.

What it does

Reality.fun is a tool that lets anyone:

  1. Prompt → Generate 3D objects instantly
  2. Place them in space with full control
     • Adjust size, position, and rotation
     • Anchor indoors or outdoors
     • Persist with time-based memory
  3. Anchor using multi-dimensional spatial signatures (not just a location coordinate)
  4. Discover objects on a world map
  5. Open the camera and instantly see anchored content—no scanning required
  6. Interact with objects (jump, open, trigger actions)

Visitors arriving at the same place can see the same object anchored in the same spatial context, turning the world into an interactive 3D prompt canvas.
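The numbered flow above can be condensed into a small data model: a prompt, a user-adjustable transform, and a creation time. A minimal Python sketch, with hypothetical names (`Transform`, `PlacedObject`) standing in for whatever the app actually stores:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Transform:
    """User-adjustable placement: position, rotation, scale."""
    position: tuple = (0.0, 0.0, 0.0)   # offset from the anchor point, in meters
    rotation_deg: float = 0.0           # yaw around the vertical axis
    scale: float = 1.0

@dataclass
class PlacedObject:
    """A prompt-generated 3D object anchored in the world."""
    prompt: str
    transform: Transform = field(default_factory=Transform)
    created_at: float = field(default_factory=time.time)

    def resize(self, factor: float) -> None:
        """Scale the object in place (step 2 of the flow above)."""
        self.transform.scale *= factor

# Prompt -> generate -> place -> adjust
obj = PlacedObject(prompt="a glowing origami crane")
obj.resize(2.0)
```

Because every visitor resolves the same stored record against the same anchor, they all see the object with the same transform.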

How we built it

  • iOS MR app built with ARKit + RealityKit
  • Text-to-3D AI generation pipeline
  • Spatial anchoring using multi-dimensional signatures: geolocation + device orientation + anchor offset + environmental cues
  • Persistent, time-aware anchor storage
  • A map UI showing all public 3D creations
  • Instant AR viewing (no scanning, no markers)
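As a concrete illustration of the multi-dimensional signature idea, here is a minimal Python sketch: two signatures resolve to the same anchor only if they agree on location, heading, and a coarse environmental fingerprint. All names and thresholds (`signatures_match`, `max_dist_m`, `env_hash`) are assumptions for illustration; the real pipeline runs on-device against ARKit data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def signatures_match(a, b, max_dist_m=10.0, max_heading_deg=45.0):
    """Two signatures refer to the same anchor only if every dimension agrees:
    geolocation within max_dist_m, heading within max_heading_deg, and the
    same environmental fingerprint."""
    if haversine_m(a["lat"], a["lon"], b["lat"], b["lon"]) > max_dist_m:
        return False
    diff = abs(a["heading_deg"] - b["heading_deg"]) % 360
    if min(diff, 360 - diff) > max_heading_deg:
        return False
    return a["env_hash"] == b["env_hash"]
```

Requiring all dimensions to agree is what lets the same GPS coordinate host distinct anchors facing different directions or in different rooms.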

Challenges we ran into

  • Stabilizing anchors indoors and outdoors with minimal SLAM
  • Fitting prompt-based 3D generation within mobile performance limits
  • Designing an intuitive interface for rotation/scale/position editing
  • Building "instant visibility" without requiring scanning
  • Handling time-based behaviors for persistence

Accomplishments that we’re proud of

  • A fully working one-stop AI → 3D → MR creation flow
  • Multi-dimensional spatial anchoring that works in both indoor and outdoor spaces
  • “Instant MR visibility” without scanning
  • Time-based persistence for content memory
  • A world map where 3D assets appear as part of a shared spatial layer
  • Interactive, linked 3D objects that can trigger actions

What we learned

  • Spatial computing needs more than coordinates; it needs contextual, multi-dimensional anchors
  • Users think naturally in 3D when given simple controls
  • Time plays a surprisingly important role in spatial content
  • Removing scanning dramatically improves usability
  • A prompt-based 3D UGC layer unlocks entirely new behaviors

What’s next for Reality.fun

  • Expand anchor viewing to Quest, Vision Pro, and other XR devices
  • Add more time-based rules (decay, scheduled visibility)
  • Creator tools for chaining objects and interactions
  • A universal space-stamp + time-stamp protocol for XR content
  • Opening Reality.fun as a spatial UGC platform and SDK
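The decay and scheduled-visibility rules mentioned above could both be expressed as a single visibility check evaluated at render time. A minimal Python sketch; the function name and rule shapes are assumptions, not the shipped logic:

```python
from datetime import datetime, timedelta

def is_visible(created_at, now, ttl=None, window=None):
    """Decide whether an anchored object should render right now.

    ttl:    optional lifetime after creation (decay rule)
    window: optional (start_hour, end_hour) daily schedule, local time
    """
    if ttl is not None and now - created_at > ttl:
        return False  # the object has decayed
    if window is not None:
        start, end = window
        if not (start <= now.hour < end):
            return False  # outside its scheduled hours
    return True
```

Keeping the rules as pure functions of (creation time, current time) means every visitor's device reaches the same verdict without coordination.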

Reality.fun is the beginning of a world where AI makes 3D creation effortless, and reality becomes a persistent, interactive, prompt-generated canvas for everyone.
