Inspiration

History education has a visualization problem. Students read about the beaches of Normandy, the chaos of the 1929 stock market crash, or the Roman Forum — but they're reading words about places. The mental model never fully forms. You can't feel the scale of a battlefield from a paragraph, or understand the desperation of Ellis Island immigrants from a sentence.

When we discovered the World Labs Marble API — the ability to generate navigable 3D worlds from a text prompt — the idea was immediate: what if every topic on a history syllabus became a place you could actually walk through? Not a video, not a diagram. A space with depth, atmosphere, and presence. Make history somewhere you go, not something you read.

What It Does

HistoryWalker turns any history course syllabus into a gallery of immersive 3D worlds. Students upload or paste their syllabus, and the app:

  • Uses Claude AI to extract 8–15 historically rich topics and write a detailed cinematic world prompt for each
  • Scrapes Wikimedia Commons for real historical reference photos per topic
  • Feeds both the prompt and reference image into the World Labs Marble API to generate a navigable 3D environment
  • Delivers a full learning experience inside each world: a Discovery Guide (5–7 specific things to find and observe), real historical photos to compare against the AI world, a Claude-powered historical guide you can chat with in character, and a quiz grounded in visual observation

The quiz questions are things like "what material were most buildings made of?" and "describe a scene of daily life you observed." You can only answer them by actually looking at the world — making the 3D environment the lesson, not decoration.
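The per-topic flow can be sketched as a small orchestration function. This is an illustrative sketch, not the actual codebase: every name and type here is hypothetical, and the real API calls are injected as parameters.

```typescript
// Illustrative sketch of the per-topic pipeline; all names are hypothetical.
type Topic = { title: string; worldPrompt: string };

type WorldPackage = {
  topic: Topic;
  referenceImage: string | null; // URL from Wikimedia Commons, if one was found
  worldUrl: string;              // navigable Marble world
  discoveryGuide: string[];      // 5-7 things to find and observe
  quiz: string[];                // observation-grounded questions
};

async function buildWorldPackage(
  topic: Topic,
  findImage: (t: Topic) => Promise<string | null>,
  generateWorld: (prompt: string, image: string | null) => Promise<string>,
  writeGuideAndQuiz: (t: Topic) => Promise<{ guide: string[]; quiz: string[] }>
): Promise<WorldPackage> {
  // Image scraping and guide/quiz generation are independent, so run them in parallel.
  const [referenceImage, { guide, quiz }] = await Promise.all([
    findImage(topic),
    writeGuideAndQuiz(topic),
  ]);
  // World generation needs both the prompt and the (optional) reference image.
  const worldUrl = await generateWorld(topic.worldPrompt, referenceImage);
  return { topic, referenceImage, worldUrl, discoveryGuide: guide, quiz };
}
```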

How We Built It

  • Next.js 14 (App Router) + TypeScript — full-stack, single repo
  • Anthropic Claude API (claude-sonnet) — syllabus parsing, discovery guide + quiz generation, in-world character chat, and open-ended answer grading
  • World Labs Marble API — 3D world generation using Marble 0.1-mini with combined image + text prompts for historically grounded output
  • Cheerio — server-side scraping of Wikimedia Commons for reference images across multiple search queries per topic
  • Server-Sent Events (SSE) — real-time streaming of generation progress to the frontend so students watch their worlds come to life
  • Framer Motion — page transitions, card animations, and the processing timeline
  • React Context + localStorage — full session persistence with no database required

The generation pipeline runs worlds in parallel batches of 3, streams status updates live, and unlocks the gallery as soon as 3 worlds are ready — so no one waits for the full batch.
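The batch-of-3 scheduling can be sketched as a small helper: run each batch's generations in parallel, but run batches sequentially so concurrency stays capped. A minimal sketch, assuming each world generation is an independent promise; the names are ours, not the actual code, and the progress callback stands in for writing an SSE event.

```typescript
// Split work into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

async function generateInBatches<T, R>(
  items: T[],
  batchSize: number,
  generate: (item: T) => Promise<R>,
  onProgress: (done: number, total: number) => void // e.g. emit an SSE event
): Promise<R[]> {
  const results: R[] = [];
  for (const batch of chunk(items, batchSize)) {
    // Items within a batch run in parallel; batches run one after another.
    const finished = await Promise.all(batch.map(generate));
    results.push(...finished);
    onProgress(results.length, items.length); // gallery can unlock once enough are done
  }
  return results;
}
```

With a batch size of 3, the first `onProgress` fires as soon as the first three worlds exist, which is the moment the gallery unlocks.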

Challenges We Ran Into

The iframe embedding block. Marble worlds are designed to be embedded, but marble.worldlabs.ai sends headers that block iframe rendering in most browsers. We discovered this only after building the full split-pane layout around it. The fix was to default to a rich fallback view with a prominent "Open 3D World" button, keeping the discovery guide, reference images, chat, and quiz fully functional — turning a blocker into a feature.
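The decision of whether to even attempt the iframe can be made server-side by inspecting the headers that govern embedding. A hedged sketch, not the actual code: `allowsFraming` is a hypothetical helper that reads `X-Frame-Options` and the CSP `frame-ancestors` directive from a header map (e.g. from a HEAD request to the world URL).

```typescript
// Decide whether a third-party page may be rendered in our iframe, based on
// its X-Frame-Options and Content-Security-Policy headers. Hypothetical helper.
function allowsFraming(
  headers: Record<string, string>,
  ourOrigin: string
): boolean {
  const xfo = (headers["x-frame-options"] ?? "").toLowerCase();
  // DENY blocks everyone; SAMEORIGIN blocks every third-party site, including us.
  if (xfo.includes("deny") || xfo.includes("sameorigin")) return false;

  const csp = (headers["content-security-policy"] ?? "").toLowerCase();
  const fa = csp
    .split(";")
    .map((d) => d.trim())
    .find((d) => d.startsWith("frame-ancestors"));
  if (!fa) return true; // no frame-ancestors directive: CSP does not block framing
  // Blocked unless the directive allows everyone or explicitly lists our origin.
  return fa.includes("*") || fa.includes(ourOrigin.toLowerCase());
}
```

When this returns false, the app renders the fallback view with the "Open 3D World" button instead of an iframe that would silently refuse to load.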

Image scraping reliability. Wikimedia Commons changes its HTML structure frequently, making CSS selectors brittle. We combined multiple extraction strategies — MediaSearch page scraping plus a Wikipedia API fallback — and gracefully degrade to text-only world generation when no image is found. Nothing crashes; the world still generates.
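The fallback chain reduces to a simple pattern: try each extraction strategy in order, swallow failures, and return null when everything comes up empty so the caller can fall back to text-only generation. A minimal sketch under those assumptions; the function names are illustrative.

```typescript
// Try extraction strategies in order; return the first image URL found,
// or null so world generation degrades gracefully to text-only.
type ImageStrategy = (topic: string) => Promise<string | null>;

async function findReferenceImage(
  topic: string,
  strategies: ImageStrategy[]
): Promise<string | null> {
  for (const strategy of strategies) {
    try {
      const url = await strategy(topic);
      if (url) return url;
    } catch {
      // A brittle selector or a network error just falls through
      // to the next strategy; nothing crashes.
    }
  }
  return null; // caller generates the world from the text prompt alone
}
```

In HistoryWalker's case the strategy list was MediaSearch page scraping with Cheerio followed by the Wikipedia API fallback.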

Prompting for spatial richness. Generic prompts produce generic worlds. Getting Claude to write prompts specifying time of day, light direction, human activity, specific materials, and scale — rather than just "a Roman marketplace" — required careful system prompt design. The difference between "Ellis Island" and a prompt describing the specific smell of thousands of people in a vaulted Beaux Arts hall on a summer morning is the difference between a grey box and a world.
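The shape of that system prompt matters more than its exact wording. The constant below is a paraphrased sketch of the kind of constraints involved, not the production prompt.

```typescript
// Paraphrased sketch of a system prompt that pushes Claude toward spatially
// rich world descriptions. Not the production prompt; wording is illustrative.
const WORLD_PROMPT_SYSTEM = `
You write cinematic prompts for a 3D world generator.
For the given historical topic, your prompt MUST specify:
- time of day and the direction and quality of light
- concrete building materials and surface textures
- human activity: who is present and what they are doing
- sense of scale: ceiling height, crowd density, distances
- period-accurate sensory detail (sound, weather, wear, dirt)
Never output a bare label like "a Roman marketplace".
`.trim();
```

Requiring each of those dimensions explicitly is what turns "Ellis Island" into a prompt the generator can actually build a world from.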

Generation speed vs. quality. Marble 0.1-plus produces stunning results but takes ~5 minutes per world. Marble 0.1-mini hits 30–45 seconds — the difference between a live demo and a loading screen. We used mini everywhere and optimized around it.

Accomplishments That We're Proud Of

The thing we're most proud of is that the quiz can't be cheated by reading Wikipedia. Every question is grounded in visual observation — what you saw, what scale felt like, what the atmosphere communicated. That's only possible because the world exists as a navigable space, not an image or a paragraph. Connecting the Marble world directly to assessment is what makes this an actual learning tool rather than a cool demo.

We're also proud of the reference image comparison — showing real historical photographs side-by-side with the AI-generated world turned out to be one of the strongest features. The contrast between "what it actually looked like" and "what the AI imagined" is itself educational, and students naturally start analyzing the differences.

Finally, the end-to-end pipeline — from raw syllabus text to explorable 3D world with discovery guide, chat, and quiz — runs entirely on two API keys and deploys as a single Next.js app with no database.
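The no-database persistence reduces to serializing session state into localStorage and hydrating from it on load. A sketch under that assumption, with the storage injected behind a small interface so the logic is testable outside a browser; the key name is hypothetical.

```typescript
// Persist session state through an injected Storage-like interface; in the
// browser, window.localStorage satisfies it. Illustrative sketch.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SESSION_KEY = "historywalker-session"; // hypothetical key name

function saveSession<T>(store: KeyValueStore, session: T): void {
  store.setItem(SESSION_KEY, JSON.stringify(session));
}

function loadSession<T>(store: KeyValueStore, fallback: T): T {
  const raw = store.getItem(SESSION_KEY);
  if (raw === null) return fallback;
  try {
    return JSON.parse(raw) as T;
  } catch {
    return fallback; // corrupted entry: start a fresh session
  }
}
```

A React Context provider can call `saveSession` on every state change and hydrate from `loadSession` on mount, which is all the "persistence layer" a single-user session needs.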

What We Learned

The combination of language model and spatial generation is more powerful than either alone. Claude doesn't just extract topics — it writes the cinematic prompt, the discovery guide, the quiz, and the in-character chat persona, creating a coherent learning arc around a world that didn't exist an hour ago. The AI isn't filling a template; it's designing an experience.

We also learned that reference images are load-bearing for Marble output quality. The same text prompt produces a noticeably more historically grounded world when anchored to a real photograph. Feeding a Wikimedia photo of Ellis Island's Great Hall alongside the text prompt results in the arched windows, the wooden benches, and the natural light showing up in the generated world in ways pure text doesn't reliably produce.

What's Next for HistoryWalker

  • LMS integration — connect to Canvas or Google Classroom so teachers can assign worlds and track student quiz scores directly
  • Teacher dashboard — upload a syllabus, review the extracted topics and prompts before generation, swap in better reference images
  • Multiplayer worlds — drop students into the same Marble world simultaneously for collaborative exploration and shared discovery
  • Student-generated worlds — let students write their own world prompts as an assessment ("describe the environment of the event you researched") and generate a world from their writing
  • Broader curriculum support — the same pipeline works for geography, literature settings, science environments (inside a cell, on the surface of Mars), and architecture history

Built With

  • Next.js 14
  • TypeScript
  • React
  • React Context
  • Node.js
  • Anthropic Claude API
  • World Labs Marble API
  • Cheerio
  • Server-Sent Events
  • Framer Motion
  • Tailwind CSS
  • Wikimedia Commons
  • Vercel