Skywalkr

May the knowledge be with you - Walking you through any tutorial, one frame at a time


Mission Control: Sponsors & Technologies

Sponsor Technology Mission Role
Anthropic Claude Sonnet 4.5 AI Reasoning & Storyboard Generation
BrightData Web Unlocker API Web Scraping & Content Retrieval
Fish Audio Text-to-Speech API Voice Narration Generation
ElevenLabs Audio Generation Dynamic Music & Soundscapes
Fetch.AI Agentic AI Workflow Improving and Assisting Claude's Generation
Google Gemini Cartoon Image Generation
AWS S3 Storage, Cognito Hosting, Caching, OAuth
MLH .TECH Domain
Supermemory Memory API Adaptive Learning, Visualizing User Knowledge

Why We Built This

We’ve all been there: staring at a massive Github repo, a deeply technical article, feeling completely lost. But for some of our friends with learning disabilities like ADHD or dyslexia, its overwhelming to ingest all that information at once.

One of our teammates has a sibling who’s an incredibly talented student, but processing information like this can be exhausting for them. They’d spend hours trying to understand something, because information was not being presented in a way their brain could digest. We realized this is a huge barrier for so many people breaking into tech or learning new skills (even outside of learning disabilities). Most documentation assumes you can ingest walls of text, parse through complex examples, and construct a big-picture understanding. But how about for those whose brain works differently? Instead of dense text, how about bite-sized chunks with visual aids?

This is why we built Skywalkr. We wanted to take something that can take anything online (no matter how massive or complex) and transform it into simple, digestible frames that anyone can understand. It’s those “explain like I’m 5” threads on Reddit, but includes storyboards with images and funny voices to make it an engaging process.

The core problem we're solving: Information overload is a genuine accessibility issue. Not everyone can process huge amounts of text the same way, and traditional documentation doesn't support different learning styles or cognitive needs.

Our solution: Skywalkr breaks down complex information into multiple simple frames, including just the essential concepts you need to understand. Each frame has a cartoon visual, a voiced explanation, and you can even choose the narrative style that resonates with you. Want it explained like a pizza restaurant? Done. Need it super simple like you're 5? We got you.


What Skywalkr Actually Does

Paste in any GitHub repo URL or website link, pick how you want it explained (like you're 5 years old, as a pizza restaurant analogy, in college bro speak, whatever works for you), a celebrity voice (Kim K, Darth Vader, or even Spongebob). Skywalkr does the rest. After its generation process, you get this interactive cartoon tutorial that walks you through the key concepts. No 50-page README to go through.

What Makes It Cool

  1. AI figures out the minimum number of frames needed to explain something properly. Not too many (overwhelming), not too few (confusing). Usually lands around 4-8 frames, and fluctuates based on the size of the input.

  2. Authentic voice, from people/characters everyone enjoys! The generation has a real voice (with options to choose) reading the explanation to you. No robotic text-to-speech. This sounds natural and engaging.

  3. Navigate forward and backward whenever you want. Pause, go back, or skip ahead. It's YOUR personalized learning journey.

  4. If an explanation doesn't make sense, hit "Rephrase" and get the same info explained differently. It’s quick but effective.

  5. If someone already made a tutorial for that repo/site with that style, you get it instantly. No waiting for regeneration. Benefits of cache!


How We Built It

Tech Stack

Frontend: Next.js + React + Tailwind CSS
Backend: Next.js API Routes (Serverless)
Deployment: Vercel / AWS Lambda
Storage: AWS S3 + CloudFront CDN

Architecture Flow

Instead of generating expensive, slow videos, we built a stop-motion type concept. Think of it like a flipbook, where you only need the essential frames to tell the story. A person walking doesn't need 60 fps. You just need three images: person in house one, person halfway between houses, person in house two. That's the core concept behind Skywalkr.

User Input → Web Scraping (BrightData) → AI Analysis (Claude + Fetch.AI) → Image Generation (Pollinations / Gemini)
                                      ↓
                         Music Generation (ElevenLabs background music) & Audio Narration (Text-to-speech via Fish.Audio)
                                      ↓
                           Interactive Viewer + S3 Storage

Our Case for Various Tracks

Cal Hacks: Overall

Skywalkr is an AI-powered tutorial generation platform that transforms any codebase or website into engaging, cartoon-style storyboard tutorials with generated images and celebrity voice narration. What sets Skywalkr apart is its composable 7-node pipeline architecture that makes learning not just accessible, but genuinely entertaining. Our system ingests content through BrightData's web scraping MCP, processes it through a multi-agent Fetch.AI workflow that builds knowledge graphs, and leverages Claude's advanced reasoning to generate tutorials in distinct narrative styles, ranging from "Explain Like I'm 5" to "College Frat Guy." The platform features a self-improving AI that critiques and revises its own explanations, ensuring clarity before generating visuals via Pollinations and narration through Fish.Audio's celebrity voices (including Darth Vader, perfectly fitting our Star Wars-themed skywalkr.tech domain). Users can generate clarification for frames on-demand, and customize their learning experience by changing style and voice, all without re-processing the source content thanks to intelligent caching. Built on AWS infrastructure (Cognito, DynamoDB, S3) for scalability, Skywalkr shows technical sophistication, creative problem-solving, along with social impact by democratizing education through playful story visualization.

Cal Hacks: Hacker's Choice

Skywalkr represents the kind of hack that makes you think "I wish this existed when I was learning to code." We took the intimidating world of technical documentation and transformed it into something genuinely fun. Imagine learning React's reconciliation algorithm explained through the lens of a college frat guy, or understanding WebSockets through a pizza restaurant analogy, all narrated by Minnie Mouse or Darth Vader. The technical execution is impressive (composable pipeline architecture, self-critiquing AI, knowledge graph extraction), but what makes Skywalkr special is how it feels to use. The clarification ability feels like having a patient tutor who never gets tired of your questions. We even added a Star Wars themed UI because learning should transport you to another galaxy. This is the hack we built because we wanted to use it ourselves, and we think that authentic need shows in every interaction.

Cal Hacks: Most Creative Hack

Skywalkr's creativity lies in its premise: what if any complex technical thing was as entertaining as a Pixar film? We didn't just add voice-over, but basically built a complete storytelling engine.

Our Critique → Revise loop means the AI literally reads its own work, identifies confusing sections, and rewrites them before you ever see them. The clarification micro-frames feature generates entire mini-tutorials on-the-fly when users press it, using a knowledge graph to understand context and relationships. Want to learn about database indexing? Choose the "car factory" narrative style and watch B-trees explained as assembly line optimization, complete with cartoon factory visuals and Kim K’s narration. We even styled the entire platform with a Star Wars theme for sky“walk”r.tech, because if you're going to help people "walk" through the stars of knowledge, commit to the bit.

Cal Hacks: Greatest Social Impact

The education gap in technology is all about access to understandable information. Skywalkr addresses this head-on by transforming intimidating content into multiple learning styles tailored to different backgrounds and preferences. A high school student struggling with their first programming concepts can use "Explain Like I'm 5" mode with cartoon visuals and simple analogies. A career-switcher from the service industry might finally grasp microservices architecture through the "pizza restaurant" analogy, where each service is a station in the kitchen. Non-native English speakers benefit from our concise, conversational narration style that avoids complex jargon. The self-improving AI (Critique → Revise loop) ensures explanations are clear before reaching users, reducing frustration and dropout rates. The clarification feature means learners never hit a wall. Stuck on one concept? Press the clarify button, explaining just that piece, preserving the flow of learning. By making tutorials entertaining (celebrity voices, cartoon visuals, humor mode), we combat the motivation problem that kills most self-directed learning. Built on AWS infrastructure, Skywalkr can scale to serve millions of learners globally, storing personalized tutorials in DynamoDB and delivering media through S3 CDN. Skywalkr makes the entire internet's worth of codebases and documentation accessible to anyone, regardless of their learning style or background.

Claude: Best Use of Claude

Skywalkr showcases advanced Claude implementation across multiple dimensions that go beyond basic API calls. Our system uses Claude's claude-sonnet-4-5 model in three ways:

First, in the StoryboardDraftNode, Claude transforms raw scraped content into structured tutorial frames, applying style-specific system prompts that inject personality while maintaining pedagogical rigor. This means generating analogies and visual scene descriptions from unstructured technical documentation.

Second, and most innovatively, we implement a self-improving AI loop where Claude critiques its own output: the CritiqueNode has Claude review each generated frame, assigning severity scores (0-3) to clarity issues, identifying jargon, and suggesting improvements; the ReviseNode then feeds these critiques back to Claude, which rewrites unclear sections while maintaining stylistic consistency. This two-pass architecture ensures quality without human intervention—Claude literally edits itself.

Third, the ClarificationNode demonstrates advanced contextual reasoning: when users ask questions mid-tutorial, Claude can look at the knowledge graph (which are the entities and relationships extracted from the scraped website source), surrounding tutorial frames for context, and generates a new caption that can help the user without derailing the main narrative flow. The creative use case extends Claude beyond standard dev workflows as we’re using its reasoning capabilities for narrative styles and humor.

The KnowledgeGraphExtractor has Claude perform entity extraction and relationship mapping from codebases, outputting structured JSON with nodes (files, functions, classes, concepts) and edges (imports, calls, extends, uses). We are essentially using Claude's code understanding to build a semantic map that enables later clarification. Skywalkr uses Claude's reasoning to meet learners where they are, adapting explanations to their level and style preferences.

Amazon/AWS

Skywalkr is architected from the ground up on AWS infrastructure, demonstrating production-grade implementation across three critical services that ensure scalability, security, and global availability.

AWS Cognito powers our entire authentication system with email/password sign-in, email verification workflows, and JWT-based session management, enabling secure, horizontally scalable user access without building custom auth infrastructure. The Cognito integration protects all API routes through middleware that verifies JWT tokens, ensuring only authenticated users can generate and access tutorials.

AWS DynamoDB serves as our primary database with two tables: tutorialize-users (partition key: userId) stores user metadata and preferences, while tutorialize-storyboards (partition key: userId, sort key: sessionId) stores saved tutorial data with complex nested frame structures including narration, visual descriptions, media URLs, and revision metadata. The DynamoDB implementation demonstrates NoSQL expertise with efficient query patterns and a Global Secondary Index on sessionId for cross-user tutorial lookups. AWS S3 handles all media storage, serving as the backbone for our tutorial delivery system, as every generated image (PNG from Stable Diffusion) and audio file (MP3 from Fish.Audio) uploads to S3 with organized paths ({sessionId}/frame{N}.{ext}), configured with public read policies and CORS headers for browser access. The S3 integration includes proper error handling and URL generation for reliable media delivery. Together, these services enable Skywalkr to potentially be a production-ready platform that could serve a larger audience.

Fetch.ai

Skywalkr leverages Fetch.ai's autonomous agent framework to orchestrate a sophisticated multi-agent workflow that collaborates with Claude AI to produce high-quality storyboard tutorials.

Our implementation demonstrates all five judging criteria at a high level:

Functionality & Technical Implementation (25%): We deployed three specialized Fetch.ai agents that communicate and reason in real time: The RepoFetcherAgent chunks large documentation into manageable segments and generates embeddings for semantic understanding; the StructureAnalyzerAgent extracts entities (classes, functions, components) and builds a dependency graph using NetworkX, mapping out how code components relate to each other; and the FlowReasoningAgent analyzes the dependency graph to produce a coherent execution flow that describes how the system works step-by-step. These agents operate asynchronously via the agent_graph.py orchestrator, passing structured data between each other before handing off to Claude for narrative generation. The entire agent pipeline executes via subprocess with timeout handling, demonstrating robust production-quality implementation.

Use of Fetch.ai Technology: Our agents are registered and discoverable through the Agentverse, integrated with the Chat Protocol for ASI:One discoverability. The agent architecture follows Fetch.ai's best practices with the uagents>=0.12.0 SDK, implementing proper message passing and state management. The agents leverage Fetch.ai's autonomous capabilities to make intelligent decisions about content chunking, entity extraction thresholds, and graph traversal strategies without human intervention. Innovation & Creativity (20%): The integration of Fetch.ai agents with BrightData's Web MCP and Claude creates a novel pipeline where autonomous agents handle structured reasoning (graph building, dependency analysis) while Claude handles creative tasks (narrative generation, analogy creation). This division of labor is unconventional—most systems use either agents OR LLMs, not both in complementary roles. The knowledge graph output from our agents powers the clarification feature, where users can ask questions mid-tutorial and receive contextually-aware answers based on the entity relationship map the agents built.

Real-World Impact & Usefulness: By using agents to preprocess and structure technical content, Skywalkr can handle massive codebases that would overwhelm a single LLM call. The agent-based knowledge graph enables advanced features like clarification and adaptive tutoring—the system "understands" how concepts relate because agents mapped those relationships. This solves a critical problem: most AI tutoring systems can't answer "why does this function call that other function?" because they lack structural understanding. Our agent-generated knowledge graph provides exactly that.

User Experience & Presentation: The agent processing happens seamlessly in the background during the SourceLoad and KnowledgeGraph nodes of our pipeline. Users simply input a URL and receive a sophisticated tutorial—they don't see the agent orchestration, but they benefit from the superior content structure it provides. The demo clearly shows how agent-analyzed content produces better tutorials with more logical flow and more accurate clarification responses. Fetch.ai agents transform Skywalkr from a simple scraper-to-AI pipeline into an intelligent content understanding system that reasons about code structure and semantics.

BrightData

Skywalkr's foundation depends entirely on BrightData's Web MCP (Model Context Protocol) to access and parse the vast universe of technical content across the internet. Our implementation showcases BrightData's capabilities at scale: We integrated the BrightData MCP Server as our primary content acquisition layer, using the scrape_as_markdown tool to convert any website, regardless of JavaScript rendering, bot protection, or complex DOM structures. BrightData's infrastructure executes JavaScript, handles authentication flows, bypasses rate limits, and normalizes content from thousands of different website structures into a consistent format our AI pipeline can process.

The SmartScraper adapter in our codebase implements a three-tier fallback strategy: BrightData MCP → Web Unlocker API → basic fetch, ensuring 99%+ success rates across diverse content sources.

Conway: Most Data-Intensive Application

Skywalkr processes large volumes of data with significant computation per record. Each tutorial generation involves: (1) Fetching and parsing 10,000+ characters of website content (HTML, CSS, JavaScript, documentation); (2) Processing through Fetch.ai agents that perform entity extraction, relationship mapping, and graph analysis, computationally expensive operations that involve pattern matching across the entire codebase structure; (3) Knowledge graph aggregation that builds NetworkX graphs with 8-15 nodes and edges, performing graph traversal to determine execution flows; (4) Claude processing that performs deep semantic analysis to transform raw content into structured educational narratives; (5) Critique and revision loops that re-analyze every generated frame; (6) Image generation that processes visual scene descriptions through Stable Diffusion models; (7) Audio generation that performs TTS on every frame's narration. The data pipeline handles diverse mediums: markdown, HTML, Python/JavaScript source code, API documentation, README files, configuration files, and unstructured web content. Each tutorial represents processing 10-50MB of raw data through multiple AI models, graph algorithms, and media generation services, easily qualifying as computationally intensive. The BrightData scraping layer is what makes this volume possible, reliably extracting structured data from any website architecture.

Best Use of MCP

Our BrightData MCP integration demonstrates proper technical use in several ways: The BrightDataMCPClient implements the full MCP protocol with proper tool discovery, parameter validation, and error handling. We leverage MCP's composability by chaining the BrightData scraping tool output directly into our Fetch.ai agent workflow, where the markdown output becomes the input for agent-based entity extraction. The MCP architecture enables a novel use case: by abstracting web scraping behind a standardized interface, our pipeline can seamlessly support future MCP servers (Google Search MCP, GitHub MCP, API documentation MCPs) without rewriting business logic; just swap the MCP server URL. The connection between BrightData MCP → Fetch.ai agents → Claude represents three distinct AI/automation systems communicating easily. The brightdata-mcp-runner.js Node.js script demonstrates cross-runtime MCP integration, showing how MCP can bridge JavaScript (Next.js backend) and Python (Fetch.ai agents) seamlessly. This is MCP as it was meant to be used: composable, protocol-driven automation that chains multiple AI services into a coherent workflow. (BrightData's) MCP is what makes Skywalkr's promise possible.

Fish.Audio

Skywalkr leverages Fish.Audio's TTS API as the voice of our entire tutorial experience, transforming static text explanations into engaging auditory storytelling that makes learning genuinely entertaining.

Technical Novelty: Our implementation goes beyond basic text-to-speech by integrating Fish.Audio into our pipeline. The VoiceGenNode processes every tutorial frame's narration, sending requests to Fish.Audio's API and uploading the resulting MP3s to AWS S3 for CDN delivery. We implemented intelligent error handling and graceful degradation. If audio generation fails for one frame, the system continues processing remaining frames, ensuring partial success rather than total failure. The FishAudioTTSAdapter follows a clean interface design, making it swappable with other TTS providers while maintaining the same pipeline semantics. We support multiple voice IDs, allowing users to select from Fish.Audio's voice library (including celebrity voice options) to personalize their learning experience.

Creativity: The creative leap is using Fish.Audio to make technical education entertaining. We offer celebrity voice options including Darth Vader (perfect for our Star Wars-themed skywalkr.tech domain), Morgan Freeman, and other recognizable voices that transform dry technical content into memorable experiences. Imagine learning Kubernetes explained by Darth Vader: "The pod's failure is complete. The deployment will restart it automatically." The narrative styles we offer (College Frat Guy, Explain Like I'm 5, Pizza Restaurant analogy) become 10x more impactful when delivered with appropriate vocal performance: Fish.Audio's natural-sounding TTS makes the "frat guy" style genuinely sound like your buddy explaining code, not a robotic voice reading text.

Broader Impact: Audio narration dramatically improves accessibility and learning outcomes. Visual learners can read the frame text while auditory learners absorb the narration. Users with dyslexia or reading difficulties can follow along with audio guidance. Commuters can listen to tutorials while driving or on public transit. Non-native English speakers benefit from hearing pronunciation alongside text. The clarification micro-frames feature becomes more powerful with audio: users ask questions and immediately receive spoken explanations, creating a conversational tutoring experience. Studies show multimodal learning (visual + auditory) improves retention by 40% and Fish.Audio's TTS is what enables Skywalkr's multimodal approach.

Use of Fish Audio API: Every tutorial frame includes a call to Fish.Audio's /tts endpoint, converting narration text into MP3 files. We properly handle API rate limits, implement retry logic for transient failures, and manage authentication via API keys. The audio files integrate seamlessly into our React-based tutorial viewer with HTML5 audio controls, auto-play options, and frame synchronization. The rephrase-audio.ts endpoint is where users can request alternative phrasings of frame narration, triggering new Fish.Audio API calls to regenerate audio with different wording while maintaining the same voice. Fish.Audio transforms Skywalkr from a visual tutorial generator into a complete entertainment system for learning, where education feels less like reading documentation and more like watching an engaging documentary narrated by your favorite voice actor.

Most Functional, Novel, and Fun (Janitor.ai)

Functionality: Skywalkr is a complete, working product from end to end. Users authenticate via AWS Cognito, input any URL (GitHub repo or website), select from five narrative styles, and receive a fully-generated tutorial with cartoon images and celebrity voice narration—all within 60 seconds. The 7-node composable pipeline (SourceLoad → KnowledgeGraph → StoryboardDraft → Critique → Revise → ImageGen → VoiceGen) executes reliably with proper error handling at each stage. Users can navigate frames, play audio, ask clarification questions that generate new micro-frames on-the-fly, and remix their entire tutorial (change style, voice, art direction) with one click. The save feature persists tutorials to DynamoDB, allowing users to build a personal library. The remix dock demonstrates sophisticated caching, where changing the narrative style regenerates only the affected pipeline nodes, completing in ~10 seconds versus ~60 seconds for a full generation. Every feature actually works in the demo.

Novelty: The concept is genuinely original. No existing platform combines AI-powered content analysis, multiple narrative styles, cartoon storyboard generation, celebrity voice narration, and interactive clarification into a unified tutorial experience. The Critique → Revise loop where AI edits itself is novel. The knowledge graph-powered clarification that generates contextually-aware micro-frames on-demand is innovative. The instant remix capability with selective node re-execution shows architectural sophistication. The five narrative styles (Explain Like I'm 5, College Frat Guy, Pizza Restaurant Analogy, Car Factory Analogy, Professional) represent a creative approach to meeting diverse learning preferences. The Star Wars theme at skywalkr.tech with Darth Vader narration options commits fully to an entertaining educational vision that nobody else is attempting.

Fun: This is where Skywalkr truly shines. Learning React's reconciliation algorithm from a "college frat guy" perspective is objectively hilarious: "Bro, so like, React doesn't wanna re-render your whole DOM every time, right? That'd be like super inefficient..." Learning WebSockets explained as a pizza restaurant's order flow (customers are clients, kitchen is the server, orders are messages) with cartoon visuals makes complex networking concepts immediately graspable and memorable. The Darth Vader voice option narrating your AWS Lambda tutorial creates genuine entertainment value. The cartoon-like art style makes even the driest infrastructure-as-code documentation visually engaging. And last but not least, the Star Wars UI theme with starfield backgrounds and sci-fi typography makes the entire experience feel like an adventure.

Best Use of AI (Reach Capital)

Skywalkr represents an application of AI to learning, directly addressing Reach Capital's investment thesis around the future of education. The platform leverages AI across multiple layers to fundamentally improve how people acquire knowledge:

AI for Adaptive Learning: Claude's Sonnet 4.5 model powers style-specific tutorial generation, adapting complex technical content to different learning preferences and backgrounds. A high school student learning their first programming language uses "Explain Like I'm 5" mode; a career-switcher from hospitality grasps microservices through the "pizza restaurant" analogy; an experienced developer prefers "professional" mode. Single source content → five distinct learning experiences, each optimized for different audiences. This is AI enabling personalized education at scale.

AI for Quality Assurance: The Critique → Revise loop demonstrates AI's potential for self-improvement in educational content. Traditional tutoring systems generate explanations and hope they're clear. Skywalkr has AI review its own output, identify jargon, spot logical gaps, assign severity scores, and rewrite unclear sections before students ever see them. This creates a higher baseline quality than human-authored content, which rarely undergoes this level of editorial scrutiny.

AI for Interactive Tutoring: The clarification system uses AI to answer student questions contextually. When a learner asks "How does JSX work?" mid-React tutorial, the system generates 1-3 micro-frames that explain JSX specifically, using the knowledge graph to understand which concepts relate, which frame to insert the explanation after, and how to maintain narrative continuity. This mimics one-on-one tutoring where teachers adapt to student confusion in real-time—something impossible with static content.

AI for Content Creation: Fetch.ai agents perform entity extraction and knowledge graph construction, mapping how code components relate. Stable Diffusion generates cartoon visuals that illustrate abstract concepts. Fish.Audio TTS creates natural-sounding narration with celebrity voices. This multi-AI orchestration (agents for reasoning, LLMs for narrative, diffusion models for visuals, TTS for audio) showcases how different AI modalities can compose into a coherent educational product.

AI for Accessibility: The system automatically generates multiple representations of the same content (text, visuals, audio narration) enabling multimodal learning that serves visual, auditory, and reading/writing learners equally. The concise, conversational narration style (enforced through Claude prompts) reduces cognitive load for non-native speakers and users with learning differences.

Transforming Work: Technical documentation is work. Reading 40 pages of AWS docs to understand Lambda cold starts is work. Skywalkr transforms that work into entertainment, because learning becomes something you'd choose to do, not something you have to force yourself through. This mindset shift is what drives self-directed learning success.

Impact Potential: Skywalkr can ingest and transform the entire internet's worth of content:every GitHub repository, every documentation site, every Investopedia article. The AWS infrastructure (Cognito, DynamoDB, S3) enables scale early. The caching feature, powered by AWS, allows institutions to build curated learning paths. The impact is democratizing access to technical knowledge by making it enjoyable for anyone, regardless of learning style or background. Reach Capital invests in founders reimagining education through technology. Skywalkr represents that vision. AI doesn't replace teachers, it makes personalized, adaptive, engaging education accessible to billions.

Best .tech Domain Name

skywalkr.tech perfectly captures our team’s drive, and represents who we are as passionate developers. The product’s name “Skywalkr” is a creative play on “Skywalker” from Star Wars, fitting our complete Star Wars-themed UI with starfield backgrounds, sci-fi typography, and Darth Vader voice narration options. It’s inspired from the long journey Anakin Skywalker went through to train to become a Jedi. Users, similarly, “walk” through steps of extremely technical concepts with a guided journey. We want Skywalkr to become the one stop shop for people to fully digest information they don’t understand, and we believe this name will allow us to accomplish exactly that. By integrating our own SSL certificate from CloudFlare to ensure site security & protection, we bring access to this platform to more students. We fully committed to the theme throughout the product, something we think students will appreciate.

Crater: Most Composable, Iterative, and Playful Design

Skywalkr's architecture and user experience show composability, iteration, and playfulness at every level, creating a product that is easy to use and effective.

Composability: The 7-node pipeline architecture is deeply composable—each node (SourceLoad, KnowledgeGraph, StoryboardDraft, Critique, Revise, ImageGen, VoiceGen) implements the same PipelineNode interface, making them independently swappable, testable, and chainable. Want to add sentiment analysis? Insert a SentimentNode between KnowledgeGraph and StoryboardDraft—no changes to other nodes required. The adapter pattern for external services (LLMAdapter, ScraperAdapter, ImageAdapter, TTSAdapter) means swapping Claude for GPT-4 or Pollinations for DALL-E requires changing a single line of code. The PipelineExecutor cache system composes with node definitions—nodes declare isCacheable: true/false and the executor handles the rest. The knowledge graph output from early nodes composes into later clarification features. Data flows through the pipeline, each node building on previous outputs. This is an elastic architecture where components snap together in novel combinations.

Iterative & Self-Improving: The Critique → Revise loop is inherently iterative. The AI drafts frames, reviews them for clarity issues, and revises unclear sections. This is a feedback loop that improves quality automatically. The clarification system allows iterative depth—users can keep pressing a simple button, generating new captions that stack on previous explanations, building understanding layer by layer. Playful Design: The five narrative styles inject playfulness—learning AWS Lambda from a "college frat guy" is objectively fun: "Yo, so Lambda is like, you don't even need a server bro, AWS just runs your function when stuff happens." The celebrity voice options (Darth Vader narrating Kubernetes, Morgan Freeman explaining React) transform dry technical content into entertainment. The cartoon art style with visual scenes like "a bustling pizza restaurant kitchen with order tickets flying" makes abstract concepts concrete and memorable. The humor toggle in the remix dock adds jokes and pop culture references to technical explanations. The Star Wars UI theme at skywalkr.tech, with starfield backgrounds, sci-fi fonts, hologram-styled tutorial frames, makes using the product feel like a game rather than a learning management system.

Adaptability & Usefulness: The system adapts to users in unexpected ways. The same source content transforms into five completely different learning experiences based on style selection. The knowledge graphs (the one generated by the Fetch.Ai agents for Claude to better generate scripts AND the one on the backend generated by Supermemory to see the user’s learning ability) enables the system to understand relations and adapt to user confusion. The cache-aware part of the project makes the system feel instant and responsive, adapting to preferences without lag. The graceful degradation (if image generation fails, frames still display with narration) means the system adapts to partial failures.

And, in our opinion, these are our more "Wow, How Did They Do That?" parts of the project:

Asking a clarification question and seeing new captions generate and properly insert itself mid-tutorial with full audio and flow maintained

Having customizable settings, changing style + voice, clicking apply, and seeing the entire tutorial be different

Realizing the same tutorial can be Darth Vader explaining it seriously OR a frat guy making jokes—and switching between them is one click

The smooth frame navigation where images and audio sync perfectly, despite being generated asynchronously

The Supermemory knowledge graph visualization (hypothetical feature we could demo) showing how the system "understands" user learning needs

Crater's criteria ask for products that build on themselves and adapt in unexpected ways. Skywalkr's pipeline doesn't just execute. Skywalkr learns (critique loop), remembers (caching), adapts (remix), and grows (clarification micro-frames). This is composability, iteration, and playfulness as core design principles.

ElevenLabs

While our primary voice generation utilizes Fish.Audio's API for narration, ElevenLabs powers the entire ambient audio experience that makes Skywalkr tutorials feel cinematic rather than instructional. We use ElevenLabs' music generator to create dynamic background music and ambient soundscapes that adapt to each tutorial's narrative style. This audio layer transforms Skywalkr from a visual + narration experience into a fully immersive learning environment where every interaction is reinforced with audio feedback. The entire star wars experience is generated through ElevenLabs. We thought through every user interaction with Skywalkr, from scrolling through the stars, then going through hyperspace, and arriving at your destination (“storyboard”). All of the audio heard comes from ElevenLabs! The best part of all of this is the music doesn’t sound robotic whatsoever. The sound effects feel professionally produced, making it an entire audio experience that makes educational content feel like entertainment.

Snapdev

Innovation and Creativity (25%): Skywalkr innovates by transforming static documentation into interactive, narrative-driven learning experiences. The composable 7-node pipeline architecture with self-improving AI (Critique → Revise loop) is architecturally novel. The five narrative styles and celebrity voice options represent a creative approach to educational personalization. The clarification micro-frames that generate on-demand based on knowledge graph context showcase innovation in interactive tutoring. The Star Wars themed UI with Darth Vader narration commits to a creative vision of making learning entertaining.

Impact and Usefulness (25%): Skywalkr addresses a critical gap—technical documentation is comprehensive but incomprehensible for many learners. By offering multiple narrative styles, cartoon visuals, and audio narration, we make the entire internet's technical content accessible to diverse learning preferences. The self-improving AI reduces frustration by catching unclear explanations before users see them. The clarification feature prevents learners from hitting walls and giving up. Built on AWS (Cognito, DynamoDB, S3), the platform scales globally. The saved tutorial library enables institutional use for training programs. Real-world impact: faster onboarding for new developers, reduced barriers to career switching into tech, improved retention in self-directed learning.

User Experience (25%): The UX is polished and intuitive. Users authenticate seamlessly via Cognito, input a URL with style selection, and watch the pipeline progress through visual indicators. The tutorial viewer features smooth frame navigation, synchronized audio playback, and clear CTAs for clarification and saving. The remix dock appears as a bottom-right floating control panel with instant preview of changes before applying. The clarification checkpoints appear as unobtrusive purple prompts that expand on click. The revision badges provide transparency (" Revised for clarity") without disrupting flow. The Star Wars theme is consistent across every screen without feeling gimmicky. Error states are handled gracefully—if image generation fails, frames display with narration and a placeholder. The mobile experience is responsive with touch-friendly controls.

Communication and Presentation (25%): Our demo tells a clear story: (1) Problem statement—technical learning is boring and one-size-fits-all; (2) Solution walkthrough—watch us transform articles into different tutorials; (3) Key features—rephrasing, self-improvement (Claude’s backend); (4) Technical architecture—with the 7-node pipeline with visuals; (5) Impact potential—scale to millions of learners. The presentation emphasizes live product—everything shown works in the demo, no smoke and mirrors. Live Product Status: Skywalkr is fully functional. Users can sign up, and generate tutorials from any URL.

Built With

  • brightdata
  • claude
  • elevenlabs
  • fishaudio
  • node.js
  • react
  • stable-diffusion
  • tailwind
Share this project:

Updates