Inspiration
Most AI tutors today are basically fancy search engines: you ask, they explain, you read, and then you forget. The real problem is not access to information. It is that passive consumption does not build understanding.
The best teachers do not just lecture. They ask questions. They make you think.
We wanted to build something that works like the best human tutors do: never giving away the answer until the learner has had a real chance to arrive there themselves.
What it does
Know-de is a Socratic AI tutor that generates a personalized course on any topic, then teaches it through guided questioning instead of direct explanation.
Type any topic. Pick your level. Start learning.
The platform generates a structured 2-chapter syllabus with 3–4 concepts per chapter, then teaches each concept through a 4-phase loop:
1. Intro
The tutor opens with a real-world problem or failure case, without naming the concept directly. The learner is placed into the situation first.
2. Socratic dialogue
The tutor asks questions in a sequence mapped to WHY → WHAT → HOW → APPLY.
It adapts to the learner’s responses in real time:
- going deeper when the learner is on track
- narrowing the question when the learner is stuck
- diagnosing where the learner is stuck, whether in terminology, scenario understanding, or reasoning
3. Direct explanation
Only after the learner exhausts the Socratic rounds does the tutor provide a clear explanation.
4. Teach-back
Using the Feynman technique, the learner explains the concept back in their own words.
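The four-phase loop can be sketched as a simple linear state transition. This is an illustrative sketch, not the project's actual code; phase names are assumed from the descriptions above:

```python
from enum import Enum
from typing import Optional

class Phase(Enum):
    INTRO = "intro"
    SOCRATIC = "socratic"
    EXPLANATION = "explanation"
    TEACH_BACK = "teach_back"

# The fixed order of the teaching loop for each concept.
ORDER = [Phase.INTRO, Phase.SOCRATIC, Phase.EXPLANATION, Phase.TEACH_BACK]

def next_phase(current: Phase) -> Optional[Phase]:
    """Advance to the next phase, or None when the concept is finished."""
    i = ORDER.index(current)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None
```

When `next_phase` returns `None`, the tutor would move on to the next concept in the chapter.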
Additional features
- Dynamic knowledge graph: an LLM-generated visual map of concepts and their relationships, with the active concept highlighted as the learner progresses
- Three difficulty levels: Novice, Intermediate, and Advanced, each with distinct questioning styles and vocabulary rules
- Image upload: learners can upload a diagram or photo during the conversation, and a multimodal model incorporates it into the lesson
- Score system: points for strong answers, partial credit for partial understanding, and bonus points for a successful teach-back
How we built it
Backend
We built the backend with FastAPI and a custom state machine, avoiding heavier orchestration frameworks. Each conversation is represented as a LearningState that tracks:
- phase
- chapter
- concept
- turn count
- message history
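A minimal sketch of such a state object, with field names assumed from the list above (the actual `LearningState` may differ):

```python
from dataclasses import dataclass, field

@dataclass
class LearningState:
    phase: str = "intro"        # intro | socratic | explanation | teach_back
    chapter: int = 0            # index into the generated syllabus
    concept: int = 0            # index within the current chapter
    turn_count: int = 0         # learner turns used in the current phase
    messages: list = field(default_factory=list)  # full conversation history

    def record_turn(self, role: str, content: str) -> None:
        """Append a message; only learner messages count toward the turn limit."""
        self.messages.append({"role": role, "content": content})
        if role == "user":
            self.turn_count += 1
```

Keeping this state outside the LLM is what lets the orchestrator, rather than the model, decide when a phase ends.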
The teacher itself is powered by a single LLM, with carefully differentiated prompts for each teaching phase and learner level.
Syllabus generation
We use a two-pass generation pipeline:
- Generate the chapter and concept structure
- Enrich each concept in parallel with:
- definition
- scenario
- example
- learning goal
- seed questions
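The two passes can be sketched with `asyncio.gather` handling the parallel enrichment step. Everything here is a stub for illustration; `call_llm`, the chapter count, and the field names are assumptions standing in for the real pipeline:

```python
import asyncio

async def call_llm(prompt: str) -> dict:
    # Placeholder for a real LLM call; returns a stubbed enrichment payload.
    await asyncio.sleep(0)
    return {"definition": "...", "scenario": "...", "example": "...",
            "learning_goal": "...", "seed_questions": ["..."]}

async def build_syllabus(topic: str) -> list:
    # Pass 1: generate the chapter/concept structure (stubbed as 2 chapters of 3).
    structure = [{"chapter": c, "concepts": [f"{topic} concept {i}" for i in range(3)]}
                 for c in range(2)]
    # Pass 2: enrich every concept concurrently with one LLM call each.
    names = [n for ch in structure for n in ch["concepts"]]
    enriched = await asyncio.gather(*(call_llm(f"Enrich: {n}") for n in names))
    it = iter(enriched)
    for ch in structure:
        ch["concepts"] = [{"name": n, **next(it)} for n in ch["concepts"]]
    return structure
```

Running the enrichment calls concurrently is what keeps course generation fast even though each concept needs its own LLM call.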
Knowledge graph
A separate LLM call generates:
- concept nodes
- grid positions
- typed edges between concepts
The frontend converts this into an SVG-based graph with:
- chapter-colored nodes
- hover tooltips
- smooth connecting curves
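The smooth curves between grid-positioned nodes can be produced with a quadratic Bézier path whose control point is offset perpendicular to the edge. This is a geometry sketch shown in Python for brevity (the actual frontend is TypeScript); the cell size and curve offset are illustrative:

```python
def edge_path(a: tuple, b: tuple, cell: int = 120) -> str:
    """SVG path for a gentle quadratic Bezier between two grid cells."""
    ax, ay = a[0] * cell, a[1] * cell
    bx, by = b[0] * cell, b[1] * cell
    mx, my = (ax + bx) / 2, (ay + by) / 2
    # Offset the control point perpendicular to the segment so the edge bows.
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    cx, cy = mx - dy / length * 30, my + dx / length * 30
    return f"M {ax} {ay} Q {cx:.1f} {cy:.1f} {bx} {by}"
```

The resulting string drops straight into an SVG `<path d="...">` attribute.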
Vision
Image analysis is handled by GPT-4o-mini, decoupled from the main LLM provider, Lava.
Frontend
The frontend uses Next.js App Router with SSE streaming for real-time tutor responses.
It also includes:
- score popups
- sound effects
- chapter sidebar navigation
- term preview on hover
LLM
We use K2 Think V2 for all core generation tasks.
Challenges we ran into
The hardest part was not building the system itself. It was making the tutor behave the right way.
Getting an LLM to remain truly Socratic is surprisingly difficult. Common failure modes included:
- praising every answer, even wrong ones
- revealing the full explanation the moment a learner says “I don’t know”
- ignoring stuck-handling instructions when final-turn logic is also present
- reacting to the previous correct answer instead of the learner’s current confusion
We addressed these issues through:
- more careful prompt architecture
- separation of conflicting instructions into different prompt sections
- conditional suppression of rules based on state
- stuck detection at the orchestration level instead of relying entirely on the model
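Conditional suppression of rules can be sketched as assembling the system prompt from sections gated on state, so that conflicting instructions (stuck-handling vs. final-turn wrap-up) are never present at the same time. The rule text and threshold here are illustrative, not the project's actual prompts:

```python
def build_system_prompt(state: dict) -> str:
    """Compose prompt sections, suppressing rules that conflict in this state."""
    sections = ["You are a Socratic tutor. Never reveal the answer directly."]
    is_final_turn = state["turn_count"] >= state["max_turns"] - 1
    if is_final_turn:
        # Final-turn logic wins; stuck-handling is suppressed entirely.
        sections.append("This is the final round: wrap up and transition to explanation.")
    elif state.get("stuck"):
        sections.append("The learner is stuck: narrow the question; do not explain yet.")
    else:
        sections.append("The learner is on track: ask a deeper follow-up question.")
    return "\n".join(sections)
```

Because only one branch is ever emitted, the model never has to arbitrate between contradictory instructions.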
Accomplishments that we're proud of
- Built a tutor that genuinely does not give away the answer too early
- Designed teaching behavior that meaningfully changes across novice, intermediate, and advanced levels
- Generated a concept knowledge graph from scratch for any arbitrary topic in seconds
- Created a clean SSE streaming architecture with graceful error handling
What we learned
Prompt engineering for behavioral constraints is very different from prompt engineering for content generation.
Getting an LLM to not do something — not praise, not explain too early, not advance incorrectly — often takes more precision than getting it to generate good content in the first place.
What's next for Know-de
- Turn WHY / WHAT / HOW / APPLY into explicit sub-phases in the state machine
- Add spaced repetition to review earlier concepts before introducing new ones
- Export learning notes as a summary after course completion
- Improve multi-turn stuck diagnosis by asking learners what they are confused about before deciding how to help
Built With
- fastapi
- k2-think-v2
- lava
- python
- zed