Describe what you want, the AI builds it, you guide it with your expertise
An autonomous builder powered by a language designed for AI systems
press enter to start building
Our founder stress-tested AI for
Proof
The same AI model builds the same system: WeaveMind finishes in 4 minutes, Claude Code takes 1h 30
Both videos play in sync on a shared timeline
WeaveMind
Time: 4 min
Lines of code: 600
Errors: 0

Claude Code
Time: 1h 30
Lines of code: 2.2k
Errors: 1
Built for
Whether you're delivering AI to enterprise clients or shipping it inside your own company
AI Consultancies & Agencies
Your solutions engineers customize per client in hours, not weeks
Describe your client's workflow, the AI builder assembles it. Your solutions engineers customize and iterate. White-label it under your brand
Internal AI Teams
Your CEO wants AI. Your team of 5 needs to actually deliver it
You don't need AI engineers. Describe what you need, use your domain knowledge to guide the builder. It handles the architecture, you handle the expertise
Or you could
Use n8n
AI builder breaks past 3 nodes. Your team ends up building around it anyway
Build from scratch
3 weeks to build each system, then maintain every single one of them forever
Hire 5 more engineers
$800k/year and they'll still spend half their time on integration code
Why a new language
LLM calls, databases, APIs, human decisions, browser agents. In Python, each one is a library and a config file. In Weft, each one is a single primitive. That's why the AI builder is so fast: it's assembling primitives instead of generating boilerplate code
Compact, uniform, type-safe. The AI writes it correctly because the structure is constrained. You review it visually because the code IS a graph
import anthropic, psycopg2, smtplib, os, json
from email.mime.text import MIMEText
client = anthropic.Anthropic(api_key=os.environ["KEY"])
conn = psycopg2.connect(os.environ["DB_URL"])
# Query leads from database...
cur = conn.cursor()
cur.execute("SELECT * FROM leads WHERE ...")
leads = cur.fetchall()
# Qualify each lead with LLM...
response = client.messages.create(
model="claude-opus-4-20250514",
max_tokens=1024,
messages=[{"role": "user", "content": ...}]
)
# Parse JSON, validate, set up SMTP,
# format email, handle errors, send...
# (80+ more lines)

db = PostgresDatabase
qualify = LlmInference { parseJson: true }
review = HumanQuery "Review Email"
send = EmailSend
db.rows -> qualify.prompt
qualify.draft -> review.body
review.body -> send.body

database + LLM + human review + email, 7 lines
Lead
Subject
Message
Human-in-the-Loop
The system pauses, sends a form to the right person, waits hours or days, then resumes exactly where it left off. Review a draft, approve a decision, edit a response. One node in the graph, no engineering required
A browser extension delivers tasks inline. Share a link and anyone can review without accessing the project. Your client's team sees the form, not the system behind it
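A minimal sketch of the pattern, using the HumanQuery, Gate, and EmailSend primitives shown elsewhere on this page (the port names here are illustrative assumptions, not confirmed signatures):

```weft
# Draft -> human review -> gated send; the run suspends at "review"
draft = LlmInference { parseJson: true }
review = HumanQuery {
  label: "Review Draft"
  in: *body
  out: body, decision_approved
  title: "Review Before Send"
}
gate = Gate "Approved Only"
send = EmailSend

draft.body -> review.body              # pause: form goes out to the reviewer
review.decision_approved -> gate.pass  # resume: the human decision re-enters the graph
review.body -> gate.value              # edits made in the form flow downstream
gate.value -> send.body
```

The pause can last hours or days; the edges carry the reviewer's edits back into the flow when it resumes.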
Graph-Native Language
Not a visualization bolted on top, the graph IS the program
AI writes structure, humans see architecture
# Apollo Auto Cold Emailing
pick_hypothesis = ExecPython {
label: "Pick Random Hypothesis"
in: *hypotheses
out: hypothesis
code: <<|
import random
return {"hypothesis": random.choice(hypotheses)}
|>>
}
hypotheses.value -> pick_hypothesis.hypotheses
qualify_llm = LlmInference {
label: "Qualify Company"
parseJson: true
}
qualify_prompt.text -> qualify_llm.prompt
llm_json.config -> qualify_llm.config
gate_qual = Gate "Qualified Only"
unpack_qual.qualified -> gate_qual.pass
extract_company.domain -> gate_qual.value
review = HumanQuery {
label: "Review & Edit Email"
in: *context, *subject, *body
out: subject, body, decision_approved
title: "Review Cold Email"
}
review_context.context_display -> review.context
unpack_draft.subject -> review.subject
unpack_draft.body -> review.body
gate_send = Gate "Approved Only"
review.decision_approved -> gate_send.pass
review.body -> gate_send.value
send_email = EmailSend {
label: "Send Cold Email"
fromEmail: "quentin@weavemind.ai"
}
smtp_config.config -> send_email.config
gate_send.value -> send_email.body

simplified, see the real thing in the video above
same program, two native views
First-Class Primitives
LLM call, database, browser agent, cron job, human approval, API endpoint, all primitives. You don't import them, you don't configure them, they exist in the language the way int exists in C
100 primitives and growing, each one typed and validated at compile time. Adding a new primitive takes one Rust file, and the AI learns it immediately because the structure is uniform
Recursively Foldable
Groups collapse into single nodes. A hundred-node workflow becomes a clean, navigable structure. Your team manages complexity instead of drowning in it.
Previous graph tools (n8n, Make, LangGraph) become spaghetti at scale because they have no recursive scoping. Weft does. And because AI writes the code, the tedium of node-based programming disappears entirely.
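As a sketch of how that recursive scoping might read (the group syntax below is a hypothetical illustration, not confirmed Weft grammar):

```weft
# Hypothetical sketch: "group" is illustrative syntax for a foldable scope
group qualify_stage {
  qualify = LlmInference { label: "Qualify Lead", parseJson: true }
  gate = Gate "Qualified Only"
  qualify.qualified -> gate.pass
}

# From outside, qualify_stage collapses into a single node with its own ports
db.rows -> qualify_stage.prompt
qualify_stage.value -> review.body
```

Inner wiring stays inspectable when you expand the node; the outer graph only sees the group's ports.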
This is version 0.1. Every feature below makes the AI more capable, because the language and the AI co-evolve
Agents as Primitives
An agent is a node that persists, manages its own state, and acts through explicit edges. Every tool call visible in the graph, every decision traceable
LLM flexibility with graph-level observability, no black-box agents, the graph IS the agent's reasoning trace
Verified Blocks
A RAG pipeline, a moderation layer, an agent with hallucination watchers. Drop a verified block into your program, customize it, and your system inherits its guarantees
We're partnering with insurers to certify these blocks, so systems built with them get coverage faster. Robustness isn't a best practice you hope developers follow, it's a composable building block in the language
Compilation
Weft compiles to native Rust. Same performance as hand-written code, zero overhead. The binary provisions its own infrastructure, starts triggers, and runs as a service. One artifact, deploy anywhere.
AI writes a high-level graph. The compiler turns it into systems-level code. Compile once, then run it, serve it as a long-lived process, or manage its infrastructure independently.
Compiler, runtime, type system, AI builder. All Rust
Compiled
Not interpreted
Type-safe
Compile-time validation
Parallel
Native concurrency
We pass down our volume discounts on LLMs and every other service we use.
Our AI builder is efficient enough that even with the margin, you pay less than doing it yourself
Usage
At cost + 60%
Pay as you go, no commitment
Starter
$20/mo
At cost + 35% · $20 credits/mo
Builder
$100/mo
At cost + 20% · $100 credits/mo
Enterprise
Custom
For agencies and large teams
Weft, the compiler, and the runtime are going open source once stable. The language that runs your systems should be auditable, forkable, and yours
I spent three years breaking AI systems. Red teaming for OpenAI and Anthropic, capability evaluations at METR, building an AI evaluation startup. I presented an autonomous jailbreaking system at the Paris AI Summit.
The same pattern kept showing up. Teams would build impressive AI prototypes, then spend months making them production-ready. Not because the models were bad. Because the tools were. Every team was solving the same problems from scratch: connecting LLMs to real systems, adding human oversight, making things reliable. In languages that weren't designed for any of it.
So I stopped fighting the language and built a new one. Weft is a graph-native, type-safe language where LLMs, humans, and infrastructure are first-class primitives. One structure that AI writes naturally, humans review visually, and the runtime executes natively. The teams building production AI should describe what they want and get a working system, not fight their tools.