Early Beta

Ship AI systems
20× faster than Claude Code

The platform for teams building AI agents, workflows, and automations
A purpose-built language that AI writes naturally and humans review visually

press enter to start building

Our founder stress-tested AI for

OpenAI
Anthropic
METR
Amazon AGI

Proof

Your team ships in minutes, not hours

The same AI model builds the same system. WeaveMind finishes in 4 minutes; Claude Code takes 1h 30

Both videos play in sync on a shared timeline

WeaveMind — Time: 4 min · Lines of code: 600 · Errors: 0

Claude Code — Time: 1h 30 · Lines of code: 2.2k · Errors: 1

Built for

Teams that build AI for a living

Whether you're an AI consultancy delivering to enterprise clients or an internal team shipping AI products, WeaveMind is the platform you build on

AI Consultancies & Agencies

Your solutions engineers customize per client in hours, not weeks

White-label WeaveMind under your brand. Let your team focus on the client's problem and ship the next engagement while competitors are still writing boilerplate

Internal AI Teams

Your CEO wants AI. Your team of 5 needs to actually deliver it

Human approval flows, audit trails, stakeholder review gates. Not another engineering project on top of the AI project; they're already in the language

Or you could

Use n8n

Its AI builder can't handle real complexity, so your team ends up building around it anyway

Build from scratch

3 weeks to build each system, then maintain every single one of them forever

Hire 5 more engineers

$800k/year to reinvent state management and approval flows

Why a new language

AI systems need a language designed for them

LLM calls, databases, APIs, human approvals, browser agents. In Python, each one is a library, a config file, and 50 lines of boilerplate. In Weft, each one is a primitive.

Compact, uniform, type-safe. AI writes it naturally (no hallucinated imports), humans review it visually (the code IS a graph), and the runtime executes it natively

Python app.py
import anthropic, psycopg2, smtplib, os, json
from email.mime.text import MIMEText

client = anthropic.Anthropic(api_key=os.environ["KEY"])
conn = psycopg2.connect(os.environ["DB_URL"])

# Query leads from database...
cur = conn.cursor()
cur.execute("SELECT * FROM leads WHERE ...")
leads = cur.fetchall()

# Qualify each lead with LLM...
response = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": ...}]
)

# Parse JSON, validate, set up SMTP,
# format email, handle errors, send...
# (80+ more lines)
Weft app.wft
db       = PostgresDatabase
qualify  = LlmInference { parseJson: true }
review   = HumanQuery "Review Email"
send     = EmailSend

db.rows       -> qualify.prompt
qualify.draft -> review.body
review.body   -> send.body

database + LLM + human review + email, 7 lines

[Review form demo: "Outreach review" — approve or skip this lead (1 of 3). Lead: Sarah Chen · VP of Engineering, Acme Robotics, with editable Subject and Message fields]

Human-in-the-Loop

Approval gates, human reviews, human decisions

Enterprise AI needs human oversight. In Weft, it's one node: HumanQuery. The system pauses, sends a form to the right person, waits hours or days, then resumes exactly where it left off. No webhooks, no polling, no state management.

A browser extension delivers tasks inline. Share a token and anyone joins the review loop without accessing the project. Your client's team can approve actions without ever seeing the underlying system.
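A minimal sketch of the pattern, reusing the HumanQuery and Gate syntax shown elsewhere on this page (node and port names are illustrative):

```
# illustrative node names; HumanQuery pauses the run until the form comes back
review = HumanQuery "Approve Refund"
claim.summary -> review.context

gate = Gate "Approved Only"
review.decision_approved -> gate.pass   # resumes here, hours or days later
review.body -> gate.value
```

Everything downstream of the gate runs only if the reviewer approves; the pause, resume, and delivery of the form are handled by the runtime.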

Graph-Native Language

Code is a graph

Not a visualization bolted on top, the graph IS the program
AI writes structure, humans see architecture

weft apollo_outreach.wft
# Apollo Auto Cold Emailing

pick_hypothesis = ExecPython {
  label: "Pick Random Hypothesis"
  in: *hypotheses
  out: hypothesis
  code: <<|
    import random
    return {"hypothesis": random.choice(hypotheses)}
  |>>
}
hypotheses.value -> pick_hypothesis.hypotheses

qualify_llm = LlmInference {
  label: "Qualify Company"
  parseJson: true
}
qualify_prompt.text -> qualify_llm.prompt
llm_json.config -> qualify_llm.config

gate_qual = Gate "Qualified Only"
unpack_qual.qualified -> gate_qual.pass
extract_company.domain -> gate_qual.value

review = HumanQuery {
  label: "Review & Edit Email"
  in: *context, *subject, *body
  out: subject, body, decision_approved
  title: "Review Cold Email"
}
review_context.context_display -> review.context
unpack_draft.subject -> review.subject
unpack_draft.body -> review.body

gate_send = Gate "Approved Only"
review.decision_approved -> gate_send.pass
review.body -> gate_send.value

send_email = EmailSend {
  label: "Send Cold Email"
  fromEmail: "quentin@weavemind.ai"
}
smtp_config.config -> send_email.config
gate_send.value -> send_email.body
[Graph view: pick_hypothesis (Python) → qualify_llm (LLM) → gate_qual (Gate · Qualified Only) → review (Human · waiting on human) → gate_send (Gate · Approved Only) → send_email (Email)]

Simplified; see the real thing in the video above

same program, two native views

[Node picker: suggested primitives — LLM (AI), Code (Utility), Human (Flow), WhatsApp Bridge (Infrastructure), Postgres Database (Infrastructure)]

First-Class Primitives

The vocabulary of 2026

LLM call, database, browser agent, cron job, human approval, API endpoint: all primitives. You don't import them, you don't configure them; they exist in the language the way int exists in C

100 primitives and growing, each one typed and validated at compile time. Adding a new primitive takes one Rust file, and the AI learns it immediately because the structure is uniform
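For a sense of that uniformity, a sketch in Weft. LlmInference, HumanQuery, and EmailSend appear in the examples on this page; the Cron primitive and its cron-string config are assumptions here:

```
# every primitive is declared the same way: name = Type { optional config }
trigger = Cron "0 9 * * 1"                 # Cron and its config string assumed
qualify = LlmInference { parseJson: true }
review  = HumanQuery "Daily Review"
send    = EmailSend
```

One declaration shape for everything, which is also why a new primitive is immediately usable by the AI builder.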

[Graph: Cron trigger → "outreach pipeline" group (API enrich → LLM qualify) → "outreach actions" group (LLM draft → Human review → Email send) → API log result]

Recursively Foldable

Complexity scales logarithmically

Groups collapse into single nodes. A hundred-node workflow becomes a clean, navigable structure. Your team manages complexity instead of drowning in it.

Previous graph tools (n8n, Make, LangGraph) become spaghetti at scale because they have no recursive scoping. Weft does. And because AI writes the code, the tedium of node-based programming disappears entirely.
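As an illustration only (this page doesn't show Weft's actual grouping syntax, so the group form below is hypothetical):

```
# hypothetical grouping syntax, for illustration only
outreach = group {
  draft  = LlmInference
  review = HumanQuery "Review Email"
  draft.text -> review.body
}
# in the graph view the whole group folds into a single "outreach" node
```

Fold a group and its internals disappear from view; unfold it and you drill one level down, which is what keeps hundred-node systems navigable.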

Coming Soon

The language is growing

This is version 0.1. Every feature below makes the AI more capable, because the language and the AI co-evolve

Agents as Primitives

Long-lived agents in the graph

An agent is a node that persists, manages its own state, and acts through explicit edges. Every tool call visible in the graph, every decision traceable

LLM flexibility with graph-level observability, no black-box agents, the graph IS the agent's reasoning trace

[Graph: research_agent (Agent · iter: 3 · state: researching · 3 sources found) acting through Web search → LLM extract → Memory store]

Verified Blocks

Pre-approved building blocks you customize

A RAG pipeline, a moderation layer, an agent with hallucination watchers. Drop a verified block into your program, customize it, and your system inherits its guarantees

We're partnering with insurers to certify these blocks, so systems built with them get coverage faster. Robustness isn't a best practice you hope developers follow, it's a composable building block in the language

[Verified block: safe_outreach ✓ — Guard rate_limit → Guard dedup_check → Human approval]

Compilation

Weft compiles to a binary

Weft compiles to native Rust. Same performance as hand-written code, zero overhead. The binary provisions its own infrastructure, starts triggers, and runs as a service. One artifact, deploy anywhere.

AI writes a high-level graph. The compiler turns it into systems-level code. Compile once, then run it, serve it as a long-lived process, or manage its infrastructure independently.

terminal
$ weft compile outreach.wft
Parsed 12 nodes, 14 edges
Type-checked all connections
Compiled to native binary
./outreach (4.2 MB, linux/amd64)
$ ./outreach serve
Infrastructure provisioned
Triggers listening
Serving · waiting for events

Built in Rust

Compiler, runtime, type system, AI builder. All Rust

Compiled

Not interpreted

Type-safe

Compile-time validation

Parallel

Native concurrency

Pricing

Usage-based, transparent. Because WeaveMind is efficient (fewer tokens, fewer API calls), the effective cost is lower than building the same system by hand

Usage

At cost + 40%

Pay as you go, no commitment

  • All 100+ primitives
  • AI builder (Tangle)
  • Human-in-the-loop
  • Bring your own API keys

Builder

$100/mo

At cost + 30% · credits roll over

  • Everything in Usage
  • Lower markup (30%)
  • Rolling credits (never expire)
  • Priority support

Enterprise

Custom

For agencies and large teams

  • Everything in Builder
  • White-label deployment
  • Custom primitives
  • Dedicated support & SLA
  • Volume discounts
Contact us

Weft, the compiler, and the runtime are going open source once stable. The language that runs your systems should be auditable, forkable, and yours

Quentin Feuillade--Montixi

Founder & CEO

I spent three years breaking AI systems. Red teaming for OpenAI and Anthropic, capability evaluations at METR, building an AI evaluation startup. I presented an autonomous jailbreaking system at the Paris AI Summit.

The same pattern kept showing up. Teams would build impressive AI prototypes, then spend months making them production-ready. Not because the models were bad. Because the tooling was. Every team was reinventing the same plumbing: state management, human approval flows, error handling, API integrations. In languages designed for a different era.

So I stopped fighting the language and built a new one. Weft is a graph-native, type-safe language where LLMs, humans, and infrastructure are first-class primitives. One structure that AI writes naturally, humans review visually, and the runtime executes natively. The teams building production AI shouldn't be spending their time on plumbing.

Stop building plumbing
Start shipping AI

Enterprise plans available