ultrathink, but fast.

Lightning-fast, turnkey indexing, search, and full codebase understanding for AI coding agents.

Book a demo
quickstart

1. Install the MCP server

$ uv tool install ultrasync-mcp

2. Add to your MCP config

{
  "ultrasync": {
    "type": "stdio",
    "command": "uv",
    "args": [
      "tool",
      "run",
      "--from",
      "ultrasync-mcp",
      "ultrasync",
      "mcp"
    ]
  }
}
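Where this entry lives depends on your MCP client. In Claude Code, for example, a project-level .mcp.json nests it under the mcpServers key, roughly like this (the surrounding file shape is an example of one client's format, not something ultrasync dictates):

{
  "mcpServers": {
    "ultrasync": {
      "type": "stdio",
      "command": "uv",
      "args": ["tool", "run", "--from", "ultrasync-mcp", "ultrasync", "mcp"]
    }
  }
}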

3. Index your project

$ uv tool run --from ultrasync-mcp ultrasync index

Free forever. No account required.

Zero configuration

One call. Full context.

Purpose-built indexing returns precisely what your agent needs. No recursive searches, no wasted tokens.

agent task: find session authentication logic

with ultrasync: one mcp_ultrasync_search call (a sketch of that call follows below)
without ultrasync: "Analyzing codebase structure...", then round after round of recursive searching
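Behind the fast side of that demo is a single MCP tools/call request from the client. A rough sketch of its shape is below; the tool name "search" and the "query" parameter are assumptions for illustration, not ultrasync's published schema:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "session authentication logic" }
  }
}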

Built for agent workflows

Every feature designed to reduce token waste and give your agent the context it needs, instantly.

Persistent memory

Your agent remembers decisions, constraints, and bug findings from prior sessions. No more re-explaining context.

Found 3 relevant memories from prior sessions.
Context from prior sessions automatically included.
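An individual memory record might look something like the sketch below; the field names and values are illustrative, not ultrasync's actual schema:

{
  "category": "decision",
  "text": "Refresh session tokens server-side; client-side refresh was rejected for security reasons",
  "context": "auth",
  "source": "prior session"
}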

Auto-classification

Context types are detected via pattern matching: auth, frontend, backend, k8s, billing, and more. No LLM required (a sketch of the approach follows below).

Detected contexts: 283 files classified.
Extracted insights: TODO 23 · FIXME 7 · HACK 3 · SECURITY 2
Zero LLM calls required · ~1ms lookups
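The approach is essentially bulk pattern matching over file paths and contents. A minimal Python sketch of the idea, using plain regexes in place of Hyperscan; the patterns and context names are illustrative:

import re

# Illustrative path/content patterns mapped to context types.
CONTEXT_PATTERNS = {
    "auth": re.compile(r"jwt|oauth|session|login|password", re.I),
    "frontend": re.compile(r"\.tsx?$|component|useState", re.I),
    "k8s": re.compile(r"apiVersion:|kind:\s*(Deployment|Service)"),
    "billing": re.compile(r"invoice|stripe|subscription", re.I),
}

# Insight markers counted per file.
INSIGHT_PATTERN = re.compile(r"\b(TODO|FIXME|HACK|SECURITY)\b")

def classify(path: str, text: str) -> tuple[list[str], dict[str, int]]:
    """Return matching context types and insight-marker counts for one file."""
    contexts = [name for name, pattern in CONTEXT_PATTERNS.items()
                if pattern.search(path) or pattern.search(text)]
    insights: dict[str, int] = {}
    for marker in INSIGHT_PATTERN.findall(text):
        insights[marker] = insights.get(marker, 0) + 1
    return contexts, insights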

Interactive explorer

Browse your indexed codebase from a TUI or the web. Navigate files, preview symbols, search semantically.

ultrasync voyager
2,368 files indexed · 14,892 symbols

Not just prompts in a trench coat

Intentionally designed for performance, developer experience, and the enterprise, by developers with deadlines.

Common approach vs. ultrasync

Setup
Common approach: Multiple tools, hooks, and prompt engineering required.
ultrasync: A single MCP server handles everything, with zero configuration.

Search
Common approach: BM25 keyword search or semantic vectors, not both.
ultrasync: Hybrid search fuses BM25 and semantic rankings via RRF (sketched below) for better precision and recall.

Indexing
Common approach: Either lazy on-demand indexing or expensive upfront batch indexing.
ultrasync: A JIT + AOT dual layer gives fast lookups on indexed files and lazy indexing for new ones.

Classification
Common approach: LLM calls to classify file context (auth, frontend, etc.).
ultrasync: Hyperscan pattern matching classifies 24 context types at index time, with zero LLM cost.

Learning
Common approach: Manual re-indexing when files change or searches fail.
ultrasync: A transcript watcher auto-indexes files your agent touches, and failed searches trigger learning.

Memory
Common approach: Raw transcript storage or basic key-value persistence.
ultrasync: A structured taxonomy extracts decisions, constraints, and pitfalls, searchable by category.
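Reciprocal rank fusion, mentioned in the Search row above, is simple to state: a document's fused score is the sum of 1/(k + rank) across every ranked list it appears in, so results that both BM25 and the semantic index rank highly float to the top. A minimal Python sketch with placeholder rankings (k=60 is a common default):

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids via reciprocal rank fusion."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse a keyword (BM25) ranking with a semantic ranking; paths are placeholders.
fused = rrf([
    ["auth/session.py", "auth/tokens.py", "api/login.py"],        # BM25
    ["auth/tokens.py", "middleware/auth.py", "auth/session.py"],  # semantic
])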

Under the hood: Rust-powered AOT index with O(1) hash lookups, Hyperscan DFA compilation for bulk regex at network speed, LMDB for persistent graph memory, and tantivy for BM25 with code-aware tokenization.
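Of those pieces, LMDB is what keeps memory persistent and cheap to read: a memory-mapped, transactional key-value store. A rough Python illustration of the put/get shape (the database path and key layout are made up for this example; ultrasync's actual Rust implementation and schema are not shown here):

import json
import lmdb

# Open (or create) a memory-mapped database; reads hit mapped pages directly.
env = lmdb.open("./.ultrasync-memory", map_size=64 * 1024 * 1024)

# Writes are transactional: the record is durable once the block exits.
with env.begin(write=True) as txn:
    txn.put(b"memory:decision:0001",
            json.dumps({"text": "refresh session tokens server-side"}).encode())

# Reads need no server round-trip, just a lookup in the mapped file.
with env.begin() as txn:
    print(json.loads(txn.get(b"memory:decision:0001")))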

Instant AOT lookups · 24+ context types · 13+ insight patterns · zero prompt stuffing
private beta

ultrasync for teams

Enterprise-grade semantic indexing with centralized memory management. Keep your team's context private, synchronized, and instantly accessible.

Private & secure

Self-hosted deployment keeps your codebase context on your infrastructure. No data leaves your network.

Shared context

Team-wide conventions, architectural decisions, and institutional knowledge accessible to every developer.

Collaborative memory

Debug findings, design decisions, and constraints persist across the entire team—not just individual sessions.

Central management

Administer conventions, manage access, and monitor usage across your organization from a single dashboard.

Interested in ultrasync for your team?

Get early access and help shape the enterprise roadmap.

Get in touch