// for teams using AI coding assistants

Don't Let Reasoning
Die in the Tab.

Grov automatically captures the context from your private AI sessions and syncs it to a shared team memory.

View on GitHub
terminal
$ npm install -g grov
It's the only memory layer you need.
Try: claude-code + grov

~500 tokens injected • No file exploration • Verified team knowledge

Shared team memory

What one person learns, everyone knows.

[Diagram: dev-1 ⇄ cloud ⇄ dev-2]

Memories sync to the cloud. Teammates get relevant context injected automatically—via proxy or MCP.
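
If the MCP route is used, wiring Grov into a Claude Code project could look roughly like the entry below. The mcpServers layout is Claude Code's standard project-level config; the "grov mcp" command is an assumed entry point shown purely for illustration.

.mcp.json (hypothetical example)
{
  "mcpServers": {
    "grov": {
      "command": "grov",
      "args": ["mcp"]
    }
  }
}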

How Grov saves tokens

Semantic search finds relevant memories, shows lightweight previews, expands only what's needed.

1. Preview: 3-5 memories × ~100 tokens
2. Expand on demand: ~500-1K tokens each
3. Worst case (all 5): ~5-7K tokens
vs. Manual exploration: 50K+ tokens
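
A minimal sketch of that preview-then-expand flow, in TypeScript. The searchMemories and expandMemory functions and the relevance threshold are illustrative assumptions, not Grov's actual API; the point is the shape of the token budget, not the exact calls.

context-sketch.ts (illustrative)
// Sketch only: searchMemories / expandMemory are assumed names, not Grov's API.
interface MemoryPreview {
  id: string;
  summary: string;    // ~100-token preview
  relevance: number;  // semantic-search score
}

async function buildContext(
  query: string,
  searchMemories: (q: string, limit: number) => Promise<MemoryPreview[]>,
  expandMemory: (id: string) => Promise<string>,  // ~500-1K tokens each
): Promise<string> {
  // 1. Preview: fetch a handful of lightweight summaries
  const previews = await searchMemories(query, 5);

  // 2. Expand on demand: pull full memories only when clearly relevant
  const relevant = previews.filter((p) => p.relevance > 0.7);
  const expanded = await Promise.all(relevant.map((p) => expandMemory(p.id)));

  // 3. Even the worst case (all previews expanded) stays far below
  //    the 50K+ tokens of manual file exploration.
  return [...previews.map((p) => p.summary), ...expanded].join("\n\n");
}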

What gets captured

Not what changed. Why it changed.

memory.json
{
  "goal": "Prevent random user logouts",
  "system_name": "Auth Session",
  "files_touched": ["src/auth/session.ts", "src/middleware/token.ts"],
  "reasoning_trace": [{
    "aspect": "Token Refresh",
    "conclusion": "Refresh window of 5min too short for slow networks",
    "insight": "Race condition when refresh takes longer than window"
  }],
  "decisions": [{
    "choice": "Extend refresh window to 15min",
    "reason": "Handles slow networks without compromising security"
  }],
  "tags": ["auth", "session", "token-refresh"]
}
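
For teams handling these records programmatically, the example above maps onto a TypeScript shape roughly like the following. Field names come straight from the JSON; the type itself is an illustration, not a published schema.

memory.d.ts (illustrative)
// Derived from the memory.json example above; not an official schema.
interface ReasoningStep {
  aspect: string;      // e.g. "Token Refresh"
  conclusion: string;  // what the session concluded
  insight: string;     // the non-obvious finding behind it
}

interface Decision {
  choice: string;  // what was decided
  reason: string;  // why it was decided
}

interface Memory {
  goal: string;                      // "Prevent random user logouts"
  system_name: string;               // "Auth Session"
  files_touched: string[];           // paths the session modified
  reasoning_trace: ReasoningStep[];  // why it changed, not just what
  decisions: Decision[];
  tags: string[];                    // e.g. "auth", "session", "token-refresh"
}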