Knowledge injected into existing context — configurable, preloadable, auto-discoverable, with context forking and progressive disclosure · Official Skills
Cloud automation on Anthropic infrastructure — scheduled, API-triggered, or GitHub event-driven tasks that run even when your machine is off · Desktop Tasks
Flicker-free alt-screen rendering with mouse support, stable memory, and in-app scrolling — /tui fullscreen is the canonical toggle (v2.1.110+); env var is the legacy path
Background safety classifier replaces manual permission prompts — Claude decides what's safe while blocking prompt injection and risky escalations · --enable-auto-mode flag removed (v2.1.111); Max subscribers default with Opus 4.7 · Blog
Build production AI agents with Claude Code as a library — Python and TypeScript SDKs with built-in tools, hooks, subagents, and MCP · Quickstart · Examples
/loop runs prompts locally on a recurring schedule (up to 7 days) — supports self-pacing where Claude picks its own interval via the Monitor tool · /schedule runs prompts in the cloud on Anthropic infrastructure — works even when your machine is off · Announcement
Isolated git branches for parallel development — each agent gets its own working copy; subagents can run in temporary worktrees via isolation: "worktree" frontmatter (v2.1.106+)
See orchestration-workflow for implementation details of the Command → Agent → Skill pattern:

```
claude
/weather-orchestrator
```
⚙️ DEVELOPMENT WORKFLOWS
All major workflows converge on the same architectural pattern: Research → Plan → Execute → Review → Ship
challenge Claude — "grill me on these changes and don't make a PR until I pass your test" or "prove to me this works", then have Claude diff between main and your branch 🚫👶
after a mediocre fix — "knowing everything you know now, scrap this and implement the elegant solution" 🚫👶
Claude fixes most bugs by itself — paste the bug, say "fix", don't micromanage how 🚫👶
start with a minimal spec or prompt and ask Claude to interview you using AskUserQuestion tool, then make a new session to execute the spec
always make a phase-wise gated plan, with each phase having multiple tests (unit, automation, integration)
spin up a second Claude to review your plan as a staff engineer, or use cross-model for review
write detailed specs and reduce ambiguity before handing work off — the more specific you are, the better the output
prototype > PRD — build 20-30 versions instead of writing specs, the cost of building is low so take many shots
■ Context (5)
context rot kicks in around ~300-400k tokens on the 1M context model — don't let sessions drift past that for intelligence-sensitive work
avoid the agent dumb zone: run a manual /compact by 50% context usage at the latest. Use /clear to reset context mid-session if switching to a new task
rewind > correct — double-Esc or /rewind back to before the failed attempt and re-prompt with what you learned, instead of leaving failed attempts + corrections polluting context 🚫👶
/compact with a hint (/compact focus on the auth refactor, drop the test debugging) beats letting autocompact fire — the model is at its least intelligent point when auto-compacting due to context rot
use subagents for context management — ask yourself "will I need this tool output again, or just the conclusion?" — 20 file reads + 12 greps + 3 dead ends stay in the child's context, only the final report returns 🚫👶
■ Session Management (6)
every turn is a branching point — after Claude ends a turn, pick between Continue, /rewind, /clear, /compact, or Subagent based on how much existing context you need to carry forward
new task = new session — related tasks (e.g. writing docs for what you just built) can reuse context for efficiency, but genuinely new tasks deserve a fresh session
use "summarize from here" before rewinding to have Claude write a handoff message — like a note to the previous iteration of Claude from its future self
/compact vs /clear — compact is lossy but momentum-friendly (mid-task, fuzzy details ok); /clear + brief is more work but you control exactly what carries forward (high-stakes next step)
use recaps for long-running sessions — short summaries of what Claude did and what's next, useful when returning after minutes or hours. Disable via the recaps setting in /config
/rename important sessions (e.g. [TODO - refactor task]) and /resume them later — label each instance when running multiple Claudes simultaneously
memory.md and constitution.md do not guarantee anything; like CLAUDE.md, they are instructions the model can still ignore
any developer should be able to launch Claude, say "run the tests" and it works on the first try — if it doesn't, your CLAUDE.md is missing essential setup/build/test commands
keep codebases clean and finish migrations — partially migrated frameworks confuse models that might pick the wrong pattern
use settings.json for harness-enforced behavior (attribution, permissions, model) — don't put "NEVER add Co-Authored-By" in CLAUDE.md when attribution.commit: "" is deterministic
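A minimal settings.json sketch of the idea above; the attribution.commit key comes from the tip itself, the rest is illustrative, so verify key names against your Claude Code version:

```json
{
  "model": "opus",
  "attribution": {
    "commit": ""
  },
  "permissions": {
    "allow": ["Bash(npm test *)"]
  }
}
```

Unlike a CLAUDE.md sentence, these keys are enforced by the harness on every turn, so the model cannot forget them.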
■ Agents (4)
have feature-specific subagents (extra context) with skills (progressive disclosure) instead of generic qa or backend-engineer agents
say "use subagents" to throw more compute at a problem — offload tasks to keep your main context clean and focused 🚫👶
use slash commands for every "inner loop" workflow you do many times a day — saves repeated prompting, commands live in .claude/commands/ and are checked into git
if you do something more than once a day, turn it into a skill or command — build /techdebt, context-dump, or analytics commands
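Commands are just markdown prompt files checked into the repo; a hypothetical /techdebt sketch (file name and prompt text are illustrative):

```markdown
<!-- .claude/commands/techdebt.md, invoked as /techdebt -->
Scan the files touched in the last 10 commits for TODOs, dead code,
and duplicated logic in $ARGUMENTS. Rank findings by blast radius
and suggest the top 3 cleanups.
```

`$ARGUMENTS` is replaced with whatever you type after the command, so `/techdebt src/auth` scopes the scan.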
■ Skills (9)
use context: fork to run a skill in an isolated subagent — main context only sees the final result, not intermediate tool calls. The agent field lets you set the subagent type
skills are folders, not files — use references/, scripts/, examples/ subdirectories for progressive disclosure
build a Gotchas section in every skill — highest-signal content, add Claude's failure points over time
skill description field is a trigger, not a summary — write it for the model ("when should I fire?")
don't state the obvious in skills — focus on what pushes Claude out of its default behavior 🚫👶
don't railroad Claude in skills — give goals and constraints, not prescriptive step-by-step instructions 🚫👶
include scripts and libraries in skills so Claude composes rather than reconstructs boilerplate
embed !command in SKILL.md to inject dynamic shell output into the prompt — Claude runs it on invocation and the model only sees the result
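A sketch of a SKILL.md combining the tips above (context: fork, a trigger-style description, dynamic !command output, a Gotchas section); the skill name and body are hypothetical, and field names should be checked against your version:

```markdown
---
name: release-notes
description: Use when the user asks to draft release notes from merged PRs
context: fork   # isolate in a subagent; only the final result returns
agent: general-purpose
---
Current branch: !`git branch --show-current`

Draft notes from the diff against main; see references/style.md for tone.

## Gotchas
- (append Claude's observed failure points here over time)
```

Note the description answers "when should I fire?", not "what is this?".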
■ Hooks (5)
use on-demand hooks in skills — /careful blocks destructive commands, /freeze blocks edits outside a directory
measure skill usage with a PreToolUse hook to find popular or undertriggering skills
use a PostToolUse hook to auto-format code — Claude generates well-formatted code, the hook handles the last 10% to avoid CI failures
route permission requests to Opus via a hook — let it scan for attacks and auto-approve safe ones 🚫👶
use a Stop hook to nudge Claude to keep going or verify its work at the end of a turn
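The auto-format tip as a PostToolUse hook in settings.json; the matcher and stdin-JSON shape follow Claude Code's hooks schema, but the jq/prettier pipeline is an illustrative assumption:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

The hook receives the tool call as JSON on stdin, so jq extracts the edited file path and prettier reformats only that file.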
■ Workflows (6)
vanilla Claude Code beats any custom workflow for smaller tasks
use /model to select model and reasoning, /context to see context usage, /usage to check plan limits, /extra-usage to configure overflow billing, /config to configure settings — use Opus for plan mode and Sonnet for code to get the best of both
always set thinking mode to true (to see reasoning) and Output Style to Explanatory (to see detailed output with ★ Insight boxes) in /config for better understanding of Claude's decisions
/focus mode hides all intermediate work and shows only the final result — trust the model to run the right commands and just look at the outcome (toggle with /focus)
tune effort level with Opus 4.7's adaptive thinking — low for speed and fewer tokens, max for most intelligence (slider: low · medium · high · xhigh · max)
■ Workflows Advanced (9)
use ASCII diagrams liberally to understand your architecture
use /loop for local recurring monitoring (up to 7 days) · use /schedule for cloud-based recurring tasks that run even when your machine is off
use /permissions with wildcard syntax (Bash(npm run *), Edit(/docs/**)) instead of --dangerously-skip-permissions
use /sandbox to reduce permission prompts with file and network isolation — an 84% reduction internally
invest in product verification skills (signup-flow-driver, checkout-verifier) — worth spending a week to perfect
use auto mode instead of --dangerously-skip-permissions — a model-based classifier decides whether each command is safe and auto-approves it, pausing to ask if it's risky. Shift+Tab cycles Ask → Plan → Auto modes 🚫👶
use /less-permission-prompts skill to scan session history for safe bash/MCP commands that repeatedly prompt, then get a recommended allowlist to paste into settings
build a /go skill that (1) tests end-to-end via bash/browser/computer use (2) runs /simplify (3) puts up a PR — so when you come back, you know the code works 🚫👶
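The wildcard tips above as a settings.json permissions block (also the paste target for a /less-permission-prompts allowlist); an illustrative sketch reusing the patterns named in the tips:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run *)",
      "Edit(/docs/**)"
    ],
    "deny": [
      "Bash(rm -rf *)"
    ]
  }
}
```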
■ Git / PR (5)
keep PRs small and focused — p50 of 118 lines (141 PRs, 45K lines changed in a day), one feature per PR, easier to review and revert
always squash merge PRs — clean linear history, one commit per feature, easy git revert and git bisect
commit often — at least once per hour, and as soon as a task is completed
tag @claude on a coworker's PR to auto-generate lint rules for recurring review feedback — automate yourself out of code review 🚫👶
use /code-review for multi-agent PR analysis — catches bugs, security vulnerabilities, and regressions before merge
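The squash-merge tip above, sketched in a throwaway demo repo (branch and file names are hypothetical):

```shell
# Squash-merge collapses a multi-commit feature branch into one commit on main.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > base.txt && git add . && git commit -qm "init"
git switch -q -c feature-x
echo one > one.txt && git add . && git commit -qm "wip 1"
echo two > two.txt && git add . && git commit -qm "wip 2"
git switch -q main
git merge --squash feature-x          # stages the combined diff without committing
git commit -qm "feat: add feature-x"
git rev-list --count HEAD             # prints 2: init plus one squashed feature commit
```

One commit per feature is what makes `git revert` and `git bisect` cheap later.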
■ Debugging (7)
make it a habit to take screenshots and share with Claude whenever you are stuck with any issue
CLAUDE.md (4)
What exactly should you put inside your CLAUDE.md — and what should you leave out?
If you already have a CLAUDE.md, is a separate constitution.md or rules.md actually needed?
How often should you update your CLAUDE.md, and how do you know when it's become stale?
Why does Claude still ignore CLAUDE.md instructions — even when they say MUST in all caps? (reddit)
Agents, Skills & Workflows (6)
When should you use a command vs an agent vs a skill — and when is vanilla Claude Code just better?
How often should you update your agents, commands, and workflows as models improve?
Should you have a generalist subagent or a feature-specific/role-specific agent? Does giving your subagent a detailed persona improve quality, and what does a "perfect persona prompt" for research/vision look like?
Should you rely on Claude Code's built-in plan mode — or build your own planning command/agent that enforces your team's workflow?
If you have a personal skill (e.g., /implement with your coding style), how do you incorporate community skills (e.g., /simplify) without conflicts — and who wins when they disagree?
Are we there yet? Can we convert an existing codebase into specs, delete the code, and have AI regenerate the exact same code from those specs alone?
Specs & Documentation (3)
Should every feature in your repo have a spec as a markdown file?
How often do you need to update specs so they don't become obsolete when a new feature is implemented?
When implementing a new feature, how do you handle the ripple effect on specs for other features?
REPORTS
1. Read the repo like a course: learn what commands, agents, skills, and hooks are before trying to use them.
2. Clone this repo and play with the examples: try /weather-orchestrator, listen to the hook sounds, run agent teams, and see how things actually work.
3. Go to your own project and ask Claude to suggest which best practices from this repo you should add; give it this repo as a reference so it knows what's possible.