Track AI Code all the way to production
An open-source Git extension for tracking AI code through the entire SDLC.
$ git commit -a -m "Any commit"
Commit normally with Git. No workflow changes.
Just Install and Commit
Build as usual. Prompt, edit, commit. Git AI tracks the AI tool, model, and prompt that generated every line of AI code that enters your codebase.
Install (Mac, Linux, Windows)

How it works
Coding agents call the Git AI CLI to mark the lines they generated. On commit, the AI attributions are saved into a Git note and accurately tracked by Git AI through rebases, merges, cherry-picks, and other history rewrites.
hooks/post_clone_hook.rs
  promptid1  lines 6-8
  promptid2  lines 16, 21, 25
---
{
  "prompts": {
    "promptid1": {
      "agent_id": { "tool": "copilot", "model": "Codex 5.2" },
      "human_author": "Alice Person",
      "summary": "Reported on GitHub #821: Git AI tries fetching authorship notes for interrupted (CTRL-C) clones. Fix: guard note fetching on successful clone.",
      ...
    },
    "promptid2": {
      "agent_id": { "tool": "cursor", "model": "Sonnet 4.5" },
      "human_author": "Jeff Coder",
      "summary": "Match the style of Git Clone's output to report success or failure of the notes fetch operation.",
      ...
    }
  }
}
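Because attributions live in ordinary Git notes, they can also be inspected with stock Git once the notes are fetched. A minimal sketch using standard git commands, assuming the attribution notes live under a refs/notes/ai ref (the actual ref name Git AI uses may differ):

# Fetch the attribution notes from the remote (ref name is an assumption for illustration)
$ git fetch origin refs/notes/ai:refs/notes/ai
# Print the JSON attribution note attached to the latest commit
$ git notes --ref=ai show HEAD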
Git AI maintains the open standard for tracking AI authorship in Git Notes. Learn more on GitHub
Multi-agent — The world is and will continue to be multi-agent. Git AI is vendor-agnostic and open.
Own your data — Git AI collects usage data from every coding agent and keeps your AI-usage and prompt data in your own hands.
"Detecting" AI-code is an anti-pattern. — Git AI doesn't guess if a hunk is AI-generated. The Coding Agents that support our standard tell Git AI exactly which lines they generated resulting in the most accurate AI-attribution possible.
Git-native & open standard — Git AI built the open standard for tracking AI-generated code with Git Notes.
Local-first — Works offline, no Anthropic key required.
AI Blame
Codebases are growing faster than ever, but massive AI-generated codebases are challenging to maintain. Git AI links each line of code to the prompt that generated it, helping you answer the question "why is this like that?" and giving Agents more context about what your code is trying to do.
$ git-ai blame src/main.rs

use std::io;

fn main() {
    let mut input = String::new();
    match io::stdin().read_line(&mut input) {
        Ok(_) => println!("You entered: {}", input.trim()),
        Err(e) => eprintln!("Error: {}", e),
    }
}

AI agents + models show up in Git AI blame.
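The blame view can also be approximated by hand with plain Git, which is handy in scripts or CI. A rough sketch, again assuming a refs/notes/ai notes ref: it finds the commit that last touched a given line, then prints that commit's attribution note, whose prompt IDs map back to line ranges as in the example above.

# Find the commit that last modified line 16 of src/main.rs
$ commit=$(git blame -L 16,16 --porcelain src/main.rs | head -n1 | cut -d' ' -f1)
# Show the attribution note (tool, model, prompt summary) recorded for that commit
$ git notes --ref=ai show "$commit"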
Personal Prompt Analysis
Git AI knows which prompts led nowhere, which AI-aided blocks were changed during code review, and which AI code ended up becoming a durable part of your codebase — real signals about which practices work that every engineer can learn from.
Launches Codex background agents each night, spends days editing and reviewing manually
All-in on Plan Mode — designs upfront, then lets AI implement in one shot
Pair-programs with Claude in chat, iterates rapidly on small changes
@mentioning specific functions
Including test examples
Describing edge cases upfront