Track AI Code all the way to production

An open-source Git extension for tracking AI code through the entire SDLC.

Tracking Code by

Cursor
Claude Code
GitHub Copilot
Gemini
OpenCode
RovoDev
$ git commit -a -m "Any commit"
[main a731df7] checkpoint
 1 file changed, 70 insertions(+), 10 deletions(-)
 you 57% | ai 43%
 80% AI accepted | 4min wait

No workflow changes

Just Install and Commit

Build as usual. Prompt, edit, commit. Git AI tracks the AI tool, model, and prompt that generated every line of AI code that enters your codebase.

Install (Mac, Linux, Windows)

How it works

Coding Agents call the Git AI CLI to mark the lines they generated. On commit, the AI attributions are saved into a Git Note and accurately tracked by Git AI through rebases, merges, cherry-picks, and more.

Git Note (refs/notes/ai #<commit-sha>)
hooks/post_clone_hook.rs
  promptid1 6-8
  promptid2 16,21,25
---
{
  "prompts": {
    "promptid1": {
      "agent_id": {
        "tool": "copilot",
        "model": "Codex 5.2"
      },
      "human_author": "Alice Person",
      "summary": "Reported on GitHub #821: Git AI tries fetching authorship notes for interrupted (CTRL-C) clones. Fix: guard note fetching on successful clone.",
      ...
    },
    "promptid2": {
      "agent_id": {
        "tool": "cursor",
        "model": "Sonnet 4.5"
      },
      "human_author": "Jeff Coder",
      "summary": "Match the style of Git Clone's output to report success or failure of the notes fetch operation.",
      ...
    }
  }
}
 1  pub fn post_clone_hook(
 2      parsed_args: &ParsedGitInvocation,
 3      exit_status: std::process::ExitStatus,
 4  ) -> Option<()> {
 5
 6      if !exit_status.success() {
 7          return None;
 8      }
 9
10      let target_dir =
11          extract_clone_target_directory(&parsed_args.command_args)?;
12
13      let repository =
14          find_repository_in_path(&target_dir).ok()?;
15
16      print!("Fetching authorship notes from origin");
17
18      match fetch_authorship_notes(&repository, "origin") {
19          Ok(()) => {
20              debug_log("successfully fetched authorship notes from origin");
21              print!(", done.\n");
22          }
23          Err(e) => {
24              debug_log(&format!("authorship fetch from origin failed: {}", e));
25              print!(", failed.\n");
26          }
27      }
28
29      Some(())
30  }

Git AI maintains the open standard for tracking AI authorship in Git Notes. Learn more on GitHub
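The note storage itself is plain Git. A minimal sketch using only stock git commands (the refs/notes/ai namespace comes from the standard above; the note body here is illustrative, not the full schema):

```shell
# Sketch: AI attributions live in Git Notes under refs/notes/ai,
# so they can be created, read, and synced with ordinary git.
set -e
cd "$(mktemp -d)"
git init --quiet
git config user.email "dev@example.com" && git config user.name "Dev"

echo 'fn main() {}' > main.rs
git add main.rs
git commit --quiet -m "checkpoint"

# Attach an attribution note (prompt id + line number) to the commit:
git notes --ref=ai add -m 'promptid1 1' HEAD

# Read it back:
git notes --ref=ai show HEAD
# prints: promptid1 1

# Notes refs are ordinary refs, so they sync like branches:
#   git fetch origin refs/notes/ai:refs/notes/ai
```

Because notes travel as refs, attribution survives clones and pushes without touching commit messages or file contents.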

Why Git AI?

- Multi-agent: The world is, and will continue to be, multi-agent. Git AI is vendor-agnostic and open.

- Own your data: Git AI collects data from every Coding Agent and lets you own your AI-usage and prompt data.

- "Detecting" AI code is an anti-pattern: Git AI doesn't guess whether a hunk is AI-generated. Coding Agents that support our standard tell Git AI exactly which lines they generated, resulting in the most accurate AI attribution possible.

- Git-native & open standard: Git AI built the open standard for tracking AI-generated code with Git Notes.

- Local-first: Works offline; no Anthropic key required.

AI Blame

Codebases are growing faster than ever, but massive AI-generated codebases are challenging to maintain. Git AI links each line of code to the prompt that generated it, helping you answer the question "why is this like that?" and giving Agents more context about what your code is trying to do.
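To see how line-level attribution can resolve, here is a hypothetical sketch in plain awk over the note format shown earlier ("<prompt-id> <lines>", with ranges like 6-8 and lists like 16,21,25); git-ai ships its own blame implementation, so the helper below is purely illustrative:

```shell
# Hypothetical sketch: given a note body, find which prompt covers line 7.
note='promptid1 6-8
promptid2 16,21,25'

line=7
echo "$note" | awk -v n="$line" '{
  split($2, parts, ",")              # second field: comma-separated entries
  for (i in parts) {
    if (parts[i] ~ /-/) {            # range entry, e.g. 6-8
      split(parts[i], r, "-")
      if (n >= r[1] && n <= r[2]) print $1
    } else if (parts[i] == n) {      # single-line entry, e.g. 21
      print $1
    }
  }
}'
# prints: promptid1
```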

AI Blame Command
$ git-ai blame src/main.rs
a1b2c3d John  1  use std::io;
a1b2c3d John  2
e4f5g6h Jane  3  fn main() {
e4f5g6h Jane  4      let mut input = String::new();
e4f5g6h AI    5      match io::stdin().read_line(&mut input) {
e4f5g6h AI    6          Ok(_) => println!("You entered: {}", input.trim()),
e4f5g6h AI    7          Err(e) => eprintln!("Error: {}", e),
e4f5g6h AI    8      }
e4f5g6h Jane  9  }

AI in Git blame

IDE Plugins
commands.rs (editor view)
@Claude make sure our CLI can accept stdin on any platform.

Personal Prompt Analysis

Git AI knows which prompts led nowhere, which AI-aided blocks were changed during code review, and which AI code ended up becoming a durable part of your codebase: real signals about which practices work that every engineer can learn from.

Metric                         3 mo ago    Now
Parallel Agents                1.1         4
Agents busy (daily average)    3 hrs       11 hrs
Generated vs Shipped LoC       110:1       20:1

Metric           With Plan Mode    Without
Accepted Rate    87%               61%
Durability       8.2 weeks         4.5 weeks
Avg Scope        ~391 lines        ~142 lines
Sarah M. (Background Agent Runner)
└─ Launches Codex background agents each night, spends days editing and reviewing manually

Dave K. (Plan Mode Power User)
└─ All-in on Plan Mode: designs upfront, then lets AI implement in one shot

Jordan P. (Rapid Iterator)
└─ Pair-programs with Claude in chat, iterates rapidly on small changes

@mentioning specific functions: leads to DRYer code
Including test examples: 2.3x higher acceptance rate
Describing edge cases upfront: fewer code review changes

Category          Lines    Accepted
🐛 Bug            412      41:1
Feature           1,847    18:1
🔧 Enhancement    623      10:1
♻️ Refactor       891      25:1

Get Started