[TRTLLM-1543][feat] Account for reusable KV cache blocks in capacity … #11490

Merged

SimengLiu-nv merged 2 commits into NVIDIA:main from SimengLiu-nv:JIRA-1543
Feb 19, 2026
Conversation

@SimengLiu-nv (Collaborator) commented Feb 13, 2026

…scheduling

The CapacityScheduler was over-estimating block requirements for requests with shared prefixes, causing unnecessary scheduling delays.

Changes:

  • Add countReusableBlocks() to count cached blocks from the radix tree
  • Modify getNeededBlocksOneStep() to subtract reusable blocks from estimates
  • Modify getRemainingBlocksToCompletion() to account for reusable context blocks, but only when the sequence has not yet been added
  • Add unit tests for reuse-aware capacity scheduling
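The adjustment described above can be sketched as follows. This is an illustrative sketch only: the function and type names are simplified stand-ins, not the actual TensorRT-LLM API.

```cpp
#include <algorithm>
#include <cstdint>

using SizeType32 = std::int32_t;

// Blocks needed for a context without reuse: ceil(promptLen / tokensPerBlock).
SizeType32 neededContextBlocks(SizeType32 promptLen, SizeType32 tokensPerBlock)
{
    return (promptLen + tokensPerBlock - 1) / tokensPerBlock;
}

// Reuse-aware estimate: subtract blocks already cached for the shared prefix.
// The discount is guarded the same way the PR guards it: it applies only when
// block reuse is enabled and variable window attention is not active.
SizeType32 neededBlocksWithReuse(SizeType32 promptLen, SizeType32 tokensPerBlock,
    SizeType32 reusableBlocks, bool reuseEnabled, bool variableWindowAttention)
{
    auto needed = neededContextBlocks(promptLen, tokensPerBlock);
    if (reuseEnabled && !variableWindowAttention)
    {
        needed -= std::min(needed, reusableBlocks);
    }
    return needed;
}
// e.g. a 100-token prompt with 16 tokens per block needs 7 blocks; if 4 prefix
// blocks are already cached and reuse is enabled, only 3 must be allocated.
```

Without the discount, the scheduler charges every request the full ceil(promptLen / tokensPerBlock), which is the over-estimation this PR removes for shared-prefix requests.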

Summary by CodeRabbit

  • New Features

    • Improved KV cache optimization to enable reuse across requests with shared context, allowing better resource utilization and increased batch capacity.
  • Tests

    • Expanded test coverage for KV cache reuse accounting and batch scheduling scenarios.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@coderabbitai
Contributor

coderabbitai bot commented Feb 13, 2026

📝 Walkthrough

Walkthrough

Introduces block reuse counting to the KV cache management classes by adding countReusableBlocks() methods. Integrates reuse accounting into block allocation calculations, with conditional guards ensuring it applies only when block reuse is enabled and variable window attention is not active.

Changes

Cohort / File(s): Summary

  • KV Cache Manager Headers (cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h)
    Added countReusableBlocks() method declarations to WindowBlockManager, BlockManager, BaseKVCacheManager (virtual), and KVCacheManager (override) to compute reusable blocks for given tokens and requests.
  • KV Cache Manager Implementation (cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp)
    Implemented countReusableBlocks() in BlockManager, WindowBlockManager, and KVCacheManager with block traversal logic and debug logging. Integrated reuse accounting into getNeededBlocksOneStep() and getRemainingBlocksToCompletion() with contextual guards for block reuse and non-variable-window-attention scenarios.
  • Reuse-Aware Scheduling Tests (cpp/tests/unit_tests/batch_manager/capacitySchedulerTest.cpp)
    Added test cases validating reuse-aware capacity scheduling: ReuseAwareSchedulingAllowsMoreRequestsWithSharedPrefix, ReuseAwareSchedulingWithPartialPrefixMatch, NoReuseWithDifferentPrompts, and ReuseAwareSchedulingMaxUtilizationPolicy.
  • KV Cache Manager Unit Tests (cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp)
    Added seven unit tests for reuse accounting covering scenarios: no match returns zero, partial matches, full reuse, disabled reuse, remaining block calculations, one-step block needs, and multiple requests with shared prefixes.
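A minimal sketch of the counting idea described for countReusableBlocks(): the real implementation walks the radix tree of cached blocks; here a set of hashed token prefixes stands in for that tree, and hashPrefix and countReusableBlocksSketch are illustrative names, not the real code.

```cpp
#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

using TokenIdType = std::int32_t;

// Hash of the first `end` tokens; each full-block prefix gets one key.
std::size_t hashPrefix(std::vector<TokenIdType> const& tokens, std::size_t end)
{
    std::size_t h = 0;
    for (std::size_t i = 0; i < end; ++i)
        h = h * 1000003u + static_cast<std::size_t>(static_cast<std::uint32_t>(tokens[i]));
    return h;
}

// Count leading full blocks of `tokens` already present in the cache. Only
// full blocks count, and matching stops at the first miss, the same way
// prefix matching stops descending the radix tree.
int countReusableBlocksSketch(std::vector<TokenIdType> const& tokens,
    std::size_t tokensPerBlock, std::set<std::size_t> const& cachedPrefixes)
{
    int reusable = 0;
    for (std::size_t end = tokensPerBlock; end <= tokens.size(); end += tokensPerBlock)
    {
        if (cachedPrefixes.count(hashPrefix(tokens, end)) == 0)
            break;
        ++reusable;
    }
    return reusable;
}
```

For example, if the cache holds the two 4-token blocks of {1..8}, a request sharing that prefix and then diverging counts 2 reusable blocks, while an unrelated prompt counts 0 — the three outcomes (no match, partial match, full reuse) the unit tests above exercise.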

Sequence Diagram(s)

sequenceDiagram
    participant Client as Scheduler/Client
    participant KVM as KVCacheManager
    participant BM as BlockManager
    participant WBM as WindowBlockManager
    
    Client->>KVM: getNeededBlocksOneStep(request)
    activate KVM
    KVM->>BM: countReusableBlocks(tokens, request)
    activate BM
    BM->>WBM: traverse blocks & count matches
    activate WBM
    WBM-->>BM: reusable block count
    deactivate WBM
    BM-->>KVM: reusable blocks
    deactivate BM
    KVM->>KVM: subtract reusable from shared blocks<br/>(if reuse enabled & not var window)
    KVM-->>Client: adjusted block needs
    deactivate KVM
    
    Client->>KVM: getRemainingBlocksToCompletion(request)
    activate KVM
    KVM->>BM: countReusableBlocks(tokens, request)
    activate BM
    BM-->>KVM: reusable context blocks
    deactivate BM
    KVM->>KVM: calculate effective context blocks<br/>(if reuse enabled & applicable)
    KVM-->>Client: adjusted remaining blocks
    deactivate KVM

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check ❓ Inconclusive: The description provides a concise explanation of the problem and lists the key changes, but lacks details on the motivation, design rationale, and specific test coverage references. Resolution: expand the Description section to explain why reuse accounting was needed and how it improves scheduling, and add specific test case names to the Test Coverage section.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title clearly and specifically describes the main change: accounting for reusable KV cache blocks in capacity scheduling to address over-estimation.
  • Merge Conflict Detection ✅ Passed: No merge conflicts detected when merging into main.


@tensorrt-cicd
Collaborator

PR_Github #35842 [ run ] triggered by Bot. Commit: af44ec0

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (1)

1-15: ⚠️ Potential issue | 🟠 Major

Update the copyright year to reflect 2026 modifications.

This file was modified in 2026 but still lists 2022–2024. Please bump the year.

✏️ Suggested update
- * Copyright (c) 2022-2024, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2022-2026, NVIDIA CORPORATION.  All rights reserved.

As per coding guidelines "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification. Use the Apache License 2.0 format. This applies to .cpp, .h, .cu, .py, and other compiled or interpreted source files."

cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (1)

1-16: ⚠️ Potential issue | 🟠 Major

Update the copyright year to reflect 2026 modifications.

This file was modified in 2026 but still lists 2025. Please bump the year.

✏️ Suggested update
- * SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification. Use the Apache License 2.0 format. This applies to .cpp, .h, .cu, .py, and other compiled or interpreted source files."

cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp (1)

1-3: ⚠️ Potential issue | 🟡 Minor

Update the copyright year to 2026.

This file was modified in 2026, but the SPDX header still ends at 2025.

🛠️ Suggested update
- * SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2023-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines, “All source files must contain an NVIDIA copyright header with the year of latest meaningful modification. Use the Apache License 2.0 format.”

cpp/tests/unit_tests/batch_manager/capacitySchedulerTest.cpp (1)

2-2: ⚠️ Potential issue | 🟡 Minor

Update copyright year to 2026.

New test cases are being added in 2026, so the copyright header should reflect this. As per coding guidelines, "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification."

- * SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2023-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
🤖 Fix all issues with AI agents
In `@cpp/tests/unit_tests/batch_manager/capacitySchedulerTest.cpp`:
- Around line 2029-2034: This test captures numIterations from runTest but
doesn't assert it; add an assertion that numIterations equals the expected
number of iterations (use the existing expectedStates to derive that), e.g. add
EXPECT_EQ(numIterations, expectedStates.size()) after the runTest call to mirror
other tests (referencing numIterations, runTest, and expectedStates).

In `@cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp`:
- Around line 5824-5863: In TEST
KVCacheManagerReuseAccountingTest::CountReusableBlocksNoMatchReturnsZero,
replace the magic literal 42 used when initializing uniqueTokens with a named
constexpr (e.g., constexpr TokenIdType kTokenSeed = 42) and use that constant in
the std::vector construction for uniqueTokens so the token seed is not a raw
literal; update any nearby comments or variable names if helpful.
- Around line 5753-5822: The test KVCacheManagerReuseAccountingTest uses a magic
literal 7 when constructing baseTokens; introduce a named constexpr (e.g.,
constexpr TokenIdType kTokenSeed = 7) and replace the literal in the baseTokens
initialization with that constant to follow the guideline; update the symbol
near the test body where baseTokens is created (the vector initialization in
KVCacheManagerReuseAccountingTest) so the seeded token value is defined once and
used for clarity and maintainability.
- Around line 6105-6183: The test KVCacheManagerReuseAccountingTest uses magic
literals 1000 and 2000 when building tokens0 and tokens1; replace these with
named constexprs (e.g., constexpr TokenIdType kUniqueSuffixBase0 = 1000;
constexpr TokenIdType kUniqueSuffixBase1 = 2000;) declared near the top of the
test, then use those constants when pushing suffix tokens into tokens0 and
tokens1 so the literals are no longer hard-coded in the loops that build the
unique suffixes.
- Around line 5865-5931: Extract the magic literal 999 into a named constexpr
(e.g. constexpr TokenIdType kDivergentToken = 999;) and use it when filling the
tail of partialMatchTokens (replace the std::fill(...) call that sets 999 with
kDivergentToken); ensure the constant uses the TokenIdType type and is declared
near the test setup so it’s obvious (refer to partialMatchTokens and req1 in
this test, and to the std::fill(...) that currently writes 999).
🧹 Nitpick comments (1)
cpp/tests/unit_tests/batch_manager/capacitySchedulerTest.cpp (1)

1913-1914: Misleading initializer value 42 is immediately overwritten by std::iota.

The vector is initialized with fill value 42, then immediately overwritten. Use 0 or simply size-only construction for clarity. Same pattern at Line 2056-2057.

Suggested fix
-    auto inputTokens = std::make_shared<std::vector<int32_t>>(promptLen, 42);
+    auto inputTokens = std::make_shared<std::vector<int32_t>>(promptLen);
     std::iota(inputTokens->begin(), inputTokens->end(), 0);

And similarly at Line 2056:

-    auto inputTokens = std::make_shared<std::vector<int32_t>>(promptLen, 42);
+    auto inputTokens = std::make_shared<std::vector<int32_t>>(promptLen);
     std::iota(inputTokens->begin(), inputTokens->end(), 0);

@SimengLiu-nv SimengLiu-nv requested a review from eopXD February 13, 2026 02:01
@tensorrt-cicd
Collaborator

PR_Github #35842 [ run ] completed with state SUCCESS. Commit: af44ec0
/LLM/main/L0_MergeRequest_PR pipeline #27683 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@thorjohnsen
Collaborator

I think that the way to extend this to SWA is to run capacityScheduler for each window size, but that extension can be left for a follow-up PR.

@thorjohnsen (Collaborator) left a comment


It looks reasonable to me.

@thorjohnsen
Collaborator

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35953 [ run ] triggered by Bot. Commit: af44ec0

@tensorrt-cicd
Collaborator

PR_Github #35953 [ run ] completed with state SUCCESS. Commit: af44ec0
/LLM/main/L0_MergeRequest_PR pipeline #27768 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #36098 [ run ] triggered by Bot. Commit: 3671371

@tensorrt-cicd
Collaborator

PR_Github #36098 [ run ] completed with state SUCCESS. Commit: 3671371
/LLM/main/L0_MergeRequest_PR pipeline #27894 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #36173 [ run ] triggered by Bot. Commit: 3671371

@tensorrt-cicd
Collaborator

PR_Github #36173 [ run ] completed with state SUCCESS. Commit: 3671371
/LLM/main/L0_MergeRequest_PR pipeline #27956 completed with status: 'SUCCESS'


@SimengLiu-nv SimengLiu-nv merged commit 353fd33 into NVIDIA:main Feb 19, 2026
5 checks passed
SimengLiu-nv added a commit to SimengLiu-nv/TensorRT-LLM that referenced this pull request Feb 23, 2026
…tch scheduler capacity scheduling

  Follow-up to NVIDIA#11490, which enables reuse across requests with shared context, allowing better resource utilization and increased batch capacity.

  This PR enables the micro batch scheduler to account for cached KV blocks from shared prefixes when making scheduling decisions. The capacity scheduler populates estimatedReusableTokens on requests (via radix tree lookup), and the micro batch scheduler subtracts these from the compute budget, allowing more requests to fit.

Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
SimengLiu-nv added a commit to SimengLiu-nv/TensorRT-LLM that referenced this pull request Feb 24, 2026
…tch scheduler capacity scheduling

  Follow-up to NVIDIA#11490, which enables reuse across requests with shared context, allowing better resource utilization and increased batch capacity.

  This PR enables the micro batch scheduler to account for cached KV blocks from shared prefixes when making scheduling decisions. The capacity scheduler populates estimatedReusableTokens on requests (via radix tree lookup), and the micro batch scheduler subtracts these from the compute budget, allowing more requests to fit.

Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
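The budget accounting described in the follow-up commit message can be sketched as below. RequestSketch, countSchedulable, and the greedy budget walk are illustrative assumptions for this note, not the real micro batch scheduler code.

```cpp
#include <cstdint>
#include <vector>

struct RequestSketch
{
    std::int32_t promptTokens;
    std::int32_t estimatedReusableTokens; // populated via radix tree lookup
};

// Greedily admit requests while their non-reused token cost fits the budget.
int countSchedulable(std::vector<RequestSketch> const& pending, std::int32_t tokenBudget)
{
    int scheduled = 0;
    for (auto const& req : pending)
    {
        // Only tokens that are not already cached consume compute budget.
        auto cost = req.promptTokens - req.estimatedReusableTokens;
        if (cost > tokenBudget)
            break;
        tokenBudget -= cost;
        ++scheduled;
    }
    return scheduled;
}
// e.g. with a 100-token budget and three 60-token prompts, only one fits when
// nothing is reusable; if 40 tokens of each prompt are cached, each costs 20
// and all three fit, which is the batch-capacity gain the follow-up targets.
```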