
[None][feat] Use new index api, add block scale support, fix max_seq_len esitmation, add flash mla support#11334

Merged
yizhang-nv merged 5 commits into NVIDIA:main from yizhang-nv:kv-cache-manager-v2-idx-scale
Feb 15, 2026

Conversation

@yizhang-nv
Member

@yizhang-nv yizhang-nv commented Feb 6, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for block-scaling KV cache quantization format (NVFP4).
    • Added method to retrieve per-sequence KV cache block IDs.
  • Improvements

    • Enhanced KV cache management with explicit model sequence length constraints for better memory utilization.
    • Refactored KV cache offset computation logic for improved performance.
  • Tests

    • Removed architecture-specific test skip conditions for improved test coverage.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allows the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevents the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
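For reference, typical invocations composed from the flags documented above look like the following (illustrative examples, not an exhaustive list):

```
/bot run
/bot run --disable-fail-fast
/bot run --stage-list "A10-PyTorch-1" --debug
/bot run --gpu-type "H100_PCIe" --test-backend "pytorch"
/bot skip --comment "Docs-only change"
/bot reuse-pipeline
/bot kill
```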

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous; without careful user validation it can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous; without careful user validation it can break top of tree.

@yizhang-nv yizhang-nv requested review from a team as code owners February 6, 2026 05:53
@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from 7ea1b99 to e43e829 on February 6, 2026 05:54
@yizhang-nv yizhang-nv changed the title from "[None][feat] Use new index api, add block scale support, fix max_seq_len esitmation" to "[None][feat] Use new index api, add block scale support, fix max_seq_len esitmation, add flash mla support" on Feb 6, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 6, 2026

📝 Walkthrough

The PR refactors KV cache offset computation by removing a conditional branching mechanism and replacing it with parameterized scaling via new indexScales and kvOffset tensors passed through the kernel signature. Concurrently, it introduces a model_max_seq_len parameter to sequence length clamping logic, renames quantization-related enum roles to scale variants for NVFP4 support, adds a new get_block_ids_per_seq method to KV cache managers, and removes architecture-specific test skip logic.
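The core of the kernel refactor is replacing per-pool conditional branching with a uniform affine transform on the source index. A minimal Python sketch of the idea follows; the function name, pool layout, and the example scale/offset values are illustrative assumptions, not the actual CUDA kernel code:

```python
def compute_dst_offsets(src_indices, pool_ids, index_scales, kv_offset):
    """Compute destination offsets without per-pool branching.

    Instead of `if copy_v_idx: ... else: ...`, every pool supplies a
    multiplicative scale and an additive offset, so a single code path
    handles both K and V index layouts.
    """
    return [
        src * index_scales[pool] + kv_offset[pool]
        for src, pool in zip(src_indices, pool_ids)
    ]

# Hypothetical layout: pool 0 holds K indices (scale 2, no offset),
# pool 1 holds V indices (scale 2, offset 1 into the interleaved pool).
offsets = compute_dst_offsets(
    src_indices=[0, 1, 2, 3],
    pool_ids=[0, 0, 1, 1],
    index_scales=[2, 2],
    kv_offset=[0, 1],
)
print(offsets)  # [0, 2, 5, 7]
```

Because the branch is gone, the same kernel instantiation serves every pool; a pool that needs no transform simply passes scale 1 and offset 0.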

Changes

  • CUDA Kernel and C++ Binding Refactoring — cpp/tensorrt_llm/batch_manager/kvCacheManagerV2Utils.cu, cpp/tensorrt_llm/batch_manager/kvCacheManagerV2Utils.h, cpp/tensorrt_llm/nanobind/batch_manager/kvCacheManagerV2Utils.cpp
    Removed the conditional COPY_V_IDX branching; the kernel now always computes destination offsets by scaling source values with indexScales and applying kvOffset. The host launcher signature changed to accept indexScales and kvOffset tensors instead of a copyVIdx boolean flag. The Python binding was updated to pass the new tensor parameters and unwrap them from PyTorch types.
  • Resource Manager API Updates — tensorrt_llm/_torch/pyexecutor/resource_manager.py
    Renamed enum roles KEY_BLOCK_QUANT and VALUE_BLOCK_QUANT to KEY_BLOCK_SCALE and VALUE_BLOCK_SCALE for NVFP4 support. Extended get_num_available_tokens signatures to accept a model_max_seq_len parameter. Added a new get_block_ids_per_seq method returning per-sequence block ID tensors. Updated set_page_index_buf calls to set_base_page_index_buf for base-page alignment. Modified cache-byte calculations to recognize the new block-scale roles.
  • Model Engine Warmup Integration — tensorrt_llm/_torch/pyexecutor/model_engine.py
    Propagated model_max_seq_len=self.max_seq_len through KV cache availability checks in multiple warmup paths (_general_warmup, _run_autotuner_warmup, _create_warmup_request, _create_cuda_graph_warmup_request), influencing available-token capacity calculations.
  • KV Cache Manager Interface — tensorrt_llm/runtime/kv_cache_manager_v2/__init__.pyi, tensorrt_llm/runtime/kv_cache_manager_v2/_core/_kv_cache_manager.py
    Extended the clamp_max_seq_len_for_mem signature to accept a model_max_seq_len parameter. Refactored the search logic to derive the upper bound from model_max_seq_len and cap the final result to that bound, replacing exponential growth estimation.
  • Test Cleanup — tests/unittest/_torch/attention/test_attention_mla.py
    Removed architecture-specific skip logic and the getSMVersion dependency check for v2_kv_cache on non-Blackwell architectures, allowing the test to run unconditionally.
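The max_seq_len fix above bounds the memory-fit search by the model's own sequence-length limit instead of probing with exponential growth. A hedged Python sketch of that search strategy follows; the function shape and the fits_in_memory predicate are illustrative assumptions, not the real implementation in _kv_cache_manager.py:

```python
def clamp_max_seq_len_for_mem(model_max_seq_len, fits_in_memory):
    """Find the largest sequence length <= model_max_seq_len that fits in memory.

    Binary search over [1, model_max_seq_len]: the model limit serves as
    the search upper bound, and the result is capped by it, so there is
    no need for exponential upper-bound estimation.
    """
    lo, hi, best = 1, model_max_seq_len, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits_in_memory(mid):
            best, lo = mid, mid + 1  # mid fits; try longer sequences
        else:
            hi = mid - 1             # mid does not fit; try shorter
    return min(best, model_max_seq_len)

# Example: KV cache memory fits up to 6000 tokens, model limit is 8192.
print(clamp_max_seq_len_for_mem(8192, lambda n: n <= 6000))  # 6000
```

Capping by model_max_seq_len also means that when memory is plentiful, the clamp returns the model limit itself rather than an overestimate from unbounded growth.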

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 28.57%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check — ⚠️ Warning. The PR description is empty, containing only the template with no actual content in the Description, Test Coverage, or checklist fields. Resolution: provide a clear description explaining what changes are made (new index API, block scale support, max_seq_len fix, flash MLA support) and why, list specific test coverage, and verify all PR checklist items are completed.

✅ Passed checks (1 passed)
  • Title check — ✅ Passed. The title clearly summarizes the main changes: using the new index API, adding block scale support, fixing max_seq_len estimation, and adding flash MLA support, all of which are present in the changeset.


@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from e43e829 to 278254e on February 6, 2026 05:57
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/tensorrt_llm/batch_manager/kvCacheManagerV2Utils.h (1)

1-3: ⚠️ Potential issue | 🟡 Minor

Update copyright year to reflect 2026 changes.

🛠️ Suggested update
- * SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines: "All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification".

cpp/tensorrt_llm/batch_manager/kvCacheManagerV2Utils.cu (1)

253-290: ⚠️ Potential issue | 🟠 Major

Validate indexScales/kvOffset shapes before kernel launch.

The kernel indexes both arrays by poolIdx; mismatched shapes can cause OOB reads and memory corruption. Add shape/length checks alongside existing tensor validations.

🛡️ Suggested shape validation
@@
-    auto const& srcShape = input.getShape();
-    auto const& dstShape = output.getShape();
-    auto const& copyIndexShape = copyIndex.getShape();
+    auto const& srcShape = input.getShape();
+    auto const& dstShape = output.getShape();
+    auto const& copyIndexShape = copyIndex.getShape();
+    auto const& indexScalesShape = indexScales.getShape();
+    auto const& kvOffsetShape = kvOffset.getShape();
@@
-    SizeType32 numBlocksPerSeq = srcShape.d[3];
-    SizeType32 numSeqs = copyIndexShape.d[0];
+    SizeType32 numBlocksPerSeq = srcShape.d[3];
+    SizeType32 numSeqs = copyIndexShape.d[0];
+    constexpr int32_t kExpectedVectorDim = 1;
+    TLLM_CHECK(indexScalesShape.nbDims == kExpectedVectorDim);
+    TLLM_CHECK(kvOffsetShape.nbDims == kExpectedVectorDim);
+    TLLM_CHECK_WITH_INFO(indexScalesShape.d[0] >= numPools,
+        "indexScales must have at least numPools=%d elements.", numPools);
+    TLLM_CHECK_WITH_INFO(kvOffsetShape.d[0] >= numPools,
+        "kvOffset must have at least numPools=%d elements.", numPools);
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/pyexecutor/resource_manager.py`:
- Line 1574: Remove the stray debug print in KVCacheManagerV2.__init__: either delete the call to print(config.layers) or replace it with a structured logger call such as logger.debug("layers=%s", config.layers) so stdout is not polluted. If the logger is used, ensure it is imported and available in the class.
🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (2)

1839-1850: Consider vectorizing the per-element Python loop.

The list comprehension on lines 1842–1846 iterates per-element in Python, which can be slow for long sequences. A vectorized approach would be more efficient:

Proposed vectorized implementation
     def get_block_ids_per_seq(self, request_ids: List[int]) -> torch.Tensor:
         block_ids_per_seq = self.get_batch_cache_indices(request_ids)
         block_ids_per_seq_tensors = [
-            torch.tensor([
-                i // self.num_local_layers if i != BAD_PAGE_INDEX else i
-                for i in sublist
-            ],
-                         dtype=torch.int) for sublist in block_ids_per_seq
+            torch.where(
+                (t := torch.tensor(sublist, dtype=torch.int)) != BAD_PAGE_INDEX,
+                t // self.num_local_layers,
+                t,
+            ) for sublist in block_ids_per_seq
         ]
         padded_tensor = torch.nn.utils.rnn.pad_sequence(
             block_ids_per_seq_tensors, batch_first=True, padding_value=0)
         return padded_tensor

Based on learnings: "In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists."


1546-1549: Consider extracting the "nvfp4" check to a local variable.

The string comparison kv_cache_config.dtype == "nvfp4" is repeated four times in __init__. Extracting it to a local boolean would improve readability and reduce the risk of typos.

Proposed refactor

Add near the top of __init__, after self.dtype = dtype:

is_nvfp4 = kv_cache_config.dtype == "nvfp4"

Then replace all four occurrences with is_nvfp4.

Also applies to: 1595-1595, 1612-1612, 1633-1633

@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from 278254e to 19bcac5 on February 6, 2026 06:32
@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35080 [ run ] triggered by Bot. Commit: 19bcac5

Collaborator

@eopXD eopXD left a comment


Looks good to me for the max_seq_len estimation

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35089 [ run ] triggered by Bot. Commit: 05c7805

@tensorrt-cicd
Collaborator

PR_Github #35089 [ run ] completed with state SUCCESS. Commit: 05c7805
/LLM/main/L0_MergeRequest_PR pipeline #27084 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35174 [ run ] triggered by Bot. Commit: 05c7805

@tensorrt-cicd
Collaborator

PR_Github #35174 [ run ] completed with state SUCCESS. Commit: 05c7805
/LLM/main/L0_MergeRequest_PR pipeline #27163 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from 3ae9a75 to fbff29f on February 8, 2026 17:22
@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35241 [ run ] triggered by Bot. Commit: fbff29f

@tensorrt-cicd
Collaborator

PR_Github #35241 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 6 PM PST on 2/8.

@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from fbff29f to 3e8d0d6 on February 9, 2026 00:51
@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35254 [ run ] triggered by Bot. Commit: 3e8d0d6

@tensorrt-cicd
Collaborator

PR_Github #35254 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 6 PM PST on 2/8.

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35274 [ run ] triggered by Bot. Commit: 3e8d0d6

@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from 59a1185 to dcda6be on February 9, 2026 08:35
@yizhang-nv
Member Author

/bot run --disable-fail-fast

@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch 2 times, most recently from c4048db to 4f90a75 on February 11, 2026 10:48
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
@yizhang-nv yizhang-nv force-pushed the kv-cache-manager-v2-idx-scale branch from 4f90a75 to d9bd46f on February 11, 2026 10:50
@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35626 [ run ] triggered by Bot. Commit: d9bd46f

@tensorrt-cicd
Collaborator

PR_Github #35626 [ run ] completed with state SUCCESS. Commit: d9bd46f
/LLM/main/L0_MergeRequest_PR pipeline #27517 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: yizhang-nv <187001205+yizhang-nv@users.noreply.github.com>
@yizhang-nv
Member Author

/bot run --disable-fail-fast

@yizhang-nv
Member Author

/bot kill

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35697 [ kill ] triggered by Bot. Commit: 72184bf

@tensorrt-cicd
Collaborator

PR_Github #35697 [ kill ] completed with state SUCCESS. Commit: 72184bf
Successfully killed previous jobs for commit 72184bf

@tensorrt-cicd
Collaborator

PR_Github #35698 [ run ] triggered by Bot. Commit: 72184bf

@tensorrt-cicd
Collaborator

PR_Github #35701 [ run ] triggered by Bot. Commit: 72184bf

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35755 [ run ] triggered by Bot. Commit: 72184bf

@tensorrt-cicd
Collaborator

PR_Github #35755 [ run ] completed with state SUCCESS. Commit: 72184bf
/LLM/main/L0_MergeRequest_PR pipeline #27615 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@yizhang-nv yizhang-nv enabled auto-merge (squash) February 13, 2026 02:16
@tensorrt-cicd
Collaborator

PR_Github #35853 [ run ] triggered by Bot. Commit: 72184bf

@tensorrt-cicd
Collaborator

PR_Github #35853 [ run ] completed with state SUCCESS. Commit: 72184bf
/LLM/main/L0_MergeRequest_PR pipeline #27691 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yizhang-nv
Member Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #36000 [ run ] triggered by Bot. Commit: 72184bf

@tensorrt-cicd
Collaborator

PR_Github #36000 [ run ] completed with state SUCCESS. Commit: 72184bf
/LLM/main/L0_MergeRequest_PR pipeline #27808 completed with status: 'SUCCESS'

@yizhang-nv yizhang-nv merged commit 361ff36 into NVIDIA:main Feb 15, 2026
5 checks passed
peihu-nv pushed a commit to peihu-nv/TensorRT-LLM that referenced this pull request Feb 19, 2026
…len esitmation, add flash mla support (NVIDIA#11334)

Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
Signed-off-by: yizhang-nv <187001205+yizhang-nv@users.noreply.github.com>
Signed-off-by: peihu-nv <259410613+peihu-nv@users.noreply.github.com>
@yizhang-nv yizhang-nv deleted the kv-cache-manager-v2-idx-scale branch February 23, 2026 02:09