[TRTLLM-10487][feat] Add user-provided UUID support for multimodal KV cache identification.#11075

Merged
SimengLiu-nv merged 5 commits into NVIDIA:main from SimengLiu-nv:custom-uuids
Feb 12, 2026
Conversation

@SimengLiu-nv
Collaborator

@SimengLiu-nv SimengLiu-nv commented Jan 28, 2026

This commit enables users to provide custom UUID strings for multimodal inputs (images, videos, etc.) to achieve deterministic KV cache management across sessions. When UUIDs are provided via the multi_modal_uuids parameter, they serve as stable identifiers in place of content-based hashes, allowing for predictable cache lookups without reprocessing content.

Summary by CodeRabbit

Release Notes

  • New Features
    • Added multimodal UUID support for deterministic KV cache identification. Users can now provide optional identifiers for multimodal content items to enable stable cache management and tracking across inference requests. Supports partial UUID specification with automatic fallback to content hashing for items without UUIDs.


Description

Test Coverage

cpp/tests/unit_tests/executor/serializeUtilsTest.cpp
tests/unittest/_torch/multimodal/test_mm_encoder_standalone.py
tests/unittest/bindings/test_executor_bindings.py
tests/unittest/llmapi/test_llm_kv_cache_events.py

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

@SimengLiu-nv SimengLiu-nv requested review from a team as code owners January 28, 2026 23:01
@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #33925 [ run ] triggered by Bot. Commit: 2091730

@coderabbitai
Contributor

coderabbitai bot commented Jan 28, 2026

📝 Walkthrough

Walkthrough

This pull request introduces multimodal UUID support for deterministic cache identification across the TensorRT LLM codebase. It extends request payloads, serialization mechanisms, and the hashing pipeline to propagate optional per-item UUIDs from user input through C++ execution and Python bindings, enabling stable KV cache management for multimodal content.

Changes

Cohort / File(s) / Summary

  • Core Executor & Request Types (cpp/include/tensorrt_llm/executor/executor.h, cpp/include/tensorrt_llm/batch_manager/llmRequest.h): Added an optional multimodal_uuids field to MultimodalInput and LlmRequest. Restructured MmKey from a simple pair into a struct with hash, startOffset, and optional uuid fields, including constructors and an equality operator.
  • Serialization Infrastructure (cpp/include/tensorrt_llm/executor/serialization.h, cpp/tensorrt_llm/executor/serialization.cpp, cpp/tensorrt_llm/executor/serializeUtils.h): Added public serialization methods for MmKey (serializedSize, serialize, deserializeMmKey). Extended the deserialization dispatcher and static assertions to handle MmKey with the UUID field. Updated MultimodalInput serialization to include optional UUIDs.
  • C++ Implementation (cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, cpp/tensorrt_llm/executor/multimodalInput.cpp): Updated generateBlockHashExtraKeys to extract and propagate UUIDs from LlmRequest into MmKey construction. Modified BlockKeyHasher::hash to access hash and startOffset via struct members. Implemented the getMultimodalUuids() getter and constructor initialization for MultimodalInput.
  • Python Bindings (Nanobind) (cpp/tensorrt_llm/nanobind/batch_manager/bindings.cpp, cpp/tensorrt_llm/nanobind/batch_manager/llmRequest.h, cpp/tensorrt_llm/nanobind/batch_manager/llmRequest.cpp, cpp/tensorrt_llm/nanobind/executor/bindings.cpp, cpp/tensorrt_llm/nanobind/executor/request.cpp): Extended LlmRequest constructor bindings to accept a multimodal_uuids parameter. Updated the mm_keys binding to return 3-tuples including the optional UUID. Modified the MultimodalInput binding to expose a multimodal_uuids property and updated __getstate__/__setstate__ to handle 4-element state tuples.
  • Python Executor Layer (tensorrt_llm/executor/base_worker.py, tensorrt_llm/_torch/pyexecutor/_util.py, tensorrt_llm/_torch/pyexecutor/llm_request.py): Threaded multimodal_uuids through MultimodalInput construction and LlmRequest initialization. Updated dummy multimodal context creation to include the UUID field.
  • Python Input API (tensorrt_llm/inputs/data.py, tensorrt_llm/inputs/multimodal.py, tensorrt_llm/inputs/registry.py): Added a multi_modal_uuids field to the TextPrompt and TokensPrompt TypedDicts. Introduced the uuid_to_hash() and int32_to_hexdigest() utility functions. Extended the apply_mm_hashes() signature to accept optional per-modality UUIDs and return a flattened UUID list. Updated MultimodalInput.from_components() to propagate UUIDs.
  • Utilities (tensorrt_llm/_utils.py): Updated _mm_key_to_json() to handle both the 2-tuple (backward-compatible) and 3-tuple formats with an optional UUID, using the UUID as the hash identifier when present.
  • Documentation (docs/source/features/kvcache.md): Added a "Multimodal UUID Support for Cache Identification" section describing UUID-based cache salting, UUID format rules (≤32 bytes zero-padded hex, >32 bytes BLAKE3 hash), and usage examples.
  • C++ Unit Tests (cpp/tests/unit_tests/executor/serializeUtilsTest.cpp): Extended existing BlockKey tests with a UUID nullopt parameter. Added test suites for MmKeyWithUuid, BlockKeyWithExtrasAndUuids, and MultimodalInputWithUuids covering serialization round-trips and edge cases.
  • Python Unit Tests (tests/unittest/bindings/test_executor_bindings.py, tests/unittest/llmapi/test_llm_kv_cache_events.py, tests/unittest/_torch/multimodal/test_mm_encoder_standalone.py): Added tests for MultimodalInput pickling with UUIDs, UUID-to-hash conversion edge cases, apply_mm_hashes with UUID inputs, KV cache event validation with UUIDs, and end-to-end multimodal request flows with various UUID scenarios (partial, mixed media types, long UUIDs).
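The UUID format rules summarized above for docs/source/features/kvcache.md (at most 32 bytes: zero-padded into a hex digest; longer: hashed with BLAKE3) can be sketched as follows. This is an illustrative reimplementation, not the library's uuid_to_hash; because BLAKE3 is a third-party dependency, stdlib blake2b stands in here at the same 32-byte digest size.

```python
import hashlib

def uuid_to_hash_sketch(uuid: str) -> str:
    """Map a user UUID string to a 32-byte (64 hex char) digest.

    Sketch of the rules described in docs/source/features/kvcache.md;
    not the actual tensorrt_llm uuid_to_hash implementation, and
    blake2b stands in for BLAKE3.
    """
    raw = uuid.encode("utf-8")
    if len(raw) <= 32:
        # Short UUIDs are embedded directly, zero-padded to 32 bytes,
        # so distinct short UUIDs can never collide.
        return raw.ljust(32, b"\x00").hex()
    # Longer UUIDs are compressed to 32 bytes with a cryptographic hash.
    return hashlib.blake2b(raw, digest_size=32).hexdigest()

short_digest = uuid_to_hash_sketch("img-0001")  # 64 hex chars, zero-padded
long_digest = uuid_to_hash_sketch("u" * 100)    # 64 hex chars, hashed
```

Either way the result has the same shape as a content hash, which is what lets UUID-derived keys and content-derived keys coexist in one cache.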

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks (1 passed, 2 failed)

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 51.43%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check (❓ Inconclusive): The description explains the feature (UUID support for deterministic KV cache management) but lacks detail on the solution, affected components, and architectural changes for such a significant feature. Resolution: expand the description to cover the affected components (batch_manager, executor, serialization, Python bindings), the behavioral changes, and how UUID-to-hash conversion works for long UUIDs.

✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly identifies the main change, adding user-provided UUID support for multimodal KV cache identification, directly addressing the PR's primary objective.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (11)
tensorrt_llm/_torch/pyexecutor/_util.py (1)

1-1: Add the NVIDIA copyright header with the latest modification year (2026).
This file now changes in 2026 but has no NVIDIA header at all. Please add the standard header.

As per coding guidelines, all TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification.

cpp/include/tensorrt_llm/executor/serialization.h (1)

1-2: Update the NVIDIA copyright year to reflect 2026 changes.

As per coding guidelines, all TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification.

cpp/tensorrt_llm/nanobind/executor/bindings.cpp (1)

1-3: Update the NVIDIA copyright year to include 2026.

As per coding guidelines, all TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification.

cpp/tensorrt_llm/nanobind/batch_manager/llmRequest.cpp (1)

1-3: Update the NVIDIA copyright year to include 2026.

As per coding guidelines, all TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification.

tests/unittest/bindings/test_executor_bindings.py (1)

1-1: Add the NVIDIA copyright header with the latest modification year (2026).
This test file is missing the required NVIDIA header.

As per coding guidelines, all TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification.

tensorrt_llm/_utils.py (1)

1-1: Update the NVIDIA copyright year to include 2026.

As per coding guidelines, all TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification.

tensorrt_llm/inputs/registry.py (1)

662-671: Fix docstring typo (“multinmodal” → “multimodal”).

✏️ Suggested fix
-        Process the multinmodal hashing for media tokens if possible.
+        Process the multimodal hashing for media tokens if possible.
cpp/include/tensorrt_llm/executor/executor.h (1)

1-3: Update the copyright year to reflect the latest modification (2026).

The header still ends at 2024, but this file is now modified in 2026. Please update the year range accordingly.

📝 Suggested update
- * Copyright (c) 2022-2024, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2022-2026, NVIDIA CORPORATION.  All rights reserved.
tensorrt_llm/inputs/multimodal.py (2)

1-1: Add the NVIDIA Apache 2.0 copyright header (latest year 2026).

This file is missing the required TensorRT‑LLM copyright header. Please add the standard NVIDIA header at the top.


584-692: Guard against non‑string UUIDs before hashing.

uuid_to_hash assumes a string; invalid types currently raise an AttributeError. Add an explicit type check for clearer errors.

🛠️ Suggested fix
         for i, item in enumerate(items):
             uuid = modality_uuids[i] if modality_uuids else None
             if uuid is not None:
+                if not isinstance(uuid, str):
+                    raise TypeError(
+                        f"UUID for modality '{modality}' at index {i} must be a string or None, got {type(uuid)}"
+                    )
                 # Use UUID-based hash
                 hashes.append(uuid_to_hash(uuid, hash_lib))
                 all_uuids.append(uuid)  # Store original UUID
tests/unittest/_torch/multimodal/test_mm_encoder_standalone.py (1)

1-7: Add the NVIDIA Apache 2.0 copyright header (latest year 2026).

This test module is missing the required NVIDIA header. Please add the standard TensorRT‑LLM header at the top.

🤖 Fix all issues with AI agents
In `@cpp/include/tensorrt_llm/executor/executor.h`:
- Around lines 52-73: The header defines MmKey, which uses std::array<uint8_t, 32>, but does not include <array>, so the header is not self-contained. Add `#include <array>` near the other standard includes in executor.h, placed before the MmKey definition so the std::array type is available when the struct is parsed (refer to MmKey, hash, and executor.h to locate where to add the include).

In `@cpp/tests/unit_tests/executor/serializeUtilsTest.cpp`:
- Around lines 1149-1151: The initializer for extraKeys is too long; split the std::vector<MmKey> extraKeys initializer across multiple lines so each MmKey element is on its own line (or wrap the inner std::string/std::nullopt entries) to keep lines under 120 characters; update the line creating extraKeys (references: MmKey, extraKeys, h1, h2, h3, SizeType32) so each element is clearly separated and fits within the limit.
- Around line 1129-1131: The long UUID literal in the MmKey construction
(variable keyLongUuid passed to testSerializeDeserialize) exceeds the 120-char
line limit; break the string across lines so the source line stays under 120
chars (e.g., use a named const std::string uuid split across literals or
adjacent string literal concatenation and then pass that uuid into MmKey{hash,
SizeType32{255}, uuid}). Ensure the identifier names MmKey, keyLongUuid, and
testSerializeDeserialize are unchanged.
- Around line 1236-1239: The long UUID literal in the test vector longUuids in
serializeUtilsTest.cpp exceeds the 120-character line length; split the long
string into multiple shorter adjacent string literals or concatenate smaller
string parts to keep each source line under 120 chars (ensure the resulting
std::string value remains identical) and update the initializer for longUuids
accordingly so the test semantics (the long UUID value and the "short" entry)
are unchanged.

In `@tests/unittest/llmapi/test_llm_kv_cache_events.py`:
- Around line 425-440: The pytest.raises call in
test_apply_mm_hashes_uuid_length_mismatch uses a normal string for the regex
match which triggers RUF043; change the match argument to a raw string literal
(prefix with r) so the regex metacharacters like .* are interpreted correctly,
i.e., update the pytest.raises(..., match="UUID list length.*doesn't match.*data
items") to use a raw string r"UUID list length.*doesn't match.*data items" in
the test_apply_mm_hashes_uuid_length_mismatch that calls apply_mm_hashes.
🧹 Nitpick comments (2)
cpp/include/tensorrt_llm/executor/serialization.h (1)

352-355: Use Doxygen //! for the new MmKey section in the header.
Please convert the new section comment to Doxygen style (and add brief descriptions if needed) to match header documentation rules.

As per coding guidelines, follow Doxygen rules for documenting new C++ class interfaces and function prototypes.

cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (1)

148-156: Consider declaring uuid as const after conditional assignment.

The variable uuid is assigned once and never modified afterward. While the current code is correct, a slightly cleaner pattern would use a ternary or immediately-invoked lambda to enable const:

♻️ Optional refactor suggestion
-            std::optional<std::string> uuid = std::nullopt;
-            if (multimodalUuids && *multimodalUuids && i < (*multimodalUuids)->size())
-            {
-                uuid = (*(*multimodalUuids))[i];
-            }
+            auto const uuid = [&]() -> std::optional<std::string>
+            {
+                if (multimodalUuids && *multimodalUuids && i < (*multimodalUuids)->size())
+                {
+                    return (*(*multimodalUuids))[i];
+                }
+                return std::nullopt;
+            }();

@chang-l chang-l requested review from 2ez4bz and pcastonguay January 29, 2026 06:20
@tensorrt-cicd
Collaborator

PR_Github #33925 [ run ] completed with state SUCCESS. Commit: 2091730
/LLM/main/L0_MergeRequest_PR pipeline #26165 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Collaborator

@eopXD eopXD left a comment

Overall looks good. Do we guarantee UUIDs are unique? How do we deal with collisions?

Member

@kaiyux kaiyux left a comment

Approving on behalf of trt-llm-doc-owners

Collaborator

@2ez4bz 2ez4bz left a comment

Reviewed

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tburt-nv
Collaborator

tburt-nv commented Feb 2, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #34508 [ run ] triggered by Bot. Commit: 3306c17

@tensorrt-cicd
Collaborator

PR_Github #35028 [ run ] completed with state SUCCESS. Commit: db094a6
/LLM/main/L0_MergeRequest_PR pipeline #27031 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35133 [ run ] triggered by Bot. Commit: b857e07

@tensorrt-cicd
Collaborator

PR_Github #35133 [ run ] completed with state SUCCESS. Commit: b857e07
/LLM/main/L0_MergeRequest_PR pipeline #27126 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35175 [ run ] triggered by Bot. Commit: 8d38f01

@tensorrt-cicd
Collaborator

PR_Github #35175 [ run ] completed with state SUCCESS. Commit: 8d38f01
/LLM/main/L0_MergeRequest_PR pipeline #27164 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@SimengLiu-nv
Collaborator Author

/bot skip --comment "CI failures are all known bugs: 5880261, 5879614 and 5863877."

@tensorrt-cicd
Collaborator

PR_Github #35545 [ skip ] triggered by Bot. Commit: 5ad0359

@tensorrt-cicd
Collaborator

PR_Github #35545 [ skip ] completed with state SUCCESS. Commit: 5ad0359
Release Check Pipeline #3179 failed

@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35551 [ run ] triggered by Bot. Commit: 80a8e99

@tensorrt-cicd
Collaborator

PR_Github #35551 [ run ] completed with state SUCCESS. Commit: 80a8e99
/LLM/main/L0_MergeRequest_PR pipeline #27455 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

… cache identification

This commit enables users to provide custom UUID strings for multimodal inputs (images, videos, etc.) to achieve deterministic KV cache management across sessions. When UUIDs are provided via the `multi_modal_uuids` parameter, they serve as stable identifiers in place of content-based hashes, allowing for predictable cache lookups without reprocessing content.

Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
…tent and other changes.

Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
@SimengLiu-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35657 [ run ] triggered by Bot. Commit: 63d27d5

@tensorrt-cicd
Collaborator

PR_Github #35657 [ run ] completed with state SUCCESS. Commit: 63d27d5
/LLM/main/L0_MergeRequest_PR pipeline #27535 completed with status: 'SUCCESS'

@SimengLiu-nv SimengLiu-nv merged commit 1208553 into NVIDIA:main Feb 12, 2026
5 checks passed
@2ez4bz
Collaborator

2ez4bz commented Feb 12, 2026

first try

ekou24 pushed a commit to ekou24/TensorRT-LLM that referenced this pull request Feb 16, 2026
… cache identification. (NVIDIA#11075)

Signed-off-by: SimengLiu-nv <simengl@nvidia.com>

9 participants