
[None][fix] Reduce host overhead for unified nvfp4 gemm tuning path. #10503

Merged
hyukn merged 1 commit into NVIDIA:main from hyukn:fix/fp4_gemm_tuning_host_overhead
Jan 14, 2026

Conversation

@hyukn
Collaborator

@hyukn hyukn commented Jan 7, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added conditional support for CuteDSL-based backend optimization when CUTLASS DSL libraries are available.
  • Refactor

    • Updated tactic representation in NVFP4 GEMM operations. Tactic specifications now include explicit backend identifiers paired with sub-tactic selections for improved granularity.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the ordinary L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
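
For example, a typical invocation that narrows a pre-merge run to specific stages and GPU types might look like the following (the stage and GPU names are the illustrative ones used above; see the stage-mapping helper for real stage names):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "A30"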

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break top of tree.

@hyukn hyukn requested review from Wong4j and yuxianq January 7, 2026 12:41
@hyukn hyukn requested a review from a team as a code owner January 7, 2026 12:41
@hyukn hyukn requested a review from liji-nv January 7, 2026 12:41
@coderabbitai
Contributor

coderabbitai bot commented Jan 7, 2026

📝 Walkthrough


Refactored NVFP4GemmUnifiedRunner to represent tactics as tuples of (backend_name, sub_tactic) instead of simple integers. Updated get_valid_tactics() return type and forward() dispatch logic to work with the new tuple-based representation. Added conditional support for CuteDSL backend.

Changes

Cohort: NVFP4GemmUnifiedRunner Tactic Representation
File(s): tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
Summary: Changed tactic representation from List[int] to List[Tuple[str, int]] in get_valid_tactics(). Tactics now surface as (backend_name, sub_tactic) tuples across CUDA Core, Cutlass, cuBLASLt, and CuteDSL backends. Updated forward() method to accept and dispatch composite Tuple[str, int] tactics instead of string comparisons. Added conditional import and explicit handling for CuteDSLNVFP4BlackwellLinear when available. Adjusted fallback resolution and error messaging for new tuple structure.
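
As a rough illustration of this tuple-based scheme, the short, self-contained Python sketch below mirrors the shape of the new representation. The backend names follow the summary above, but the per-backend sub-tactic counts, function bodies, and dispatch stub are invented for illustration and are not the actual TensorRT-LLM implementation.

from typing import Dict, List, Tuple

# Hypothetical per-backend sub-tactic counts; the real runner queries each backend
# and only exposes the CuteDSL entry when the CUTLASS DSL libraries are available.
BACKEND_SUB_TACTICS: Dict[str, int] = {
    "cuda_core": 1,
    "cutlass": 3,
    "cublaslt": 2,
    "cute_dsl": 2,
}

def get_valid_tactics() -> List[Tuple[str, int]]:
    """Return tactics as (backend_name, sub_tactic) tuples instead of bare ints."""
    tactics: List[Tuple[str, int]] = []
    for backend, count in BACKEND_SUB_TACTICS.items():
        tactics.extend((backend, sub) for sub in range(count))
    return tactics

def forward(tactic: Tuple[str, int]) -> str:
    """Dispatch on the backend name; the sub-tactic is passed through to that backend."""
    backend, sub_tactic = tactic
    return f"dispatch to {backend} runner with sub_tactic={sub_tactic}"

if __name__ == "__main__":
    for t in get_valid_tactics():
        print(forward(t))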

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ❌ 3
❌ Failed checks (2 warnings, 1 inconclusive)
Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage.
Description check | ⚠️ Warning | The PR description is empty except for the template. Required sections (Description and Test Coverage) are completely unfilled, and the PR Checklist is not properly addressed. | Fill in the Description section explaining what changes were made and why they reduce host overhead. Add a Test Coverage section listing relevant tests. Complete the PR Checklist items appropriately.
Title check | ❓ Inconclusive | The title describes a real change (reducing host overhead) but is overly vague about the implementation approach (refactoring tactics from int to tuple-based backend/subtactic pairs) and doesn't convey the main technical change. | Consider a more specific title that reflects the core change, such as: '[None][fix] Refactor nvfp4 gemm tactics to use backend/subtactic tuples to reduce host overhead'.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)

844-860: Bug: Incorrect membership check for tuple-based tactics.

The check "cutlass" in valid_tactics will always fail because valid_tactics is now a List[Tuple[str, int]] (e.g., [("cuda_core", 0), ("cutlass", 1)]). The string "cutlass" will never match a tuple.

This causes the fallback to always use valid_tactics[0] instead of preferring the cutlass backend.
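
A quick illustration of the mismatch, using a hypothetical two-element tactic list (not the real output of get_valid_tactics):

valid_tactics = [("cuda_core", 0), ("cutlass", 1)]
print("cutlass" in valid_tactics)                     # False: a str never equals a (str, int) tuple
print(any(t[0] == "cutlass" for t in valid_tactics))  # True: compare against the backend field instead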

🐛 Proposed fix
         if tactic == -1:
             # Get valid tactics and use first available
             from tensorrt_llm._torch.autotuner import OptimizationProfile
             valid_tactics = self.get_valid_tactics(inputs,
                                                    OptimizationProfile())
             if valid_tactics:
                 # Prefer cutlass as fallback if available, otherwise use first valid tactic
-                tactic = ["cutlass", -1] if "cutlass" in valid_tactics else [
-                    valid_tactics[0], -1
-                ]
+                cutlass_tactics = [t for t in valid_tactics if t[0] == "cutlass"]
+                if cutlass_tactics:
+                    tactic = (cutlass_tactics[0][0], -1)  # Use cutlass with fallback sub_tactic
+                else:
+                    tactic = (valid_tactics[0][0], -1)  # Use first available backend with fallback
             else:

Additionally, the fallback tactic should be a tuple (backend, sub_tactic), not a list [backend, sub_tactic], to maintain type consistency with the rest of the code.

🤖 Fix all issues with AI agents
In @tensorrt_llm/_torch/custom_ops/torch_custom_ops.py:
- Around line 836-838: The parameter 'tactic' is declared as Union[Tuple, int]
but its default is the string "cutlass"; update the signature for consistency by
either adding str to the type hint (Union[Tuple, int, str]) if string backends
are intentional, or change the default to a valid fallback like -1 (i.e.,
tactic: Union[Tuple, int] = -1) so the declared type matches the default; adjust
any downstream usages or docstrings of the 'tactic' parameter accordingly.
- Around line 863-870: The computed variable act_sf_unswizzled is dead
code—remove the torch.ops.trtllm.block_scale_interleave_reverse call and the
act_sf_unswizzled assignment in the branch where backend == "cuda_core" since
CudaCoreNVFP4Runner.forward already unswizzles internally; keep the m and
act_fp4 references only if used elsewhere, and simply return
CudaCoreNVFP4Runner(self.to_userbuffers, self.output_dtype)(inputs, sub_tactic)
without computing act_sf_unswizzled.
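
A minimal runnable sketch of what these two suggestions could look like together. The runner call is replaced by a plain string stub standing in for the real CudaCoreNVFP4Runner dispatch, so this shows the suggested shape only, not the actual fix:

from typing import Tuple, Union

def forward(inputs, tactic: Union[Tuple[str, int], int] = -1) -> str:
    # A default of -1 keeps the annotation and the fallback value consistent,
    # instead of defaulting to the bare string "cutlass".
    if tactic == -1:
        tactic = ("cutlass", -1)  # resolve the fallback to a concrete (backend, sub_tactic) tuple
    backend, sub_tactic = tactic
    if backend == "cuda_core":
        # No block_scale_interleave_reverse call here: per the comment above, the
        # CUDA-core runner is expected to unswizzle the activation scale factors itself.
        return f"cuda_core runner on {inputs} with sub_tactic={sub_tactic}"
    return f"{backend} runner on {inputs} with sub_tactic={sub_tactic}"

print(forward("dummy_inputs"))                    # falls back to ('cutlass', -1)
print(forward("dummy_inputs", ("cuda_core", 0)))  # explicit backend dispatch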
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b130d58 and dfc104e.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., some_file.py)
Python classes should use PascalCase (e.g., class SomeClass)
Python functions and methods should use snake_case (e.g., def my_awesome_function():)
Python local variables should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL)
Python constants should use upper snake_case (e.g., MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format """<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic

Files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
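
A small invented snippet that illustrates several of these naming and documentation conventions at once (all names are made up for the example):

G_REGISTERED_RUNNERS = []  # externally visible global: upper snake_case with the G prefix
MAX_SUB_TACTICS = 8        # constant: upper snake_case

class GemmTacticPicker:  # class name: PascalCase
    """Picks a (backend, sub_tactic) pair from a list of candidates.

    Attributes:
        preferred_backend (str): Backend name that is tried first.
    """

    def __init__(self, preferred_backend: str):
        # Externally visible members are initialized in the constructor.
        self.preferred_backend = preferred_backend

    def pick_first_valid(self, tactics):  # method name: snake_case
        """Returns the first tactic for the preferred backend, or the first tactic overall."""
        preferred_tactics = [t for t in tactics if t[0] == self.preferred_backend]
        if preferred_tactics:
            return preferred_tactics[0]
        return tactics[0] if tactics else None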
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification

Files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
🧠 Learnings (6)
📚 Learning: 2025-11-14T11:22:03.729Z
Learnt from: nzmora-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 9163
File: tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py:107-113
Timestamp: 2025-11-14T11:22:03.729Z
Learning: In TensorRT-LLM AutoDeploy custom ops, when adding hardware capability checks to select between kernel implementations (e.g., cuBLAS vs. CUDA kernel), use descriptive variable names that identify the specific GPU architectures or families being targeted (e.g., `is_blackwell_geforce_or_ada`) rather than generic names like `enable_cuda_core`. This makes it clear that the code is selecting an implementation path based on hardware capabilities, not enabling/disabling hardware features.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-22T01:54:35.850Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h:999-1000
Timestamp: 2025-08-22T01:54:35.850Z
Learning: The `internal_cutlass_kernels` directory in TensorRT-LLM is a mirror of an internal NVIDIA repository and maintains its own implementation and API that may diverge from the public `cutlass_kernels` version. API inconsistencies between these two directories are intentional and by design, not bugs to be fixed.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-10-20T16:54:09.824Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/custom_ops/rms_norm.py:6-6
Timestamp: 2025-10-20T16:54:09.824Z
Learning: In tensorrt_llm/_torch/auto_deploy/custom_ops/rms_norm.py, the import `from ...modules.mamba.layernorm_gated import _layer_norm_fwd` is correct and should not be changed to modules.fla.layernorm_gated. The _layer_norm_fwd function exists in both modules/mamba/layernorm_gated.py and modules/fla/layernorm_gated.py, but the mamba version is the intended implementation for this use case.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-12-12T10:07:31.564Z
Learnt from: lirundong
Repo: NVIDIA/TensorRT-LLM PR: 9725
File: tensorrt_llm/_torch/custom_ops/cuda_tile_custom_ops.py:110-178
Timestamp: 2025-12-12T10:07:31.564Z
Learning: In PyTorch custom operators registered with torch.library.custom_op, mutable operators that return None and specify mutates_args do not require a register_fake decorator. Mutation tracking is handled automatically without needing a FakeTensor kernel. This applies to Python custom op definitions in tensorrt_llm/_torch/custom_ops that use mutates_args and return None; verify you are not relying on register_fake in these cases.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
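
A minimal sketch of the pattern this learning describes, assuming a PyTorch version (2.4+) where torch.library.custom_op is available; the op name and body are invented for illustration:

import torch

# A mutable custom op: it returns None and declares the argument it mutates,
# so mutation tracking works without a register_fake kernel.
@torch.library.custom_op("demo::inplace_scale", mutates_args=("x",))
def inplace_scale(x: torch.Tensor, scale: float) -> None:
    x.mul_(scale)

t = torch.ones(4)
torch.ops.demo.inplace_scale(t, 2.0)
print(t)  # tensor([2., 2., 2., 2.])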
🧬 Code graph analysis (1)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (6)
  • CuteDSLNVFP4BlackwellLinear (334-739)
  • get_valid_tactics (372-493)
  • get_valid_tactics (858-893)
  • get_valid_tactics (1148-1188)
  • get_valid_tactics (1536-1571)
  • get_valid_tactics (1852-1892)
🪛 Ruff (0.14.10)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py

734-734: Unused method argument: kwargs

(ARG002)


867-867: Local variable act_sf_unswizzled is assigned to but never used

Remove assignment to unused variable act_sf_unswizzled

(F841)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (4)

26-28: LGTM!

The conditional import is correctly guarded behind IS_CUTLASS_DSL_AVAILABLE, preventing ImportError when CuteDSL is not compiled/linked.


755-784: LGTM!

The tactic tuple construction correctly wraps each backend's sub-tactics with the backend name, enabling unified dispatch. The pattern is consistent across all backends (cuda_core, cutlass, cublaslt).


812-813: LGTM!

CuteDSL tactic construction follows the same tuple pattern as other backends.


871-881: No action required. The CuteDSLNVFP4BlackwellLinear constructor has to_userbuffers: bool = False as an optional parameter, so both call patterns are valid: instantiation with only output_dtype (line 805) uses the default value, and explicit provision of to_userbuffers (line 879) is also supported. This is standard Python design and will not cause failures.

Likely an incorrect or invalid review comment.
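
For illustration only, a tiny stand-in class that mirrors the two parameters discussed above (the real CuteDSLNVFP4BlackwellLinear lives in cute_dsl_custom_ops.py):

class CuteDSLLinearStub:
    """Stand-in mirroring the optional to_userbuffers parameter described above."""

    def __init__(self, output_dtype, to_userbuffers: bool = False):
        self.output_dtype = output_dtype
        self.to_userbuffers = to_userbuffers

a = CuteDSLLinearStub("bfloat16")                       # output_dtype only, default to_userbuffers=False
b = CuteDSLLinearStub("bfloat16", to_userbuffers=True)  # explicit opt-in
print(a.to_userbuffers, b.to_userbuffers)               # False True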

@hyukn
Collaborator Author

hyukn commented Jan 8, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30971 [ run ] triggered by Bot. Commit: 7746ffe

@hyukn
Collaborator Author

hyukn commented Jan 8, 2026

/bot run --disable-fail-fast

@hyukn
Collaborator Author

hyukn commented Jan 8, 2026

/bot run --disable-fail-fast

@hyukn hyukn force-pushed the fix/fp4_gemm_tuning_host_overhead branch from 8842a90 to 63ebe75 Compare January 8, 2026 03:38
@hyukn
Collaborator Author

hyukn commented Jan 8, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30980 [ run ] triggered by Bot. Commit: 63ebe75

@tensorrt-cicd
Collaborator

PR_Github #30971 [ run ] completed with state ABORTED. Commit: 7746ffe

@tensorrt-cicd
Collaborator

PR_Github #30983 [ run ] triggered by Bot. Commit: 63ebe75

@tensorrt-cicd
Collaborator

PR_Github #30980 [ run ] completed with state ABORTED. Commit: 63ebe75

@tensorrt-cicd
Collaborator

PR_Github #30984 [ run ] triggered by Bot. Commit: 63ebe75

@tensorrt-cicd
Collaborator

PR_Github #30983 [ run ] completed with state ABORTED. Commit: 63ebe75

@hyukn hyukn force-pushed the fix/fp4_gemm_tuning_host_overhead branch from 63ebe75 to a549e35 Compare January 8, 2026 08:22
@hyukn
Collaborator Author

hyukn commented Jan 8, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31035 [ run ] triggered by Bot. Commit: a549e35

@tensorrt-cicd
Collaborator

PR_Github #31035 [ run ] completed with state SUCCESS. Commit: a549e35
/LLM/main/L0_MergeRequest_PR pipeline #23979 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@hyukn
Collaborator Author

hyukn commented Jan 9, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31157 [ run ] triggered by Bot. Commit: a549e35

@tensorrt-cicd
Collaborator

PR_Github #31157 [ run ] completed with state SUCCESS. Commit: a549e35
/LLM/main/L0_MergeRequest_PR pipeline #24070 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@hyukn
Collaborator Author

hyukn commented Jan 12, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31448 [ run ] triggered by Bot. Commit: a549e35

@tensorrt-cicd
Collaborator

PR_Github #31448 [ run ] completed with state SUCCESS. Commit: a549e35
/LLM/main/L0_MergeRequest_PR pipeline #24309 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
@hyukn hyukn force-pushed the fix/fp4_gemm_tuning_host_overhead branch from a549e35 to 9550c96 Compare January 12, 2026 07:54
@hyukn
Collaborator Author

hyukn commented Jan 12, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31518 [ run ] triggered by Bot. Commit: 9550c96

@tensorrt-cicd
Collaborator

PR_Github #31518 [ run ] completed with state SUCCESS. Commit: 9550c96
/LLM/main/L0_MergeRequest_PR pipeline #24367 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@hyukn
Collaborator Author

hyukn commented Jan 13, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31644 [ run ] triggered by Bot. Commit: 9550c96

@tensorrt-cicd
Collaborator

PR_Github #31644 [ run ] completed with state SUCCESS. Commit: 9550c96
/LLM/main/L0_MergeRequest_PR pipeline #24476 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@hyukn
Collaborator Author

hyukn commented Jan 13, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31719 [ run ] triggered by Bot. Commit: 9550c96

@tensorrt-cicd
Collaborator

PR_Github #31719 [ run ] completed with state SUCCESS. Commit: 9550c96
/LLM/main/L0_MergeRequest_PR pipeline #24542 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@hyukn
Collaborator Author

hyukn commented Jan 13, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31766 [ run ] triggered by Bot. Commit: 9550c96

@tensorrt-cicd
Collaborator

PR_Github #31766 [ run ] completed with state SUCCESS. Commit: 9550c96
/LLM/main/L0_MergeRequest_PR pipeline #24582 completed with status: 'SUCCESS'

@hyukn hyukn merged commit 15281de into NVIDIA:main Jan 14, 2026
5 checks passed
Superjomn pushed a commit to Superjomn/TensorRT-LLM that referenced this pull request Jan 14, 2026
…VIDIA#10503)

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>