
[https://nvbugs/5820874][fix] Adjust deepgemm tuning buckets to cover larger num_tokens's scope#11259

Merged
chenfeiz0326 merged 10 commits into NVIDIA:main from chenfeiz0326:chenfeiz/add-deepgemm-tuning-params-for-flexible-jit
Feb 5, 2026

Conversation


@chenfeiz0326 chenfeiz0326 commented Feb 4, 2026

Summary by CodeRabbit

  • Refactor
    • In the autotuner, the SwapAB deepgemm kernel now only uses tactic=0.
    • Removed tune_max_num_tokens from DeepGemm SwapAB; curr_max_num_tokens is used as the default tune_max_num_tokens (still clamped to [4096, 8192]), so the tuning buckets are more flexible.
    • Enhanced test_perf_sanity.py to display server failure errors directly in the Jenkins pipeline, allowing developers to view them immediately.
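
The sentinel-and-clamp behavior described above can be sketched as follows. This is an illustrative sketch based only on the summary (the function name `resolve_tune_max_num_tokens` is hypothetical; the clamp bounds [4096, 8192] and the ≤ 0 sentinel are taken from this PR's description):

```python
def resolve_tune_max_num_tokens(curr_max_num_tokens: int,
                                tune_max_num_tokens: int = -1) -> int:
    """Illustrative sketch of the bucket-bound selection described above.

    tune_max_num_tokens <= 0 is a sentinel for "not set": fall back to
    curr_max_num_tokens, clamped into [4096, 8192].
    """
    if tune_max_num_tokens > 0:
        # An explicit positive value keeps the old fixed-bound behavior.
        return tune_max_num_tokens
    # Default: follow the runtime's current max token count, clamped.
    return max(4096, min(curr_max_num_tokens, 8192))
```

With this scheme, callers that never set the parameter automatically get a tuning bound matched to their actual workload instead of a hardcoded 4096.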

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 chenfeiz0326 requested a review from hyukn February 4, 2026 05:29
@chenfeiz0326 chenfeiz0326 self-assigned this Feb 4, 2026
@chenfeiz0326 chenfeiz0326 requested a review from a team as a code owner February 4, 2026 05:29

coderabbitai bot commented Feb 4, 2026

📝 Walkthrough

Modified the tuning strategy for fp8SwapABGemmRunner and fp8_swap_ab_gemm: introduced a default bucket generator function, simplified the tactic selection logic, and added sentinel-based configuration via tune_max_num_tokens to toggle between explicit and default tuning behavior.

Changes

Cohort: Tuning Configuration Strategy
File(s): tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
Summary: Added a default_deep_gemm_tuning_buckets() utility function for fallback bucket generation. Replaced the hardcoded TuningConfig with a generic empty initialization. Implemented sentinel-based logic: when tune_max_num_tokens > 0, explicit tuning is applied; when ≤ 0, default autotuning buckets are used. Simplified get_valid_tactics() to always return [0], and the forward path to unconditionally use deep_gemm.fp8_gemm_nt. Updated the fp8_swap_ab_gemm signature default for tune_max_num_tokens from 4096 to -1.
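
The walkthrough names the fallback generator but not its spacing. Under the assumption that it produces geometrically spaced num_tokens buckets (the actual spacing in torch_custom_ops.py may differ), a minimal sketch might look like:

```python
def default_deep_gemm_tuning_buckets(max_num_tokens: int) -> list[int]:
    """Hypothetical sketch of a fallback tuning-bucket generator.

    Emits power-of-two num_tokens buckets up to max_num_tokens, always
    including max_num_tokens itself so the largest shape is covered.
    """
    buckets, n = [], 8
    while n < max_num_tokens:
        buckets.append(n)
        n *= 2  # geometric spacing keeps the bucket count small
    buckets.append(max_num_tokens)
    return buckets
```

With the sentinel logic above, a call with tune_max_num_tokens ≤ 0 would tune over something like default_deep_gemm_tuning_buckets(8192), while a positive value pins the explicit bound instead.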

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check ⚠️ Warning — The PR description is incomplete and lacks critical details: it contains only a CodeRabbit summary, without substantive information in the required Description and Test Coverage sections. Resolution: add a clear explanation of the issue being fixed and the solution implemented, and list the specific tests that validate these changes.

✅ Passed checks (1 passed)
  • Title check ✅ Passed — The PR title clearly and specifically describes the main change: adjusting deepgemm tuning buckets to cover a larger scope of num_tokens, which aligns with the code changes that introduce flexible tuning bucket generation.


Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 (Collaborator, Author)

/bot run --disable-fail-fast --stage-list "DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3"

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 (Collaborator, Author)

/bot run --disable-fail-fast --stage-list "DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3"

@tensorrt-cicd (Collaborator)

PR_Github #34732 [ run ] triggered by Bot. Commit: ac8d2ec

@tensorrt-cicd (Collaborator)

PR_Github #34732 [ run ] completed with state SUCCESS. Commit: ac8d2ec
/LLM/main/L0_MergeRequest_PR pipeline #26797 (Partly Tested) completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@chenfeiz0326 (Collaborator, Author)

/bot run --disable-fail-fast --stage-list "DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3"

@tensorrt-cicd (Collaborator)

PR_Github #34786 [ run ] triggered by Bot. Commit: ac8d2ec

@tensorrt-cicd (Collaborator)

PR_Github #34786 [ run ] completed with state DISABLED
CI server is currently disabled for unplanned maintenance. Estimated completion time: 8 AM PST on 2/4.

@tensorrt-cicd (Collaborator)

PR_Github #34815 [ run ] triggered by Bot. Commit: ac8d2ec

@tensorrt-cicd (Collaborator)

PR_Github #34815 [ run ] completed with state SUCCESS. Commit: ac8d2ec
/LLM/main/L0_MergeRequest_PR pipeline #26855 (Partly Tested) completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 (Collaborator, Author)

/bot run --disable-fail-fast --stage-list "DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3"

@tensorrt-cicd (Collaborator)

PR_Github #34901 [ run ] triggered by Bot. Commit: deecf48

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 (Collaborator, Author)

/bot run --disable-fail-fast --stage-list "DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3"

@tensorrt-cicd (Collaborator)

PR_Github #34901 [ run ] completed with state SUCCESS. Commit: deecf48
/LLM/main/L0_MergeRequest_PR pipeline #26922 (Partly Tested) completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@tensorrt-cicd (Collaborator)

PR_Github #34905 [ run ] triggered by Bot. Commit: 13a5d5a

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>

@Barry-Delaney Barry-Delaney left a comment


LGTM. Thanks for the fix!

@tensorrt-cicd (Collaborator)

PR_Github #34905 [ run ] completed with state SUCCESS. Commit: 13a5d5a
/LLM/main/L0_MergeRequest_PR pipeline #26926 (Partly Tested) completed with status: 'SUCCESS'

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #34946 [ run ] triggered by Bot. Commit: fbfd06a

@tensorrt-cicd (Collaborator)

PR_Github #34946 [ run ] completed with state SUCCESS. Commit: fbfd06a
/LLM/main/L0_MergeRequest_PR pipeline #26960 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@chenfeiz0326 chenfeiz0326 merged commit eae480b into NVIDIA:main Feb 5, 2026
7 checks passed
SchumiDing pushed a commit to SchumiDing/TensorRT-LLM that referenced this pull request Feb 6, 2026
… larger num_tokens's scope (NVIDIA#11259)

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Feb 13, 2026
… larger num_tokens's scope (NVIDIA#11259)

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
inciaf pushed a commit to inciaf/trtllm-energy-monitoring that referenced this pull request Feb 18, 2026
… larger num_tokens's scope (NVIDIA#11259)

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Ahmet Inci <ainci@nvidia.com>

Labels: none yet
Projects: none yet
4 participants