
[https://nvbugs/5774869][infra] Use 2 GPUs to test skip softmax attention on H100. #10420

Merged
bobboli merged 4 commits into NVIDIA:main from bobboli:skip_softmax_test_h100 on Jan 14, 2026

Conversation


bobboli commented Jan 5, 2026

Summary by CodeRabbit

Release Notes

  • Tests

    • Use 2 H100 GPUs to test the Skip Softmax Attention feature to avoid OOM on CI.
  • Chores

    • Updated test configuration and waivers to reflect new and re-enabled test entries.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
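
For example, here are some invocations composed only from the flags documented above; the stage name reuses the example from this help text, and the pipeline ID is illustrative:

/bot run --disable-fail-fast --gpu-type "H100_PCIe"
/bot run --reuse-test 30610
/bot run --stage-list "A10-PyTorch-1" --debug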

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.
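
For example (the reason text is illustrative):

/bot skip --comment "Docs-only change; no build or test impact"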

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.


bobboli commented Jan 5, 2026

/bot run --disable-fail-fast

bobboli marked this pull request as ready for review on January 5, 2026 17:22

coderabbitai bot commented Jan 5, 2026

📝 Walkthrough

Walkthrough

The changes add a new 2-GPU sparse attention test function for Qwen3-30B-A3B-Instruct-2507 with parametrized target_sparsity values, update test database configurations, and remove waiver entries to re-enable previously skipped tests.

Changes

Cohort / File(s) Summary
Sparse attention test additions
tests/integration/defs/accuracy/test_llm_api_pytorch.py
Added new test method test_skip_softmax_attention_2gpus with parametrized sparsity thresholds for 2-GPU scenarios; updated skip reason string with explicit URL reference (https://nvbugs/5783509) in existing test.
Test list configuration updates
tests/integration/test_lists/test-db/l0_dgx_h100.yml
Removed MPI test entry; added new "Skip softmax attention tests" section with three parametrized test variants (target_sparsity 0.0, 0.5, 0.9).
Waiver and test list cleanup
tests/integration/test_lists/test-db/l0_h100.yml, tests/integration/test_lists/waives.txt
Removed three commented waiver lines for Qwen3 30B tests from test database; removed three skip entries for TestQwen3_30B_A3B_Instruct_2507 from waives file, effectively re-enabling those tests.
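
To make the shape of the new coverage concrete, below is a minimal sketch of such a parametrized 2-GPU test. It assumes the target_sparsity values from the test-list entries (0.0, 0.5, 0.9) and the SkipSoftmaxAttentionConfig keyword shape shown in the review diff further down; the import path, the threshold pairs, and the elided harness call are assumptions, not the merged code.

import pytest

# Import path assumed from the code-graph analysis below
# (SkipSoftmaxAttentionConfig lives in tensorrt_llm/llmapi/llm_args.py).
from tensorrt_llm.llmapi.llm_args import SkipSoftmaxAttentionConfig


@pytest.mark.parametrize(
    "target_sparsity,thr_prefill,thr_decode",
    [
        # Only the sparsity values are documented in the test-db entries;
        # the per-phase threshold pairs here are placeholders.
        pytest.param(0.0, 0.0, 0.0, id="target_sparsity_0.0"),
        pytest.param(0.5, 0.5, 0.5, id="target_sparsity_0.5"),
        pytest.param(0.9, 0.9, 0.9, id="target_sparsity_0.9"),
    ],
)
def test_skip_softmax_attention_2gpus(target_sparsity, thr_prefill,
                                      thr_decode):
    # Per-phase threshold scaling, matching the keyword shape in the diff.
    sparse_attention_config = SkipSoftmaxAttentionConfig(
        threshold_scale_factor={
            "prefill": thr_prefill,
            "decode": thr_decode,
        })
    # The actual 2-GPU LLM construction and accuracy evaluation are elided;
    # see the review diff below for the real harness usage.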

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (⚠️ Warning): The PR description is essentially empty; all required sections (Description, Test Coverage, PR Checklist) lack substantive content beyond template placeholders. Resolution: fill in the Description section with details on why 2-GPU testing is needed, complete the Test Coverage section listing the relevant tests (test_skip_softmax_attention_2gpus variants), and check off applicable PR Checklist items.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The PR title clearly and specifically describes the main change: configuring 2-GPU testing for skip softmax attention on H100, with a proper NVBugs ticket reference and [infra] type designation.
✨ Finishing touches
  • 📝 Generate docstrings


@tensorrt-cicd

PR_Github #30610 [ run ] triggered by Bot. Commit: d18bef2

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

3858-3890: Align 2‑GPU skip‑softmax test with existing Hopper/Blackwell gating

The new 2‑GPU test currently runs unconditionally on whatever GPU pytest is invoked with, unlike the 1‑GPU variant, which is Hopper-only and skips on SM ≥ 100 due to bug 5783509. For consistent behavior across environments (local runs, future test-list entries on Blackwell, etc.), consider mirroring the same guards here.

Proposed changes to mirror arch / bug gating
 class TestQwen3_30B_A3B_Instruct_2507(LlmapiAccuracyTestHarness):
@@
-    @skip_pre_hopper
-    @pytest.mark.parametrize(
+    @skip_pre_hopper
+    @pytest.mark.parametrize(
         "target_sparsity,thr_prefill,thr_decode",
@@
     def test_skip_softmax_attention(self, target_sparsity: float,
                                     thr_prefill: float, thr_decode: float):
@@
         if get_sm_version() >= 100:
             pytest.skip("https://nvbugs/5783509: Bug to be fixed on Blackwell")
@@
-    @pytest.mark.parametrize(
+    @skip_pre_hopper
+    @pytest.mark.parametrize(
         "target_sparsity,thr_prefill,thr_decode",
@@
     def test_skip_softmax_attention_2gpus(self, target_sparsity: float,
                                           thr_prefill: float,
                                           thr_decode: float):
+        if get_sm_version() >= 100:
+            pytest.skip("https://nvbugs/5783509: Bug to be fixed on Blackwell")
+
         sparse_attention_config = SkipSoftmaxAttentionConfig(
             threshold_scale_factor={
                 "prefill": thr_prefill,
                 "decode": thr_decode,
             })
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ea380ff and d18bef2.

📒 Files selected for processing (4)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
  • tests/integration/test_lists/test-db/l0_h100.yml
  • tests/integration/test_lists/waives.txt
💤 Files with no reviewable changes (2)
  • tests/integration/test_lists/waives.txt
  • tests/integration/test_lists/test-db/l0_h100.yml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: some_file.py
Python classes should use PascalCase naming: class SomeClass
Python functions and methods should use snake_case naming: def my_awesome_function():
Python local variables should use snake_case naming: my_variable = ...
Python variable names that start with a number should be prefixed with 'k': k_99th_percentile = ...
Python global variables should use upper snake_case with prefix 'G': G_MY_GLOBAL = ...
Python constants should use upper snake_case naming: MY_CONSTANT = ...
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.{cpp,h,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧠 Learnings (10)
📓 Common learnings
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
📚 Learning: 2025-10-22T06:53:47.017Z
Learnt from: xinhe-nv
Repo: NVIDIA/TensorRT-LLM PR: 8534
File: scripts/format_test_list.py:1-6
Timestamp: 2025-10-22T06:53:47.017Z
Learning: The file `scripts/format_test_list.py` in the TensorRT-LLM repository does not require the NVIDIA Apache-2.0 copyright header.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_h100.yml
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.

Applied to files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧬 Code graph analysis (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)
tensorrt_llm/llmapi/llm_args.py (2)
  • SkipSoftmaxAttentionConfig (311-337)
  • KvCacheConfig (1598-1742)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

3845-3846: Blackwell skip reason update is consistent

Using the explicit nvbugs URL for the Blackwell bug keeps the skip rationale clear and aligned with the existing SM>=100 guard.

tests/integration/test_lists/test-db/l0_dgx_h100.yml (1)

50-53: 2‑GPU skip‑softmax tests are correctly wired into the H100 2‑GPU config

The three test entries match the new parametrized test names and are scoped to exactly 2 H100 GPUs in pre‑merge PyTorch MPI runs, which aligns with the new 2‑GPU skip‑softmax coverage.

If the removal of unittest/llmapi/test_mpi_session.py::test_llmapi_launch_multiple_tasks here was not intentional, please double‑check that this MPI coverage still exists in other test lists or contexts.

@tensorrt-cicd

PR_Github #30610 [ run ] completed with state SUCCESS. Commit: d18bef2
/LLM/main/L0_MergeRequest_PR pipeline #23619 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 6, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #30663 [ run ] triggered by Bot. Commit: d18bef2

@tensorrt-cicd

PR_Github #30663 [ run ] completed with state SUCCESS. Commit: d18bef2
/LLM/main/L0_MergeRequest_PR pipeline #23659 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 6, 2026

/bot run --disable-fail-fast

@tensorrt-cicd

PR_Github #30713 [ run ] triggered by Bot. Commit: 798b819

bobboli enabled auto-merge (squash) on January 6, 2026 11:17
@tensorrt-cicd

PR_Github #30713 [ run ] completed with state SUCCESS. Commit: 798b819
/LLM/main/L0_MergeRequest_PR pipeline #23699 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 6, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #30771 [ run ] triggered by Bot. Commit: 798b819

@tensorrt-cicd

PR_Github #30771 [ run ] completed with state SUCCESS. Commit: 798b819
/LLM/main/L0_MergeRequest_PR pipeline #23754 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 7, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #30815 [ run ] triggered by Bot. Commit: 798b819

@tensorrt-cicd

PR_Github #30815 [ run ] completed with state SUCCESS. Commit: 798b819
/LLM/main/L0_MergeRequest_PR pipeline #23794 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 8, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #30967 [ run ] triggered by Bot. Commit: 798b819

@tensorrt-cicd

PR_Github #30967 [ run ] completed with state SUCCESS. Commit: 798b819
/LLM/main/L0_MergeRequest_PR pipeline #23926 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 8, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31072 [ run ] triggered by Bot. Commit: 798b819

@tensorrt-cicd

PR_Github #31072 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 6 AM PST on 12/29.


bobboli commented Jan 9, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31145 [ run ] triggered by Bot. Commit: 798b819

@tensorrt-cicd

PR_Github #31402 [ run ] triggered by Bot. Commit: 30cb026

@tensorrt-cicd

PR_Github #31402 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 8 AM PST on 1/11.


bobboli commented Jan 12, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31463 [ run ] triggered by Bot. Commit: 30cb026

@tensorrt-cicd

PR_Github #31463 [ run ] completed with state SUCCESS. Commit: 30cb026
/LLM/main/L0_MergeRequest_PR pipeline #24323 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 12, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31495 [ run ] triggered by Bot. Commit: 30cb026

@tensorrt-cicd

PR_Github #31495 [ run ] completed with state SUCCESS. Commit: 30cb026
/LLM/main/L0_MergeRequest_PR pipeline #24347 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 12, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31555 [ run ] triggered by Bot. Commit: 30cb026

@tensorrt-cicd

PR_Github #31555 [ run ] completed with state SUCCESS. Commit: 30cb026
/LLM/main/L0_MergeRequest_PR pipeline #24398 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Commits in the force-push (titles as captured; each signed off by Bo Li <22713281+bobboli@users.noreply.github.com>):
  • Use lower KV cache free GPU mem frac.
  • Remove target_sparsity_0 from CI.
bobboli force-pushed the skip_softmax_test_h100 branch from 30cb026 to b970446 on January 12, 2026 18:40

bobboli commented Jan 12, 2026

/bot run --disable-fail-fast

@tensorrt-cicd

PR_Github #31609 [ run ] triggered by Bot. Commit: b970446

@tensorrt-cicd

PR_Github #31609 [ run ] completed with state FAILURE. Commit: b970446
/LLM/main/L0_MergeRequest_PR pipeline #24445 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 13, 2026

/bot run --disable-fail-fast

@tensorrt-cicd

PR_Github #31689 [ run ] triggered by Bot. Commit: b970446

@tensorrt-cicd

PR_Github #31689 [ run ] completed with state ABORTED. Commit: b970446
LLM/main/L0_MergeRequest_PR #24515 (Blue Ocean) completed with status: ABORTED


bobboli commented Jan 14, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31894 [ run ] triggered by Bot. Commit: b970446

@tensorrt-cicd

PR_Github #31894 [ run ] completed with state SUCCESS. Commit: b970446
/LLM/main/L0_MergeRequest_PR pipeline #24697 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


bobboli commented Jan 14, 2026

/bot run --reuse-test

@tensorrt-cicd

PR_Github #31967 [ run ] triggered by Bot. Commit: b970446

@tensorrt-cicd

PR_Github #31967 [ run ] completed with state SUCCESS. Commit: b970446
/LLM/main/L0_MergeRequest_PR pipeline #24764 completed with status: 'SUCCESS'

bobboli merged commit 582dec5 into NVIDIA:main on Jan 14, 2026
4 of 5 checks passed
The review comment below was left on the following diff lines:

- accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_fp8[latency-torch_compile=True]
- accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_dummy_load_format
# Waive known failures in https://nvbugs/5774869
# - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B_Instruct_2507::test_skip_softmax_attention[target_sparsity_0.0] TIMEOUT (90)
A collaborator commented:

@bobboli Would you fill a new PR to add back the tests?

