
[None][fix] Fix enable_alltoall passed to CutlassFusedMoE #11016

Merged: syuoni merged 3 commits into NVIDIA:main from syuoni:fix-config-moe on Jan 29, 2026.

Conversation

@syuoni (Collaborator) commented Jan 27, 2026

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
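
For example, a typical invocation that disables fail-fast and runs only one test stage (using the stage name from the --stage-list example above) could be:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"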

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous; skipping tests without care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous; reusing results without care and validation can break the top of tree.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added runtime parameter to control alltoall communication behavior for optimized performance configurations.
  • Performance Improvements

    • Enhanced layer-wise benchmarking utilities with improved token counting and resource allocation logic.
  • Refactor

    • Streamlined internal communication strategy detection for simplified and more predictable behavior.


@syuoni requested review from xxi-nv and yuantailing on January 27, 2026 03:23
@syuoni requested review from a team as code owners on January 27, 2026 03:23
@syuoni requested review from QiJune and kaiyux on January 27, 2026 03:23
@syuoni (Collaborator, Author) commented Jan 27, 2026

/bot run --disable-fail-fast

@coderabbitai (Contributor) bot commented Jan 27, 2026

📝 Walkthrough

The changes update benchmark configuration logic and refactor the MOE alltoall usage detection mechanism. The benchmark runner now pre-computes context sequence length and batch size parameters for the prefill path, while MOE modules replace internal alltoall detection with an explicit enable_alltoall flag that can be controlled at invocation time.

Changes

Cohort / File(s) and Summary

  • Benchmark Configuration
    examples/layer_wise_benchmarks/run.py
    Modified the context sequence length and batch size computation for the GEN run type: ctx_seq_len_q is pre-computed as the maximum of seq_len_kv_cache_list, and ctx_batch_size as the minimum of max_batch_size and floor(20480 / ctx_seq_len_q); the pre-computed max_num_tokens is passed to the prefill Runner instead of being derived per run (see the sketch after this table).

  • MOE Alltoall Flag Refactoring
    tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py, tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
    Replaced the _is_using_alltoall() method with an explicit enable_alltoall flag; updated the determine_communication_method logic to reference the flag; modified _forward_multiple_chunks and _get_backend_kwargs to use enable_alltoall for chunking and streaming decisions; added an enable_alltoall parameter to CutlassFusedMoE.run_moe() for per-call override.
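
To make the new prefill sizing concrete, here is a minimal Python sketch of the computation described above. The input values are made up, and the assumption that max_num_tokens is simply ctx_batch_size * ctx_seq_len_q is illustrative, not taken from run.py:

```python
import math

# Hypothetical inputs standing in for the script's real configuration.
seq_len_kv_cache_list = [1024, 2048, 4096]
max_batch_size = 32

# Per the walkthrough: the context (prefill) sequence length is the largest
# KV-cache sequence length, and the context batch size is capped so that
# the total token count stays within a 20480-token budget.
ctx_seq_len_q = max(seq_len_kv_cache_list)
ctx_batch_size = min(max_batch_size, math.floor(20480 / ctx_seq_len_q))

# Assumed derivation of the token budget handed to the prefill Runner
# up front, rather than being recomputed on every run.
max_num_tokens = ctx_batch_size * ctx_seq_len_q
print(ctx_seq_len_q, ctx_batch_size, max_num_tokens)  # 4096 5 20480
```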

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

🚥 Pre-merge checks: 2 passed, 1 failed

❌ Failed checks (1 warning)

  • Description check (⚠️ Warning): The PR description is entirely empty, containing only template placeholders; the Description and Test Coverage sections give no actual content explaining the changes or the test strategy. Resolution: fill in the Description section explaining what issue is being fixed and why, and the Test Coverage section listing the relevant tests that validate these changes.

✅ Passed checks (2 passed)

  • Title check (✅ Passed): The title clearly describes the main fix, addressing how enable_alltoall is passed to CutlassFusedMoE, which aligns with the code changes.

  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)

686-696: forward_chunk should pass enable_alltoall=True to run_moe when using external alltoall distribution.

When self.enable_alltoall=True, forward_chunk performs alltoall redistribution externally via Python (lines 570-663) before calling run_moe. However, the kernel call (lines 686-696) does not pass the enable_alltoall parameter, so it defaults to False.

The C++ kernel uses enable_alltoall to determine whether to skip filling invalid output tokens with zeros in the finalize step (moe_kernels.cu:1913-1917). When alltoall redistribution is used, the comment states: "If all-to-all comm is enabled, finalizeMoeRouting doesn't need to fill the invalid output tokens with zeros." This logic applies regardless of whether alltoall is handled internally by the kernel or externally by Python.

By not passing enable_alltoall=True, the kernel will unnecessarily attempt to zero invalid outputs even though the data was already properly redistributed externally. Pass enable_alltoall=self.enable_alltoall to the run_moe call so the kernel behavior aligns with the actual data distribution method.
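
In code, the suggested fix amounts to forwarding the module-level flag at the call site. The sketch below is illustrative only: run_moe's enable_alltoall parameter is the one added by this PR, but the surrounding signature and the _alltoall_dispatch helper are assumptions, not the actual TensorRT-LLM implementation:

```python
# Minimal sketch of the suggested change, not the real CutlassFusedMoE code.
class CutlassFusedMoE:
    def forward_chunk(self, hidden_states, router_logits):
        if self.enable_alltoall:
            # Hypothetical helper: tokens are redistributed across EP ranks
            # in Python before the kernel call, so the kernel already sees
            # properly routed data.
            hidden_states = self._alltoall_dispatch(hidden_states)

        # The fix: pass the module-level flag through so the finalize step
        # can skip zero-filling invalid output tokens when alltoall is used,
        # whether the redistribution happened in the kernel or externally.
        return self.run_moe(
            hidden_states,
            router_logits,
            enable_alltoall=self.enable_alltoall,
        )
```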

@tensorrt-cicd (Collaborator)

PR_Github #33664 [ run ] triggered by Bot. Commit: 7d8d2bb

@xxi-nv (Collaborator) left a comment

Thanks for the fix.

@syuoni (Collaborator, Author) commented Jan 27, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33686 [ run ] triggered by Bot. Commit: 2e7c0a0

@tensorrt-cicd (Collaborator)

PR_Github #33686 [ run ] completed with state SUCCESS. Commit: 2e7c0a0
/LLM/main/L0_MergeRequest_PR pipeline #25990 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@syuoni (Collaborator, Author) commented Jan 27, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33730 [ run ] triggered by Bot. Commit: 2e7c0a0

@tensorrt-cicd (Collaborator)

PR_Github #33730 [ run ] completed with state SUCCESS. Commit: 2e7c0a0
/LLM/main/L0_MergeRequest_PR pipeline #26015 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@syuoni (Collaborator, Author) commented Jan 28, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33794 [ run ] triggered by Bot. Commit: 2e7c0a0

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
@tensorrt-cicd (Collaborator)

PR_Github #33794 [ run ] completed with state SUCCESS. Commit: 2e7c0a0
/LLM/main/L0_MergeRequest_PR pipeline #26064 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
@syuoni (Collaborator, Author) commented Jan 28, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33873 [ run ] triggered by Bot. Commit: ee8fe55

@tensorrt-cicd (Collaborator)

PR_Github #33873 [ run ] completed with state SUCCESS. Commit: ee8fe55
/LLM/main/L0_MergeRequest_PR pipeline #26121 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@syuoni (Collaborator, Author) commented Jan 29, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33933 [ run ] triggered by Bot. Commit: ee8fe55

@tensorrt-cicd (Collaborator)

PR_Github #33933 [ run ] completed with state SUCCESS. Commit: ee8fe55
/LLM/main/L0_MergeRequest_PR pipeline #26173 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@syuoni merged commit 34a730a into NVIDIA:main on Jan 29, 2026. 5 checks passed.
@syuoni deleted the fix-config-moe branch on January 29, 2026 04:11.