[#10245][feat] AutoDeploy: Add Minimax M2 support#10525
bmarimuthu-nv merged 7 commits into NVIDIA:main from
Conversation
Resolved review thread (outdated): tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py
@coderabbitai summary

✅ Actions performed: Summary regeneration triggered.
📝 Walkthrough

The changes introduce support for the MiniMax-M2 model in AutoDeploy through a torch-export-compatible MoE implementation, distributed RMSNorm sharding for improved performance across multiple devices, and comprehensive unit and functional tests. Includes minor improvements to flashinfer attention and diagnostic error messages.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)

✅ Passed checks (4 passed)
/bot run |
Force-pushed a1a1d79 to 41af3fe
/bot run |
Resolved review thread (outdated): tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py
Force-pushed 8e80cd3 to d20057f
Resolved review thread: tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_minimax_m2_patches.py
Force-pushed d26af90 to 75f84a3
/bot run |
2 similar comments

/bot run

/bot run
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/auto_deploy/models/patches/minimax_m2.py`:
- Around line 1-7: Add the required NVIDIA copyright header (with the year of
latest meaningful modification) at the very top of this file, before the
existing module docstring that explains the MiniMax-M2 MoE patch; ensure the
header follows the project's standard header format and remains a top-of-file
comment so the docstring and the patched AutoModelForCausalLM logic (referenced
in the module text) remain unchanged.
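As a hedged illustration only (the exact template and year must follow the project's standard header format), an NVIDIA SPDX-style header of the kind this comment requests typically looks like:

```python
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
```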
🧹 Nitpick comments (6)
tensorrt_llm/_torch/auto_deploy/utils/node_utils.py (1)
954-958: Improved diagnostic message for assertion failures. The expanded assertion message now includes the terminating linear node name and the list of opening linear node names, which will help with debugging when this assertion fails.

Consider also including the name of the linear node being checked (`linear_nodes[start_lin_index].name`) in the message for completeness, since that's the actual node the assertion is validating.

💡 Optional: Include checked node's name

```diff
 assert linear_nodes[start_lin_index] in opening_linear_nodes, (
-    f"Linear node not found in opening linear nodes - "
+    f"Linear node {linear_nodes[start_lin_index].name} not found in opening linear nodes - "
     f"terminating_linear_node:{terminating_linear_node.name}, "
     f"opening_linear_nodes: {[n.name for n in opening_linear_nodes]}"
 )
```
tests/unittest/_torch/auto_deploy/unit/multigpu/transformations/library/test_rmsnorm_sharding.py (1)

163-219: LGTM! The test correctly validates that per-head norm RMSNorm ops are not transformed to sharded variants, which is the expected behavior for GLM-style per-head normalization.

Consider removing or converting the
tests/unittest/_torch/auto_deploy/unit/multigpu/custom_ops/test_sharded_rmsnorm.py (1)

93-99: Consider using the project's `all_gather` wrapper for consistency. Line 99 uses `dist.all_gather` directly, while the codebase provides a wrapper at `tensorrt_llm/_torch/auto_deploy/distributed/common.py:all_gather` that handles the default process group. This is likely fine since `initialize()` is called, but using the wrapper would be more consistent with other code in the project.
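A minimal sketch of the suggested swap, assuming the wrapper mirrors `torch.distributed.all_gather`'s `(tensor_list, tensor)` signature (check `distributed/common.py` for the actual API); `world_size` and `local_out` are hypothetical stand-ins for the test's own variables, and an initialized process group is assumed:

```python
import torch

from tensorrt_llm._torch.auto_deploy.distributed.common import all_gather

# Assumed test-local values; the real test derives these from its setup.
world_size = 2
local_out = torch.randn(4, 8)

gathered = [torch.empty_like(local_out) for _ in range(world_size)]
# Wrapper call in place of dist.all_gather(gathered, local_out); the
# wrapper is assumed to default to the already-initialized process group.
all_gather(gathered, local_out)
```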
tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_minimax_m2_patches.py (1)

63-65: Consider using more specific exception handling. The broad `Exception` catch could mask unexpected errors. Consider catching more specific exceptions like `ValueError`, `RuntimeError`, or `OSError` that are typical for model loading failures.

Suggested improvement

```diff
-    except Exception as e:
+    except (ValueError, RuntimeError, OSError, KeyError) as e:
         print(f"Error extracting layer: {e}")
         return None
```
tensorrt_llm/_torch/auto_deploy/transform/library/sharding.py (2)

1963-1978: Edge case: slice dimension validation could be more robust. The logic at lines 1971-1978 handles the case where slice dimensions don't sum to the weight dimension. However, if `fused_weight_dims[-1] > weight_dim`, the adjustment on line 1973 could result in a negative value when `sum(fused_weight_dims[:-1]) > weight_dim`.

Consider adding validation for this edge case:

Suggested validation

```diff
 if sum(fused_weight_dims) != weight_dim:
     if fused_weight_dims[-1] > weight_dim:
-        fused_weight_dims[-1] = weight_dim - sum(fused_weight_dims[:-1])
+        adjusted = weight_dim - sum(fused_weight_dims[:-1])
+        if adjusted <= 0:
+            ad_logger.warning(
+                f"Invalid slice dimensions: adjusted last dim would be {adjusted}. Skipping."
+            )
+            return
+        fused_weight_dims[-1] = adjusted
```
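A small numeric illustration of why the guard is needed (values are hypothetical, not taken from the PR):

```python
weight_dim = 1024
fused_weight_dims = [1280, 2048]

# The adjustment branch is taken because the last dim exceeds weight_dim...
assert fused_weight_dims[-1] > weight_dim
# ...but the earlier dims already exceed weight_dim too, so the adjusted
# last dim goes negative -- an invalid slice width without the guard.
adjusted = weight_dim - sum(fused_weight_dims[:-1])
print(adjusted)  # -256
```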
2055-2059: Minor: Use `next(iter(...))` for single element access. Static analysis suggests using `next(iter(weight_node.users))` instead of `list(weight_node.users)[0]` for cleaner single element access; it also avoids materializing an intermediate list.

Suggested change

```diff
-    user_node = list(weight_node.users)[0]
+    user_node = next(iter(weight_node.users))
```
/bot run

2 similar comments

/bot run

/bot run
PR_Github #33319 [ run ] triggered by Bot. Commit:

PR_Github #33319 [ run ] completed with state

/bot run

PR_Github #33623 [ run ] triggered by Bot. Commit:

PR_Github #33623 [ run ] completed with state
Force-pushed 0656677 to 07a8174
/bot run |
PR_Github #33793 [ run ] triggered by Bot. Commit:

PR_Github #33793 [ run ] completed with state
Signed-off-by: Balamurugan Marimuthu <246387390+bmarimuthu-nv@users.noreply.github.com> (identical sign-off on all 7 commits)
Force-pushed 07a8174 to fa6cf7e
/bot run |
PR_Github #33893 [ run ] triggered by Bot. Commit:
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" |
PR_Github #33911 [ run ] triggered by Bot. Commit:

PR_Github #33911 [ run ] completed with state
Summary by CodeRabbit

New Features
- MiniMax-M2 model support in AutoDeploy via a torch-export-compatible MoE implementation.
- Distributed RMSNorm sharding for improved multi-device performance.

Tests
- Unit and functional tests covering the new model support and sharding.
Description
Fixes #10245
AD_DUMP_GRAPHS_DIR=<dir to dump graphs>
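A minimal sketch of using this variable (assuming it simply names an output directory for the dumped graphs; the path below is illustrative):

```python
import os

# Hypothetical path; AutoDeploy is assumed to write intermediate graphs here.
os.environ["AD_DUMP_GRAPHS_DIR"] = "/tmp/ad_graphs"
```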
Test Coverage

PR Checklist
Please review the following before submitting your PR:
- PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- Test cases are provided for new code paths (see test instructions).
- Any new dependencies have been scanned for license and vulnerabilities.
- CODEOWNERS updated if ownership changes.
- Documentation updated as needed.
- Update tava architecture diagram if there is a significant design change in PR.
- The reviewers assigned automatically/manually are appropriate for the PR.

Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help

`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.

Details

run

`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
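For example, combining documented options: `/bot run --disable-fail-fast --stage-list "A10-PyTorch-1, xxx"` launches only the listed stages without fail-fast (the stage names here are the help text's own examples, not stages specific to this PR).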
For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`: Kill all running builds associated with pull request.
skip
`skip --comment COMMENT`: Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`: Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.