
[#11455][fix] Fallback to triton_ssm for nvfp4 quantization #11456

Merged

galagam merged 1 commit into NVIDIA:main from nv-auto-deploy:gagam/war_for_fi_ssm_nvfp4 on Feb 13, 2026

Conversation

@galagam
Collaborator

@galagam galagam commented Feb 11, 2026

Description

The nvfp4 quantization mode and the flashinfer_ssm mamba backend are incompatible.
The nanov3 and superv3 example configs were recently updated to use flashinfer_ssm, which broke nvfp4 support.
This is a temporary workaround (WAR); bug #11455 is filed to track the proper fix. For now, fall back to triton_ssm.

Test Coverage

Test coverage will be added in a follow-up PR (#11458) that refactors the accuracy tests to load the example config instead of a hardcoded config.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break top of tree.
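For reference, the /bot invocations issued later in this conversation all follow the syntax above:

```
/bot run
/bot run --reuse-test
/bot run --stage-list "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast
```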

Summary by CodeRabbit

Release Notes

  • Bug Fixes
    • Enhanced compatibility between NVFP4 quantization and SSM backends, with an automatic fallback mechanism to ensure consistent operation.

@galagam galagam requested a review from a team as a code owner February 11, 2026 17:36
@galagam galagam self-assigned this Feb 11, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 11, 2026

📝 Walkthrough

The changes add a new _apply method to the SSMCacheTransform class that detects when NVFP4 quantization is active with the flashinfer_ssm backend, logs a warning, and switches the backend to triton_ssm before delegating to the base class implementation.

Changes

Cohort: SSM Cache Transform Backend Compatibility
File(s): tensorrt_llm/_torch/auto_deploy/transform/library/ssm_cache.py
Summary: Added an _apply method that detects the NVFP4 quantization + flashinfer_ssm backend combination, logs a compatibility warning, and switches to the triton_ssm backend before delegating to the base class implementation.
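Read together with the review comments below, the change presumably resembles the following sketch. The method signature, base class name, and import path are assumptions inferred from this page, not verbatim repository code:

```python
# Sketch only: signature, base class, and import path are inferred from
# context on this PR page, not copied from the repository.
from tensorrt_llm._torch.auto_deploy.utils.logger import ad_logger  # path assumed


class SSMCacheTransform(BaseTransform):  # base class name assumed
    def _apply(self, gm, cm, factory, shared_config):
        qcfg = factory.get_quant_config() or {}
        is_nvfp4 = str(qcfg.get("quant_algo") or "").upper() == "NVFP4"
        if is_nvfp4 and self.config.backend == "flashinfer_ssm":
            ad_logger.warning(
                "flashinfer_ssm is incompatible with NVFP4 quantization; "
                "falling back to triton_ssm (tracked in #11455)."
            )
            self.config.backend = "triton_ssm"
        # Delegate to the base class implementation with the adjusted backend.
        return super()._apply(gm, cm, factory, shared_config)
```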

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title clearly describes the main change: implementing a fallback to triton_ssm for nvfp4 quantization, matching the PR's primary objective.
  • Description check: ✅ Passed. The PR description clearly explains the issue (nvfp4 + flashinfer_ssm incompatibility), the solution (fall back to triton_ssm), and references the tracking bug #11455 appropriately.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/auto_deploy/transform/library/ssm_cache.py`:
- Line 1: Add the required NVIDIA copyright header with the Apache License 2.0 at the top of this module, preceding the existing module docstring ("""A set of transforms to handle SSM cache transforms."""). Use the current year, 2026, and the standard NVIDIA/Apache-2.0 boilerplate; the header must name the copyright owner (NVIDIA CORPORATION) and include the full Apache-2.0 notice so the file meets the project's "year of latest meaningful modification" requirement.
- Around lines 25-26: is_nvfp4 calls .upper() on qcfg.get("quant_algo"), which can be None. Guard the factory.get_quant_config() and is_nvfp4 logic against None by coercing the value to a string or supplying a fallback, e.g. (qcfg.get("quant_algo") or "") or str(qcfg.get("quant_algo") or "") before calling .upper(), so that a None value does not raise an AttributeError (see the sketch after this list).
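For illustration, the suggested guard reduces to two lines; this is a minimal sketch reusing the names from the comment above:

```python
qcfg = factory.get_quant_config() or {}  # tolerate factories that return None
is_nvfp4 = str(qcfg.get("quant_algo") or "").upper() == "NVFP4"  # None-safe .upper()
```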
🧹 Nitpick comments (1)
tensorrt_llm/_torch/auto_deploy/transform/library/ssm_cache.py (1)

27-33: Persistent mutation of self.config.backend — confirm this is safe.

Line 32 permanently mutates self.config.backend. If this SSMCacheTransform instance is ever reused (e.g., applied to multiple models), the fallback to triton_ssm will persist silently without the warning being logged again (since the condition on line 27 won't match). If transforms are single-use per pipeline run, this is fine.
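If instances ever do get reused, one way to keep the override transient (a sketch under the same assumed names as above, not the PR's actual code) is to restore the original backend after delegating:

```python
def _apply(self, gm, cm, factory, shared_config):
    original_backend = self.config.backend
    try:
        qcfg = factory.get_quant_config() or {}
        is_nvfp4 = str(qcfg.get("quant_algo") or "").upper() == "NVFP4"
        if is_nvfp4 and original_backend == "flashinfer_ssm":
            ad_logger.warning("flashinfer_ssm is incompatible with NVFP4; using triton_ssm (#11455).")
            self.config.backend = "triton_ssm"
        return super()._apply(gm, cm, factory, shared_config)
    finally:
        # Restore the original value so a reused instance re-evaluates the
        # condition and logs the warning again on the next application.
        self.config.backend = original_backend
```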

@galagam
Collaborator Author

galagam commented Feb 11, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35664 [ run ] triggered by Bot. Commit: 9b3ab4f

@tensorrt-cicd
Collaborator

PR_Github #35664 [ run ] completed with state SUCCESS. Commit: 9b3ab4f
/LLM/main/L0_MergeRequest_PR pipeline #27543 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@galagam
Collaborator Author

galagam commented Feb 12, 2026

/bot run --reuse-test

@tensorrt-cicd
Collaborator

PR_Github #35711 [ run ] triggered by Bot. Commit: 9b3ab4f

@tensorrt-cicd
Collaborator

PR_Github #35711 [ run ] completed with state SUCCESS. Commit: 9b3ab4f
/LLM/main/L0_MergeRequest_PR pipeline #27577 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

nvfp4 + flashinfer_ssm mamba backend is incompatible.
Will be enabled as part of SuperV3 nvfp4 enablement.
For now, falls back to triton_ssm

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
@galagam galagam force-pushed the gagam/war_for_fi_ssm_nvfp4 branch from 9b3ab4f to 19390a0 on February 12, 2026 06:23
@galagam
Collaborator Author

galagam commented Feb 12, 2026

/bot run

@galagam
Collaborator Author

galagam commented Feb 12, 2026

/bot run --stage-list "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35737 [ run ] triggered by Bot. Commit: 19390a0

@tensorrt-cicd
Collaborator

PR_Github #35739 [ run ] triggered by Bot. Commit: 19390a0

@tensorrt-cicd
Collaborator

PR_Github #35739 [ run ] completed with state SUCCESS. Commit: 19390a0
/LLM/main/L0_MergeRequest_PR pipeline #27606 (Partly Tested) completed with status: 'SUCCESS'

@galagam
Collaborator Author

galagam commented Feb 12, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35775 [ run ] triggered by Bot. Commit: 19390a0

@tensorrt-cicd
Collaborator

PR_Github #35775 [ run ] completed with state SUCCESS. Commit: 19390a0
/LLM/main/L0_MergeRequest_PR pipeline #27631 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@galagam galagam requested a review from nvchenghaoz February 12, 2026 19:13
@galagam galagam merged commit d0e7ba1 into NVIDIA:main Feb 13, 2026
5 checks passed
ekou24 pushed a commit to ekou24/TensorRT-LLM that referenced this pull request Feb 16, 2026
…IDIA#11456)

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>