
[None][chore] update model list #11364

Merged
tcherckez-nvidia merged 1 commit into NVIDIA:main from tcherckez-nvidia:model-list-update-feb8
Feb 9, 2026

Conversation


@tcherckez-nvidia tcherckez-nvidia commented Feb 8, 2026

Summary by CodeRabbit

Release Notes

  • Chores
    • Updated model configurations in the registry with parameter adjustments, including depth modifications and configuration file references.
    • Disabled several models from the active registry pending necessary compatibility improvements and configuration refinements.
    • Added new model configuration file to support enhanced dtype consistency and improved compatibility across model variants.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
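As an illustration, a developer might post a PR comment such as the following (the GPU types shown are taken from the examples above; combine flags as needed):

```
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
```

Per the defaults described above, this invocation would still reuse build artifacts and successful test results from the last pipeline unless --disable-reuse-test is also given.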

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
@tcherckez-nvidia tcherckez-nvidia requested review from a team as code owners February 8, 2026 14:30
@tcherckez-nvidia tcherckez-nvidia changed the title update model list [None][chore] update model list Feb 8, 2026
@tcherckez-nvidia

fix #10980

@tcherckez-nvidia

/bot skip --comment "AD model list update"

@coderabbitai

coderabbitai bot commented Feb 8, 2026

📝 Walkthrough

These changes modify model configuration files in the auto_deploy/model_registry directory. A numeric parameter is adjusted in one existing config, a new dtype configuration file is added for Qwen3-VL, and the main models registry is significantly restructured with multiple model entries disabled, specific models renamed, and configuration sets adjusted or expanded.

Changes

  • Configuration value adjustment — examples/auto_deploy/model_registry/configs/num_hidden_layers_5.yaml: reduced the num_hidden_layers parameter from 10 to 5, decreasing model depth for testing purposes.
  • New dtype configuration — examples/auto_deploy/model_registry/configs/qwen3_vl.yaml: added a new configuration file that enforces torch_dtype: bfloat16 in the model kwargs, addressing dtype consistency issues.
  • Model registry restructuring — examples/auto_deploy/model_registry/models.yaml: disabled or removed multiple model entries (Nemotron, DeepSeek variants, and others); renamed the Mistral-Large-Instruct version with an updated shard configuration; expanded yaml_extra configurations for Qwen3-VL, GPT-oss-120b, and other models to include multimodal and custom parameter files.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Description check ⚠️ Warning — The PR description is incomplete: the Description and Test Coverage sections are left empty with only placeholder comments, and only the checklist checkbox is marked, with no explanation of the changes or the testing strategy. Resolution: complete the Description section explaining what model changes were made and why, and add a Test Coverage section detailing how these configuration changes were validated.

✅ Passed checks (2 passed)
  • Title check ✅ Passed — The title clearly and specifically describes the main change: updating the model list. It is concise and directly related to the file modifications in models.yaml and the associated configuration changes.
  • Docstring Coverage ✅ Passed — No functions were found in the changed files, so the docstring coverage check was skipped.



No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
examples/auto_deploy/model_registry/configs/num_hidden_layers_5.yaml (1)

1-4: Comment is now stale — this config is also used by openai/gpt-oss-120b.

Lines 1-2 mention only DeepSeek V3 and R1, but models.yaml now also references this file for openai/gpt-oss-120b (line 219). Consider generalizing the comment, e.g., "Configuration to reduce layers for large models that are too large for full testing."

Suggested comment update
-# Configuration for DeepSeek V3 and R1 with reduced layers
-# Full models are too large, so we test with limited layers
+# Configuration to reduce hidden layers for large models
+# Full models are too large, so we test with limited layers
examples/auto_deploy/model_registry/models.yaml (1)

119-121: Clarify the abbreviations in the disable comment.

The comment "NVFP4 quantization not supported for pre BLW - CW has only Hopper" uses abbreviations (BLW, CW) that may not be obvious to all contributors. Consider expanding them for better discoverability, e.g., "pre-Blackwell" and the CI/test cluster name.



@tensorrt-cicd

PR_Github #35236 [ skip ] triggered by Bot. Commit: e21f7f4

@tensorrt-cicd

PR_Github #35236 [ skip ] completed with state SUCCESS. Commit: e21f7f4
Skipping testing for commit e21f7f4

@tcherckez-nvidia tcherckez-nvidia merged commit ea81a03 into NVIDIA:main Feb 9, 2026
8 of 9 checks passed
inciaf pushed a commit to inciaf/trtllm-energy-monitoring that referenced this pull request Feb 18, 2026
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
Signed-off-by: Ahmet Inci <ainci@nvidia.com>