[TRTLLM-10305][feat] Support customized seq len larger than model config#10600

Merged
Wanli-Jiang merged 1 commit into NVIDIA:main from Wanli-Jiang:user/williamj/extend-user-seq-len
Jan 16, 2026
Conversation

@Wanli-Jiang
Collaborator

@Wanli-Jiang Wanli-Jiang commented Jan 12, 2026

Goal:

  • Support a user-customized seq_len for specific models (e.g., Nemotron v3 Nano)

How:

  • Introduce an environment variable, TLLM_ALLOW_LONG_MAX_MODEL_LEN; when it is set, max_seq_len and max_num_tokens are forcibly set to the user-specified values (see the sketch below).
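
A minimal sketch of how such an override could be wired inside _init_max_seq_len() (the structure and names other than the environment variable are assumptions for illustration, not the exact implementation):

import os

from tensorrt_llm.logger import logger

def _init_max_seq_len(self, inferred_max_seq_len: int) -> None:
    # Sketch of a method on the model engine; self.max_seq_len holds the
    # user-configured value, inferred_max_seq_len comes from the model config.
    allow_long = os.environ.get("TLLM_ALLOW_LONG_MAX_MODEL_LEN", "0") == "1"
    if self.max_seq_len is None:
        self.max_seq_len = inferred_max_seq_len
    elif allow_long and self.max_seq_len > inferred_max_seq_len:
        # Honor the user's larger value, but warn loudly since exceeding the
        # model's trained context length can degrade output quality.
        logger.warning(
            f"TLLM_ALLOW_LONG_MAX_MODEL_LEN is set; keeping user-specified "
            f"max_seq_len {self.max_seq_len} although the model config "
            f"implies {inferred_max_seq_len}.")
    else:
        self.max_seq_len = min(self.max_seq_len, inferred_max_seq_len)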

Examples:

TLLM_ALLOW_LONG_MAX_MODEL_LEN=1 trtllm-serve <model> --backend pytorch \
--trust_remote_code \
--max_seq_len 131072 --max_num_tokens 131072

Summary by CodeRabbit

  • New Features
    • Added environment variable support to override the maximum sequence length configuration. Users can now set TLLM_ALLOW_LONG_MAX_MODEL_LEN to raise the max sequence length limit; a warning is logged when the override takes effect.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action also kills all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

@Wanli-Jiang Wanli-Jiang requested a review from a team as a code owner January 12, 2026 08:38
@Wanli-Jiang Wanli-Jiang requested a review from achartier January 12, 2026 08:38
@coderabbitai
Contributor

coderabbitai bot commented Jan 12, 2026

📝 Walkthrough

Adds an environment-based override mechanism for max_seq_len in the model engine initialization via the TLLM_ALLOW_LONG_MAX_MODEL_LEN flag. When set, the code logs a warning and allows using a user-specified larger value, while preserving existing behavior for other scenarios.

Changes

Cohort / File(s): Environment-based max_seq_len override
tensorrt_llm/_torch/pyexecutor/model_engine.py
Summary: Added conditional logic in _init_max_seq_len() to check the TLLM_ALLOW_LONG_MAX_MODEL_LEN environment variable; logs a warning when the flag is set to permit a user-specified larger max_seq_len value; preserves existing behavior for standard inference scenarios.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check ⚠️ Warning: The PR description includes the goal, implementation approach, and usage example, but the Description and Test Coverage sections lack detail and the PR checklist is incomplete. Resolution: fill in the Description section explaining the issue and solution, provide specific test coverage details in the Test Coverage section, and ensure all PR checklist items are properly addressed and marked.
✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title clearly and specifically describes the main change, adding support for customized sequence lengths larger than the model configuration, which aligns with the code modifications introducing an environment variable override.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

1158-1178: Warning message may be misleading when the user's max_seq_len does not exceed the inferred limit.

The elif user_config_max_seq_len branch (line 1174) triggers whenever the environment variable is set, regardless of whether self.max_seq_len > inferred_max_seq_len. This means the warning claiming "User specified max_seq_len is larger than the config" may be logged even when the user's value is smaller than or equal to the inferred limit.

Consider combining the conditions to ensure the warning is accurate:

♻️ Suggested fix
-        elif user_config_max_seq_len:
+        elif user_config_max_seq_len and inferred_max_seq_len < self.max_seq_len:
             logger.warning(
                 f"User specified max_seq_len is larger than the config in the model config file "
                 f"({inferred_max_seq_len}). Setting max_seq_len to user's specified value {self.max_seq_len}. "
             )

Additionally, the variable name user_config_max_seq_len is a boolean but reads like a length value. A name like allow_long_max_seq_len or override_max_seq_len_limit would better convey its purpose.
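
For reference, a sketch of how the combined condition could read in context (the surrounding branches are assumptions for illustration, not the file's exact code):

if self.max_seq_len is None:
    self.max_seq_len = inferred_max_seq_len
elif user_config_max_seq_len and inferred_max_seq_len < self.max_seq_len:
    # Warn only when the user's value actually exceeds the inferred limit.
    logger.warning(
        f"User specified max_seq_len is larger than the config in the model config file "
        f"({inferred_max_seq_len}). Setting max_seq_len to user's specified value {self.max_seq_len}. "
    )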

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3bd319d and b702c87.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., some_file.py)
Python classes should use PascalCase (e.g., class SomeClass)
Python functions and methods should use snake_case (e.g., def my_awesome_function():)
Python local variables should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL)
Python constants should use upper snake_case (e.g., MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format """<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧠 Learnings (5)
📓 Common learnings
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-26T06:07:02.166Z
Learnt from: shaharmor98
Repo: NVIDIA/TensorRT-LLM PR: 7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.166Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-12-12T03:27:08.565Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:08.565Z
Learning: In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists. This improves performance by reducing tensor-bound operations inside hot loops. Apply this pattern to similar code paths that process batches to access simple Python data structures (lists) inside loops.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
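
A minimal, self-contained illustration of the pattern described in this learning (the request records and field name are hypothetical):

import torch

requests = [{"id": i} for i in range(3)]  # hypothetical request records
token_counts = torch.tensor([3, 7, 5])    # batched tensor from sampling

# Convert the batched tensor to a Python list once, before the loop,
# so the per-request hot loop touches plain Python ints instead of tensors.
token_counts_list = token_counts.tolist()
for request, count in zip(requests, token_counts_list):
    request["num_new_tokens"] = count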
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
tensorrt_llm/_torch/attention_backend/trtllm.py (2)
  • max_seq_len (688-698)
  • max_seq_len (701-705)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/extend-user-seq-len branch 2 times, most recently from ca79ec4 to f283a32 on January 13, 2026 04:52
@Wanli-Jiang
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31690 [ run ] triggered by Bot. Commit: f283a32

@tensorrt-cicd
Collaborator

PR_Github #31690 [ run ] completed with state SUCCESS. Commit: f283a32
/LLM/main/L0_MergeRequest_PR pipeline #24516 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@Wanli-Jiang
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31846 [ run ] triggered by Bot. Commit: f283a32

@tensorrt-cicd
Collaborator

PR_Github #31846 [ run ] completed with state SUCCESS. Commit: f283a32
/LLM/main/L0_MergeRequest_PR pipeline #24659 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@Wanli-Jiang
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31888 [ run ] triggered by Bot. Commit: f283a32

@tensorrt-cicd
Collaborator

PR_Github #31888 [ run ] completed with state SUCCESS. Commit: f283a32
/LLM/main/L0_MergeRequest_PR pipeline #24692 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/extend-user-seq-len branch from f283a32 to fbc6392 on January 14, 2026 13:07
@Wanli-Jiang
Collaborator Author

/bot run --only-multi-gpu-test --disable-fail-fast


@tensorrt-cicd
Collaborator

PR_Github #32044 [ run ] triggered by Bot. Commit: fbc6392

@tensorrt-cicd
Collaborator

PR_Github #32044 [ run ] completed with state SUCCESS. Commit: fbc6392
/LLM/main/L0_MergeRequest_PR pipeline #24834 (Partly Tested) completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/extend-user-seq-len branch from fbc6392 to 8585914 on January 15, 2026 09:28
@Wanli-Jiang
Collaborator Author

/bot run --only-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32118 [ run ] triggered by Bot. Commit: 8585914

@tensorrt-cicd
Collaborator

PR_Github #32118 [ run ] completed with state SUCCESS. Commit: 8585914
/LLM/main/L0_MergeRequest_PR pipeline #24897 (Partly Tested) completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@Wanli-Jiang
Collaborator Author

/bot skip --comment "single gpu passed, multi gpu passed with different runs"

@Wanli-Jiang Wanli-Jiang enabled auto-merge (squash) January 16, 2026 05:18
@Wanli-Jiang
Collaborator Author

/bot skip --comment "single gpu passed, multi gpu passed with different runs"

@tensorrt-cicd
Collaborator

PR_Github #32260 [ skip ] triggered by Bot. Commit: 8585914

@tensorrt-cicd
Collaborator

PR_Github #32260 [ skip ] completed with state SUCCESS. Commit: 8585914
Skipping testing for commit 8585914

@Wanli-Jiang Wanli-Jiang merged commit 722978b into NVIDIA:main Jan 16, 2026
5 checks passed
zheyuf pushed a commit to zheyuf/TensorRT-LLM that referenced this pull request Jan 29, 2026
[TRTLLM-10305][feat] Support customized seq len larger than model config (NVIDIA#10600)

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
