[https://nvbugs/5756028][fix] Fix VSWA initialization with spec-dec and boundary condition in context input preparation#10798
Conversation
/bot run --disable-fail-fast
📝 Walkthrough
A single-line change in the resource manager shifts KV-head count sourcing from the model configuration to instance-level per-layer data in the VSWA window-size adjustment, and removes two previously waived test cases from the skip list.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/pyexecutor/resource_manager.py`:
- Around lines 1011-1015: The method `calculate_cache_size_per_token` is defined as a `@staticmethod` but references instance state (`self.num_kv_heads_per_layer`), causing a `NameError`. Remove the `@staticmethod` decorator and add `self` as the first parameter (`def calculate_cache_size_per_token(self, layers: Set[int]) -> int:`) so the method can access `self.num_kv_heads_per_layer` and other instance attributes (`kv_factor`/`model_config`, if they live on `self`), then use `self.num_kv_heads_per_layer` in the body to compute `total_kv_heads`.
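A minimal sketch of the suggested fix, assuming the attribute names from the note above (`num_kv_heads_per_layer`, `kv_factor`); the class is a hypothetical stand-in for illustration, not the actual resource manager:

```python
from typing import Dict, Set


class KVCacheManagerSketch:
    """Illustrative stand-in for the resource manager; names are assumptions."""

    def __init__(self, num_kv_heads_per_layer: Dict[int, int], kv_factor: int = 2):
        # Per-layer KV-head counts, sourced from the instance rather than
        # the model config (the sourcing change this PR makes for VSWA).
        self.num_kv_heads_per_layer = num_kv_heads_per_layer
        self.kv_factor = kv_factor  # 2: separate K and V caches per head

    # No @staticmethod: the method reads instance state, so it takes `self`.
    def calculate_cache_size_per_token(self, layers: Set[int]) -> int:
        total_kv_heads = sum(self.num_kv_heads_per_layer[layer] for layer in layers)
        return total_kv_heads * self.kv_factor
```

As a `@staticmethod`, the `self.num_kv_heads_per_layer` reference inside the body would raise a `NameError` at call time, which is what the suggested fix removes.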
PR_Github #32544 [ run ] triggered by Bot. Commit:
b9ae79e to 9f6fac7 (force-push)
/bot run --disable-fail-fast
PR_Github #32547 [ run ] triggered by Bot. Commit:
PR_Github #32547 [ run ] completed with state
/bot run --disable-fail-fast
/bot run --disable-fail-fast
PR_Github #34724 [ run ] triggered by Bot. Commit:
Signed-off-by: eopXD <yuehtingc@nvidia.com>
/bot run --disable-fail-fast
PR_Github #34764 [ run ] triggered by Bot. Commit:
PR_Github #34793 [ run ] triggered by Bot. Commit:
PR_Github #34793 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #34859 [ run ] triggered by Bot. Commit:
/bot run --disable-fail-fast
PR_Github #34860 [ run ] triggered by Bot. Commit:
…pec-dec layers and use a Python function instead of going into C++. The calculation can be done in Python; it is also more convenient this way to access the configurations and the results derived from them, instead of trying to pack and pass them into the C++ function. Up-leveled some logs from debug to info, as they are frequently used for first-sight triage. Signed-off-by: eopXD <yuehtingc@nvidia.com>
We can't get any context tokens from the specified `num_contexts` if `num_ctx_tokens` is zero. This commit guards the operation with that condition. Signed-off-by: eopXD <yuehtingc@nvidia.com>
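A sketch of the boundary condition this commit describes; the function and variable names below are illustrative, not the actual context-input-preparation code:

```python
from typing import List, Tuple


def context_token_ranges(num_contexts: int, num_ctx_tokens: int) -> List[Tuple[int, int]]:
    """Split num_ctx_tokens across num_contexts requests as (start, end) ranges."""
    # Guard: with zero context tokens there is nothing to slice, so return no
    # ranges instead of letting the range calculation below malfunction.
    if num_ctx_tokens == 0 or num_contexts == 0:
        return []
    base, rem = divmod(num_ctx_tokens, num_contexts)
    ranges, start = [], 0
    for i in range(num_contexts):
        length = base + (1 if i < rem else 0)
        ranges.append((start, start + length))
        start += length
    return ranges
```

Without the early return, a zero token count would still produce `num_contexts` degenerate ranges downstream; the guard makes the empty case explicit.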
…`adjust_window_sizes_for_vswa`. Also fix the test-case setting to be clearer. Signed-off-by: eopXD <yuehtingc@nvidia.com>
PR_Github #34860 [ run ] completed with state
/bot run --disable-fail-fast --reuse-test
PR_Github #34916 [ run ] triggered by Bot. Commit:
PR_Github #34916 [ run ] completed with state
…nd boundary condition in context input preparation (NVIDIA#10798) Signed-off-by: eopXD <yuehtingc@nvidia.com> Signed-off-by: Ahmet Inci <ainci@nvidia.com>
Summary by CodeRabbit
Description
This MR solves multiple problems exposed by the bug of running Gemma3 under a VSWA + spec-dec configuration. `num_ctx_tokens` could be 0, causing the range calculation to malfunction; this MR also fixes that.
Test Coverage
The test case mentioned under the bug is not fully resolved yet. There are still VSWA-related faults in RMSNorm when incorporating spec-dec. I will pass the torch to @ziyixiong-nv to fix the problem there and re-enable the failing test case, so no test case is unwaived in this merge request.
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...` provides a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
run
`/bot run [--reuse-test (optional)pipeline-id] [--disable-fail-fast] [--skip-test] [--stage-list "A10-PyTorch-1, xxx"] [--gpu-type "A30, H100_PCIe"] [--test-backend "pytorch, cpp"] [--add-multi-gpu-test] [--only-multi-gpu-test] [--disable-multi-gpu-test] [--post-merge] [--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"] [--detailed-log] [--debug (experimental)]`
Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail-fast on build/test/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build, package, and sanity-check stages. Note: does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force-run the multi-GPU tests in addition to the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill
`/bot kill`
Kill all running builds associated with the pull request.
skip
`/bot skip --comment COMMENT`
Skip testing for the latest commit on the pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`/bot reuse-pipeline`
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.