[None][fix] Reduce host memory usage during model loading #11119
jthomson04 merged 3 commits into NVIDIA:main from
Conversation
/bot run --disable-fail-fast
PR_Github #34104 [ run ] triggered by Bot. Commit:
PR_Github #34104 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #34111 [ run ] triggered by Bot. Commit:
📝 Walkthrough
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 1 passed, 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/models/checkpoints/hf/weight_loader.py (1)
29-30: Return type annotation inconsistent with actual return value.

The `load_weights` method returns the result of `_load_weights_in_parallel`, which now returns `ConsumableWeightsDict`. The type annotation should be updated to match.

🔧 Suggested fix

```diff
 def load_weights(self, checkpoint_dir: str,
-                 mapping: Mapping) -> dict[str, Any]:
+                 mapping: Mapping) -> ConsumableWeightsDict:
```
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/models/modeling_hunyuan_moe.py`:
- Line 344: Remove the temporary debug print that logs model_config ("---debug
model_config: ") from tensorrt_llm/_torch/models/modeling_hunyuan_moe.py; locate
the print statement (the one producing "---debug model_config: " at or near the
model initialization code) and delete it so no debug output is emitted in
production logs.
In `@tensorrt_llm/_torch/models/modeling_utils.py`:
- Around line 1047-1048: The call to weights.mark_consumed(name) can raise
AttributeError if the weights object lacks that method; guard the call by
checking for the attribute or callable before invoking it (e.g., use
hasattr(weights, "mark_consumed") or callable(getattr(weights, "mark_consumed",
None)) and only call weights.mark_consumed(name) when present), or alternatively
wrap the call in a try/except AttributeError to no-op on missing method; update
the code path that invokes weights.mark_consumed(name) accordingly.
- Around line 1011-1015: The code calls weights.mark_consumed(...) but the
parameter is typed as Dict so passing a plain dict will raise AttributeError;
change the type annotation from Dict to the proper ConsumableWeightsDict (or a
Protocol exposing mark_consumed) and add a runtime guard before calling
mark_consumed: if not hasattr(weights, "mark_consumed"): wrap the marking logic
to either no-op or convert the dict to a ConsumableWeightsDict; also avoid using
the private attribute weight_mapper._mapping by exposing and calling a public
accessor (e.g., weight_mapper.get_mapping(module_name) or weight_mapper.mapping)
and iterate that result (referenced symbols: weights.mark_consumed, weights type
annotation, ConsumableWeightsDict, weight_mapper._mapping, module_name).
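Both notes boil down to the same guard; a minimal sketch of what it could look like (the `SupportsMarkConsumed` protocol and `mark_consumed_safe` helper are hypothetical names for illustration, not code from this PR):

```python
from typing import Any, Dict, Protocol, runtime_checkable


@runtime_checkable
class SupportsMarkConsumed(Protocol):
    """Structural type for weights containers that can release entries."""

    def mark_consumed(self, name: str) -> None:
        ...


def mark_consumed_safe(weights: Dict[str, Any], name: str) -> None:
    # Plain dicts passed by older call sites have no mark_consumed(),
    # so only invoke it when the method is present and callable.
    mark = getattr(weights, "mark_consumed", None)
    if callable(mark):
        mark(name)
```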
🧹 Nitpick comments (3)
tensorrt_llm/_torch/models/checkpoints/base_weight_loader.py (1)
44-45: Return type annotation is imprecise.

The `items()` method returns `self._weights.items()`, which is a `dict_items` view (an `ItemsView`), not an `Iterator`. While this works at runtime due to duck typing, the type hint is technically incorrect.

🔧 Suggested fix

```diff
-    def items(self) -> Iterator[Tuple[str, Any]]:
+    def items(self):
         return self._weights.items()
```

Or for full precision:

```python
from typing import ItemsView

def items(self) -> ItemsView[str, Any]:
    return self._weights.items()
```

tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
from typing import ItemsView def items(self) -> ItemsView[str, Any]: return self._weights.items()tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
146-148: Mutable default argument is a known Python pitfall.Using
[]as a default argument is a Python anti-pattern because the list is created once at function definition time and shared across all calls. While this specific usage appears safe (the list is only read, not mutated), it's better practice to useNoneand initialize inside the function.🔧 Suggested fix
def load_weights(self, weights: ConsumableWeightsDict, - skip_modules: List[str] = []): + skip_modules: Optional[List[str]] = None): + if skip_modules is None: + skip_modules = []tensorrt_llm/_torch/models/modeling_glm.py (1)
120-122: Consider using list unpacking for cleaner syntax.

Per Ruff's suggestion, using `[*names[:-1], src_name]` is more idiomatic Python than list concatenation.

♻️ Suggested refactor

```diff
 # Mark consumed source weights (e.g., q_proj, k_proj, v_proj)
 for src_name in params_map[names[-1]]:
-    weights.mark_consumed(".".join(names[:-1] + [src_name]))
+    weights.mark_consumed(".".join([*names[:-1], src_name]))
```
9d73d84 to a12d953 (Compare)
/bot kill
PR_Github #34124 [ kill ] triggered by Bot. Commit:
PR_Github #34124 [ kill ] completed with state
/bot run --disable-fail-fast
PR_Github #34126 [ run ] triggered by Bot. Commit:
PR_Github #34126 [ run ] completed with state
yechank-nvidia left a comment
Thanks for the work!
One question: does this method only support the deepseekv3, glm & hunyuan models, or can it be applied to other models as well?
Hey @yechank-nvidia. That method is applicable to all models. I'll apply a similar fix to
d2f492a to 515e276 (Compare)
/bot run --disable-fail-fast
PR_Github #34524 [ run ] triggered by Bot. Commit:
PR_Github #34524 [ run ] completed with state
515e276 to 96965cc (Compare)
/bot run --disable-fail-fast
96965cc to c50e5c6 (Compare)
/bot run --disable-fail-fast
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
c50e5c6 to 93fef30 (Compare)
/bot run --disable-fail-fast
PR_Github #34838 [ run ] triggered by Bot. Commit:
PR_Github #34838 [ run ] completed with state
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: jthomson04 <jwillthomson19@gmail.com> Signed-off-by: Ahmet Inci <ainci@nvidia.com>
Summary by CodeRabbit
Release Notes
TRTLLM doesn't delete tensors in host memory after they've been copied to the GPU. This leads to massive host memory usage relative to the size of the checkpoint, causing OOMs when loading large checkpoints.
Some testing on DSR1 FP4 (~350GB checkpoint):
Main branch: peak at ~700GB (memory usage plot)
This branch: peak at ~120GB (memory usage plot)
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`
Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
run
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]`
Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
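As a usage sketch, combining flags documented above (this particular combination is illustrative):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
```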
For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.
kill
`kill`: Kill all running builds associated with pull request.
skip
`skip --comment COMMENT`: Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
`reuse-pipeline`: Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.