[None][fix] Reduce host memory usage during model loading #11119

Merged
jthomson04 merged 3 commits into NVIDIA:main from jthomson04:jthomson04/model-loading-mem-usage
Feb 5, 2026

Conversation

@jthomson04 (Collaborator) commented Jan 29, 2026

Summary by CodeRabbit

Release Notes

  • Improvements
    • Enhanced memory management during model weight loading across all supported architectures, enabling more efficient memory usage patterns during initialization.
    • Strengthened weight loading validation and tracking mechanisms to prevent redundant loading operations and improve initialization reliability across multiple model types.


TRT-LLM doesn't delete tensors from host memory after they've been copied to the GPU. As a result, peak host memory usage is massive relative to the size of the checkpoint, causing OOMs when loading large checkpoints.
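The fix tracks which source weights have been consumed and frees them from the host-side dict as loading proceeds. A minimal sketch of the underlying idea (illustrative only, not the actual TRT-LLM code):

    import torch

    def stream_weights_to_gpu(host_weights: dict):
        # Copy each tensor to the GPU, then drop the host copy so peak host
        # memory tracks the in-flight tensors rather than the whole checkpoint.
        for name in list(host_weights.keys()):
            gpu_tensor = host_weights.pop(name).to("cuda")
            yield name, gpu_tensor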

Some testing on DSR1 FP4 (~350GB checkpoint):

  • Main branch: peak at ~700GB ("baseline" plot)
  • This branch: peak at ~120GB ("fixed" plot)

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
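
For example, a typical invocation combining the flags documented above:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast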

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #34104 [ run ] triggered by Bot. Commit: dea32db

@tensorrt-cicd (Collaborator)

PR_Github #34104 [ run ] completed with state SUCCESS. Commit: dea32db
/LLM/main/L0_MergeRequest_PR pipeline #26316 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@jthomson04 jthomson04 marked this pull request as ready for review January 29, 2026 19:40
@jthomson04 jthomson04 requested review from a team as code owners January 29, 2026 19:40
@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #34111 [ run ] triggered by Bot. Commit: 9d73d84

@coderabbitai (bot, Contributor) commented Jan 29, 2026

📝 Walkthrough

A new ConsumableWeightsDict wrapper class is introduced to manage weight dictionary memory during loading. The class and its mark_consumed() method are propagated across model weight-loader implementations to track which source weights have been consumed, enabling in-place deletion to free memory during the load process.
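
For orientation, a minimal sketch of what such a wrapper might look like (illustrative only; the actual class in base_weight_loader.py may differ, and treating the mark_consumed argument as a name prefix is an assumption):

from typing import Any, Dict

class ConsumableWeightsDict:
    """Wraps a weights dict; entries are freed once marked consumed (sketch)."""

    def __init__(self, weights: Dict[str, Any]):
        self._weights = weights

    def __getitem__(self, key: str) -> Any:
        return self._weights[key]

    def __contains__(self, key: str) -> bool:
        return key in self._weights

    def keys(self):
        return self._weights.keys()

    def items(self):
        return self._weights.items()

    def mark_consumed(self, prefix: str) -> None:
        # Delete every weight whose name starts with the prefix so the
        # host tensors become garbage-collectable during loading.
        for name in [k for k in self._weights if k.startswith(prefix)]:
            del self._weights[name]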

Changes

Cohort / File(s) / Summary:

  • Base Infrastructure (tensorrt_llm/_torch/models/checkpoints/base_weight_loader.py): Introduces the ConsumableWeightsDict class, which wraps a weights dictionary with mapping methods and mark_consumed(prefix) for freeing memory. Updates the BaseWeightLoader.load_weights() return type to Union[Dict[str, Any], ConsumableWeightsDict] and updates docstrings accordingly.
  • Weight Loader Implementations (tensorrt_llm/_torch/models/checkpoints/hf/weight_loader.py): Adds an import of ConsumableWeightsDict and updates the _load_weights_in_parallel() return type from dict[str, Any] to ConsumableWeightsDict. The return statement now wraps weights in ConsumableWeightsDict(weights).
  • Model Weight Loaders (tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/models/modeling_glm.py, tensorrt_llm/_torch/models/modeling_hunyuan_dense.py, tensorrt_llm/_torch/models/modeling_hunyuan_moe.py): Updates load_weights() signatures in weight-loader and model classes to accept ConsumableWeightsDict instead of Dict. Adds mark_consumed() calls after loading specific weight groups (q_proj/k_proj/v_proj, experts, module weights) to track consumed source weights and enable memory cleanup (see the usage sketch after this list).
  • Weight Loading Utilities (tensorrt_llm/_torch/models/modeling_utils.py): Extends _load_weights_impl_v2() with mark_consumed() calls in both the special-handling and general branches to track consumption of mapped weights at the module level and for per-source mappings.
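
As an illustration of the loader-side pattern described for the model weight loaders above, a hypothetical fused-QKV load (the module layout and the load_fused helper are invented for this sketch, not actual TRT-LLM code):

def load_attention_weights(module, prefix: str, weights) -> None:
    # Fuse the three source projections into the module's combined QKV
    # weight, then free the host copies that were consumed.
    srcs = ["q_proj", "k_proj", "v_proj"]
    fused = [weights[f"{prefix}.{s}.weight"] for s in srcs]
    module.qkv_proj.load_fused(fused)  # hypothetical helper
    for s in srcs:
        weights.mark_consumed(f"{prefix}.{s}")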

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
  • Description check ⚠️ Warning: The PR description lacks the required template sections (Description, Test Coverage details) and does not explain the solution, which are mandatory per the template. Resolution: complete the 'Description' section explaining the problem and solution, and fill in the 'Test Coverage' section documenting the tests safeguarding these changes.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title clearly and concisely describes the main change, reducing host memory usage during model loading, which is the primary objective of the PR.


@coderabbitai (bot, Contributor) left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/models/checkpoints/hf/weight_loader.py (1)

29-30: Return type annotation inconsistent with actual return value.

The load_weights method returns the result of _load_weights_in_parallel, which now returns ConsumableWeightsDict. The type annotation should be updated to match.

🔧 Suggested fix
     def load_weights(self, checkpoint_dir: str,
-                     mapping: Mapping) -> dict[str, Any]:
+                     mapping: Mapping) -> ConsumableWeightsDict:
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/models/modeling_hunyuan_moe.py`:
- Line 344: Remove the temporary debug print that logs model_config (the statement producing "---debug model_config: ", at or near the model initialization code) from tensorrt_llm/_torch/models/modeling_hunyuan_moe.py, so no debug output is emitted in production logs.

In `@tensorrt_llm/_torch/models/modeling_utils.py`:
- Around lines 1047-1048: The call to weights.mark_consumed(name) can raise AttributeError if the weights object lacks that method. Guard the call by checking for the method before invoking it (e.g., hasattr(weights, "mark_consumed") or callable(getattr(weights, "mark_consumed", None))), or wrap the call in a try/except AttributeError so a missing method becomes a no-op.
- Around lines 1011-1015: The code calls weights.mark_consumed(...), but the parameter is typed as Dict, so passing a plain dict will raise AttributeError. Change the annotation from Dict to ConsumableWeightsDict (or a Protocol exposing mark_consumed) and add a runtime guard before calling mark_consumed: if the method is absent, either no-op or convert the dict to a ConsumableWeightsDict. Also avoid using the private attribute weight_mapper._mapping; expose and call a public accessor (e.g., weight_mapper.get_mapping(module_name)) and iterate over that result.
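
A minimal shape for the suggested guard (a sketch of the review note above, not the merged code):

mark = getattr(weights, "mark_consumed", None)
if callable(mark):
    mark(name)  # no-op when weights is a plain dict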
🧹 Nitpick comments (3)
tensorrt_llm/_torch/models/checkpoints/base_weight_loader.py (1)

44-45: Return type annotation is imprecise.

The items() method returns self._weights.items() which is a dict_items view (an ItemsView), not an Iterator. While this works at runtime due to duck typing, the type hint is technically incorrect.

🔧 Suggested fix
-    def items(self) -> Iterator[Tuple[str, Any]]:
+    def items(self):
         return self._weights.items()

Or for full precision:

from typing import ItemsView
def items(self) -> ItemsView[str, Any]:
    return self._weights.items()
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)

146-148: Mutable default argument is a known Python pitfall.

Using [] as a default argument is a Python anti-pattern because the list is created once at function definition time and shared across all calls. While this specific usage appears safe (the list is only read, not mutated), it's better practice to use None and initialize inside the function.

🔧 Suggested fix
     def load_weights(self,
                      weights: ConsumableWeightsDict,
-                     skip_modules: List[str] = []):
+                     skip_modules: Optional[List[str]] = None):
+        if skip_modules is None:
+            skip_modules = []
tensorrt_llm/_torch/models/modeling_glm.py (1)

120-122: Consider using list unpacking for cleaner syntax.

Per Ruff's suggestion, using [*names[:-1], src_name] is more idiomatic Python than list concatenation.

♻️ Suggested refactor
                     # Mark consumed source weights (e.g., q_proj, k_proj, v_proj)
                     for src_name in params_map[names[-1]]:
-                        weights.mark_consumed(".".join(names[:-1] + [src_name]))
+                        weights.mark_consumed(".".join([*names[:-1], src_name]))

@jthomson04 jthomson04 force-pushed the jthomson04/model-loading-mem-usage branch from 9d73d84 to a12d953 on January 29, 2026 21:30
@jthomson04 (Collaborator Author)

/bot kill

@tensorrt-cicd (Collaborator)

PR_Github #34124 [ kill ] triggered by Bot. Commit: a88d4a8

@tensorrt-cicd (Collaborator)

PR_Github #34124 [ kill ] completed with state SUCCESS. Commit: a88d4a8
Successfully killed previous jobs for commit a88d4a8

@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #34126 [ run ] triggered by Bot. Commit: a88d4a8

@tensorrt-cicd (Collaborator)

PR_Github #34126 [ run ] completed with state SUCCESS. Commit: a88d4a8
/LLM/main/L0_MergeRequest_PR pipeline #26332 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yechank-nvidia (Collaborator) left a comment

Thanks for the work!
One question: does this method only support the deepseekv3, glm & hunyuan models, or can it be applied to other models?

@jthomson04 (Collaborator Author)

> Thanks for the work! One question: does this method only support the deepseekv3, glm & hunyuan models, or can it be applied to other models?

Hey @yechank-nvidia. That method is applicable to all models. I'll apply a similar fix to load_weights_impl as well.

@jthomson04 jthomson04 force-pushed the jthomson04/model-loading-mem-usage branch from d2f492a to 515e276 on February 3, 2026 00:41
@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #34524 [ run ] triggered by Bot. Commit: 515e276

@tensorrt-cicd (Collaborator)

PR_Github #34524 [ run ] completed with state SUCCESS. Commit: 515e276
/LLM/main/L0_MergeRequest_PR pipeline #26640 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@jthomson04 jthomson04 force-pushed the jthomson04/model-loading-mem-usage branch from 515e276 to 96965cc on February 3, 2026 23:44
@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

@jthomson04 jthomson04 force-pushed the jthomson04/model-loading-mem-usage branch from 96965cc to c50e5c6 on February 4, 2026 00:27
@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
@jthomson04 jthomson04 force-pushed the jthomson04/model-loading-mem-usage branch from c50e5c6 to 93fef30 on February 4, 2026 19:17
@jthomson04 (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #34838 [ run ] triggered by Bot. Commit: 93fef30

@tensorrt-cicd (Collaborator)

PR_Github #34838 [ run ] completed with state SUCCESS. Commit: 93fef30
/LLM/main/L0_MergeRequest_PR pipeline #26876 completed with status: 'SUCCESS'

@yechank-nvidia (Collaborator) left a comment

LGTM

@jthomson04 jthomson04 merged commit d778b26 into NVIDIA:main Feb 5, 2026
5 checks passed
SchumiDing pushed a commit to SchumiDing/TensorRT-LLM that referenced this pull request Feb 6, 2026
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
@coderabbitai coderabbitai bot mentioned this pull request Feb 6, 2026
1 task
inciaf pushed a commit to inciaf/trtllm-energy-monitoring that referenced this pull request Feb 18, 2026
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: Ahmet Inci <ainci@nvidia.com>