[None][infra] AutoDeploy: Dump graph IR after every transform #11045

Merged
bmarimuthu-nv merged 3 commits into NVIDIA:main from nv-auto-deploy:bala/ad-dump-debug-ir
Feb 9, 2026

Conversation

@bmarimuthu-nv (Collaborator) commented Jan 27, 2026

Summary by CodeRabbit

  • New Features
    • Added comprehensive graph dumping and introspection capabilities with shape and data type metadata to enhance model visualization during transformations
    • Enabled automated graph exports at each transform stage for improved debugging and analysis
    • Enhanced node naming to encode module hierarchy information for clearer graph representation and troubleshooting

Description

Textual IRs make debugging easier. They also help the Cursor agent analyze the graph faster.

This PR:

  • renames ops in the graph to include their module hierarchy in the name, for example: model_layers_0_self_attn_kv_b_proj_torch_linear_simple_3
  • optionally dumps the IR after every transform.
    • the IR is SSA-style with the following structure:
      %value_name = op_name(<%arg1 : shape : dtype>, <%arg2 : shape : dtype>, ... ) : (output_shape1, output_shape2) : (output_dtype1, output_dtype2)
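
The line format above can be sketched with a tiny formatter. This is an illustrative reconstruction only: `format_ssa_line` and `shape_str` are hypothetical names, not the helpers the PR actually adds.

```python
def shape_str(shape):
    """Render a shape tuple like (1, 32, 1) as '1x32x1'; empty means unknown."""
    return "x".join(str(d) for d in shape) if shape else "?"

def format_ssa_line(name, op, args, out_shape, out_dtype):
    """Build one line of the SSA-style IR described above:
    %value = op(<%arg : shape : dtype>, ...) : output_shape : output_dtype
    Tensor args are passed as (name, shape, dtype) tuples; scalars are printed verbatim.
    """
    rendered = []
    for a in args:
        if isinstance(a, tuple):
            arg_name, a_shape, a_dtype = a
            rendered.append(f"%{arg_name} : {shape_str(a_shape)} : {a_dtype}")
        else:
            rendered.append(str(a))
    return f"%{name} = {op}({', '.join(rendered)}) : {shape_str(out_shape)} : {out_dtype}"

# Reproduces one line from the example dump shown later in this description:
print(format_ssa_line(
    "model_rotary_emb_unsqueeze",
    "aten.unsqueeze.default",
    [("model_rotary_emb_inv_freq", (32,), "torch.float32"), 0],
    (1, 32),
    "torch.float32",
))
```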
      

Usage:

AD_DUMP_GRAPHS_DIR=<path to folder> python examples/auto_deploy/build_and_run_ad.py --yaml-extra <yaml>
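
The hierarchy-based renaming from the first bullet can be sketched as follows. This is a hypothetical illustration: `hierarchical_name` is not the PR's actual helper (the real pass derives the module path from `nn_module_stack` metadata), but it shows how a sanitized module path plus a de-duplicating suffix yields names like the example above.

```python
import re
from collections import Counter

def hierarchical_name(module_path, op_name, seen):
    """Prefix op_name with a sanitized module path; de-duplicate with a suffix.
    e.g. 'model.layers.0.self_attn.kv_b_proj' + 'torch_linear_simple'
    -> 'model_layers_0_self_attn_kv_b_proj_torch_linear_simple' (then _1, _2, ...)
    """
    prefix = re.sub(r"[^0-9A-Za-z]+", "_", module_path).strip("_")
    base = f"{prefix}_{op_name}" if prefix else op_name
    seen[base] += 1
    return base if seen[base] == 1 else f"{base}_{seen[base] - 1}"

seen = Counter()
print(hierarchical_name("model.layers.0.self_attn.kv_b_proj", "torch_linear_simple", seen))
```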

Test Coverage

Tested locally

Example dumped folder:

bmarimuthu@cw-dfw-cs-001-vscode-01:TensorRT-LLM$ ls -l ad_graphs/
total 6775
-rw-rw-r-- 1 bmarimuthu dip 256053 Jan 27 15:38 001_export_export_to_gm.txt
-rw-rw-r-- 1 bmarimuthu dip 255500 Jan 27 15:38 002_post_export_cleanup_noop_slice.txt
-rw-rw-r-- 1 bmarimuthu dip 255498 Jan 27 15:38 003_post_export_cleanup_noop_add.txt
-rw-rw-r-- 1 bmarimuthu dip 255507 Jan 27 15:38 004_post_export_cleanup_input_constraints.txt
-rw-rw-r-- 1 bmarimuthu dip 255503 Jan 27 15:38 005_pattern_matcher_match_moe_pattern.txt
-rw-rw-r-- 1 bmarimuthu dip 255509 Jan 27 15:38 006_pattern_matcher_match_dense_moe_pattern.txt
-rw-rw-r-- 1 bmarimuthu dip 255507 Jan 27 15:38 007_pattern_matcher_match_bmm_moe_pattern.txt
-rw-rw-r-- 1 bmarimuthu dip 255501 Jan 27 15:38 008_pattern_matcher_match_repeat_kv.txt
-rw-rw-r-- 1 bmarimuthu dip 255507 Jan 27 15:38 009_pattern_matcher_match_eager_attention.txt
...
...

Example output in IR:

# Transform: export_to_gm
# Stage: export

%input_ids : s44xs70 : torch.int32
%position_ids : s44xs70 : torch.int64
%_empty_nn_module_stack_from_metadata_hook_sym_size_int_238 = aten.sym_size.int(%input_ids : s44xs70 : torch.int32, 1) : ? : SymInt
%_empty_nn_module_stack_from_metadata_hook_sym_size_int_239 = aten.sym_size.int(%position_ids : s44xs70 : torch.int64, 0) : ? : SymInt
%model_embed_tokens_embedding = aten.embedding.default(%model_embed_tokens_weight : 154880x2048 : torch.bfloat16, %input_ids : s44xs70 : torch.int32, 154820) : s44xs70x2048 : torch.bfloat16
%model_rotary_emb_unsqueeze = aten.unsqueeze.default(%model_rotary_emb_inv_freq : 32 : torch.float32, 0) : 1x32 : torch.float32
%model_rotary_emb_unsqueeze_1 = aten.unsqueeze.default(%model_rotary_emb_unsqueeze : 1x32 : torch.float32, 2) : 1x32x1 : torch.float32
%model_rotary_emb_to = aten.to.dtype(%model_rotary_emb_unsqueeze_1 : 1x32x1 : torch.float32, torch.float32) : 1x32x1 : torch.float32
%model_rotary_emb_expand = aten.expand.default(%model_rotary_emb_to : 1x32x1 : torch.float32, [_empty_nn_module_stack_from_metadata_hook_sym_size_int_239, -1, 1]) : s44x32x1 : torch.float32
%model_rotary_emb_to_1 = aten.to.dtype_layout(%model_rotary_emb_expand : s44x32x1 : torch.float32) : s44x32x1 : torch.float32
%model_rotary_emb_slice_1 = aten.slice.Tensor(%position_ids : s44xs70 : torch.int64, 0, 0, 9223372036854775807) : s44xs70 : torch.int64
%model_rotary_emb_unsqueeze_2 = aten.unsqueeze.default(%model_rotary_emb_slice_1 : s44xs70 : torch.int64, 1) : s44x1xs70 : torch.int64
%model_rotary_emb_slice_2 = aten.slice.Tensor(%model_rotary_emb_unsqueeze_2 : s44x1xs70 : torch.int64, 2, 0, 9223372036854775807) : s44x1xs70 : torch.int64
%model_rotary_emb_to_2 = aten.to.dtype(%model_rotary_emb_slice_2 : s44x1xs70 : torch.int64, torch.float32) : s44x1xs70 : torch.float32
%model_rotary_emb_to_3 = aten.to.dtype(%model_rotary_emb_to_1 : s44x32x1 : torch.float32, torch.float32) : s44x32x1 : torch.float32
%model_rotary_emb_to_4 = aten.to.dtype(%model_rotary_emb_to_2 : s44x1xs70 : torch.float32, torch.float32) : s44x1xs70 : torch.float32
%model_rotary_emb_matmul = aten.matmul.default(%model_rotary_emb_to_3 : s44x32x1 : torch.float32, %model_rotary_emb_to_4 : s44x1xs70 : torch.float32) : s44x32xs70 : torch.float32
%model_rotary_emb_transpose = aten.transpose.int(%model_rotary_emb_matmul : s44x32xs70 : torch.float32, 1, 2) : s44xs70x32 : torch.float32
%model_rotary_emb_cat = aten.cat.default([model_rotary_emb_transpose, model_rotary_emb_transpose], -1) : s44xs70x64 : torch.float32
%model_rotary_emb_cos = aten.cos.default(%model_rotary_emb_cat : s44xs70x64 : torch.float32) : s44xs70x64 : torch.float32
%model_rotary_emb_mul = aten.mul.Tensor(%model_rotary_emb_cos : s44xs70x64 : torch.float32, 1.0) : s44xs70x64 : torch.float32
%model_rotary_emb_sin = aten.sin.default(%model_rotary_emb_cat : s44xs70x64 : torch.float32) : s44xs70x64 : torch.float32
%model_rotary_emb_mul_1 = aten.mul.Tensor(%model_rotary_emb_sin : s44xs70x64 : torch.float32, 1.0) : s44xs70x64 : torch.float32
%model_rotary_emb_to_5 = aten.to.dtype(%model_rotary_emb_mul : s44xs70x64 : torch.float32, torch.bfloat16) : s44xs70x64 : torch.bfloat16
%model_rotary_emb_to_6 = aten.to.dtype(%model_rotary_emb_mul_1 : s44xs70x64 : torch.float32, torch.bfloat16) : s44xs70x64 : torch.bfloat16
%model_layers_0_input_layernorm_to_7 = aten.to.dtype(%model_embed_tokens_embedding : s44xs70x2048 : torch.bfloat16, torch.float32) : s44xs70x2048 : torch.float32
%model_layers_0_input_layernorm_pow_1 = aten.pow.Tensor_Scalar(%model_layers_0_input_layernorm_to_7 : s44xs70x2048 : torch.float32, 2) : s44xs70x2048 : torch.float32
%model_layers_0_input_layernorm_mean = aten.mean.dim(%model_layers_0_input_layernorm_pow_1 : s44xs70x2048 : torch.float32, [-1], True) : s44xs70x1 : torch.float32

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@bmarimuthu-nv (Collaborator, Author)

/bot run

@bmarimuthu-nv bmarimuthu-nv changed the title [None][infra][AutoDeploy] Dump graph IR after every transform [None][infra]AutoDeploy: Dump graph IR after every transform Jan 27, 2026
@bmarimuthu-nv bmarimuthu-nv changed the title [None][infra]AutoDeploy: Dump graph IR after every transform [None] [infra]AutoDeploy: Dump graph IR after every transform Jan 27, 2026
@bmarimuthu-nv bmarimuthu-nv changed the title [None] [infra]AutoDeploy: Dump graph IR after every transform [None][infra] AutoDeploy: Dump graph IR after every transform Jan 27, 2026
@tensorrt-cicd (Collaborator)

PR_Github #33784 [ run ] triggered by Bot. Commit: e93fae1

@bmarimuthu-nv bmarimuthu-nv force-pushed the bala/ad-dump-debug-ir branch 2 times, most recently from e4123c0 to 850e40b on January 27, 2026 23:41
@bmarimuthu-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #33786 [ run ] triggered by Bot. Commit: 850e40b

@tensorrt-cicd (Collaborator)

PR_Github #33786 [ run ] completed with state SUCCESS. Commit: 850e40b
/LLM/main/L0_MergeRequest_PR pipeline #26056 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@bmarimuthu-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #33828 [ run ] triggered by Bot. Commit: 850e40b

@tensorrt-cicd (Collaborator)

PR_Github #33828 [ run ] completed with state SUCCESS. Commit: 850e40b
/LLM/main/L0_MergeRequest_PR pipeline #26087 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@taylor-yb-lee (Collaborator)

Looks good to me!

@bmarimuthu-nv bmarimuthu-nv force-pushed the bala/ad-dump-debug-ir branch 3 times, most recently from d59add0 to d572fb6 on January 28, 2026 17:43
@bmarimuthu-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #33905 [ run ] triggered by Bot. Commit: d572fb6

@tensorrt-cicd (Collaborator)

PR_Github #33905 [ run ] completed with state SUCCESS. Commit: d572fb6
/LLM/main/L0_MergeRequest_PR pipeline #26148 completed with status: 'SUCCESS'

@galagam (Collaborator) left a comment


Looks great, tried it out today on my debug :)

@lucaslie (Member) left a comment


feel free to merge it after addressing my comments

@bmarimuthu-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #34540 [ run ] triggered by Bot. Commit: f82e817

@tensorrt-cicd (Collaborator)

PR_Github #34540 [ run ] completed with state SUCCESS. Commit: f82e817
/LLM/main/L0_MergeRequest_PR pipeline #26651 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@bmarimuthu-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #34598 [ run ] triggered by Bot. Commit: f82e817

@tensorrt-cicd (Collaborator)

PR_Github #34598 [ run ] completed with state FAILURE. Commit: f82e817
/LLM/main/L0_MergeRequest_PR pipeline #26697 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@bmarimuthu-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #34668 [ run ] triggered by Bot. Commit: 24ac5e3

@tensorrt-cicd (Collaborator)

PR_Github #34668 [ run ] completed with state FAILURE. Commit: 24ac5e3
/LLM/main/L0_MergeRequest_PR pipeline #26751 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@bmarimuthu-nv (Collaborator, Author)

/bot run

Signed-off-by: Balamurugan Marimuthu <246387390+bmarimuthu-nv@users.noreply.github.com>
Signed-off-by: Balamurugan Marimuthu <246387390+bmarimuthu-nv@users.noreply.github.com>
Signed-off-by: Balamurugan Marimuthu <246387390+bmarimuthu-nv@users.noreply.github.com>
@bmarimuthu-nv (Collaborator, Author)

/bot run

@bmarimuthu-nv bmarimuthu-nv marked this pull request as ready for review February 6, 2026 18:44
@bmarimuthu-nv bmarimuthu-nv requested a review from a team as a code owner February 6, 2026 18:44
@bmarimuthu-nv bmarimuthu-nv enabled auto-merge (squash) February 6, 2026 18:44
@coderabbitai bot (Contributor) commented Feb 6, 2026

📝 Walkthrough

Walkthrough

Adds graph debugging infrastructure to the TensorRT LLM auto-deploy pipeline. Node renaming in the export flow encodes module hierarchy into graph node names, while a new GraphWriter utility provides SSA-style graph serialization with shape and dtype metadata. Transform pipeline hooks dump resulting graphs automatically when enabled via environment variable.

Changes

  • Graph Export Enhancement (tensorrt_llm/_torch/auto_deploy/export/export.py): Added _rename_nodes_with_module_hierarchy() helper to rename call_function nodes with module-path prefixes derived from nn_module_stack. Integrated renaming into both _capture_fn and torch_export_to_gm paths post-cleanup. Added re import.
  • Transform Pipeline Integration (tensorrt_llm/_torch/auto_deploy/transform/interface.py): Imported graph_writer and added automatic graph dumping via graph_writer.dump_graph() after transformation completion, enabling per-stage debugging output.
  • Graph Serialization Utility (tensorrt_llm/_torch/auto_deploy/utils/graph_writer.py): New module implementing SSA-style graph dumping with helper functions for dtype/shape extraction. The GraphWriter singleton manages timestamped dump files, recursively serializes all GraphModule instances with per-node metadata, guards non-main processes, and respects the AD_DUMP_GRAPHS_DIR environment variable.
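
As a rough sketch of how such an env-gated dump hook might work (DumpWriter, its file-numbering scheme, and the header format are assumptions inferred from the example dump in the description, not the PR's actual GraphWriter code):

```python
import os
from pathlib import Path

class DumpWriter:
    """Write numbered per-transform dump files, gated by AD_DUMP_GRAPHS_DIR."""

    def __init__(self, env_var="AD_DUMP_GRAPHS_DIR"):
        self._dir = os.environ.get(env_var)  # dumping is disabled when unset
        self._count = 0

    def dump(self, ir_text, transform_name, stage):
        """Write one dump file like '001_export_export_to_gm.txt'; no-op if disabled."""
        if not self._dir:
            return None
        self._count += 1
        path = Path(self._dir) / f"{self._count:03d}_{stage}_{transform_name}.txt"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(f"# Transform: {transform_name}\n# Stage: {stage}\n\n{ir_text}")
        return path
```

A pipeline would call the writer after each transform; with AD_DUMP_GRAPHS_DIR=ad_graphs, the first export-stage dump would land in ad_graphs/001_export_export_to_gm.txt.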

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check ✅ Passed: The title clearly summarizes the main change: adding graph IR dumping functionality after every transform in AutoDeploy, matching the primary objective.
  • Description check ✅ Passed: The description covers the what (renaming ops, dumping IR), why (easier debugging, helps cursor agent), usage (environment variable), and testing (tested locally with examples). Follows the template structure appropriately.
  • Docstring Coverage ✅ Passed: Docstring coverage is 81.82%, which is sufficient. The required threshold is 80.00%.


@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/auto_deploy/utils/graph_writer.py`:
- Around line 1-11: Add the required NVIDIA copyright header (with the current
year) to the top of the new module graph_writer.py: insert the standard
multi-line copyright header comment before the imports in
tensorrt_llm/_torch/auto_deploy/utils/graph_writer.py so all source files
include the NVIDIA copyright and year of latest modification, preserving
existing imports and encoding.
🧹 Nitpick comments (6)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)

491-492: Consider wrapping dump_graph in a try-except to prevent debug infrastructure from breaking the pipeline.

If AD_DUMP_GRAPHS_DIR is set and a file-system error occurs (e.g., permission denied, disk full), this will propagate an unhandled exception and abort the transform pipeline. A debug/diagnostic feature should not crash the main flow.

Proposed safeguard
         # Dump graph after transform for debugging (controlled by AD_DUMP_GRAPHS_DIR env var)
-        graph_writer.dump_graph(mod, t_name, self.config.stage.value)
+        try:
+            graph_writer.dump_graph(mod, t_name, self.config.stage.value)
+        except Exception as e:
+            ad_logger.warning(f"Failed to dump graph for {t_name}: {e}")
tensorrt_llm/_torch/auto_deploy/export/export.py (1)

218-225: Op-name extraction fallback could produce noisy names.

The str(target).split(".")[-1] fallback (Line 225) for targets without __name__ or _name may yield unexpected fragments for complex target representations (e.g., OpOverloadPacket objects). This is a minor concern since most call_function targets will have __name__, but worth a note.

tensorrt_llm/_torch/auto_deploy/utils/graph_writer.py (4)

38-91: Missing docstring for dump_ssa_with_meta, and node.kwargs are not emitted.

  1. dump_ssa_with_meta is a public function (no leading underscore) but lacks a docstring. Per guidelines, prefer docstrings for interfaces that may be used outside a file.

  2. The SSA dump iterates only over node.args (Line 50) but ignores node.kwargs, which can carry important information like dtype, device, memory_format, etc. This reduces the debugging value of the IR output.

  3. get_attr nodes are silently skipped, so parameter/buffer references will appear as undefined names in the IR. A brief comment or a placeholder line would improve readability of dumped files.

Suggested kwargs handling (illustrative)
             for arg in node.args:
                 if hasattr(arg, "name"):
                     # Look up the arg node's metadata for shape/dtype
                     if hasattr(arg, "meta") and "val" in arg.meta:
                         arg_shape_dtype = _get_shape_dtype_str(arg.meta["val"])
                         input_vars.append(f"%{arg.name} : {arg_shape_dtype}")
                     else:
                         input_vars.append(f"%{arg.name} : ? : unknown")
                 else:
                     input_vars.append(str(arg))
+
+            # Include keyword arguments
+            for key, val in node.kwargs.items():
+                if hasattr(val, "name"):
+                    input_vars.append(f"{key}=%{val.name}")
+                else:
+                    input_vars.append(f"{key}={val}")

112-119: shutil.rmtree on existing directory is destructive — consider adding a safety check.

If a user accidentally sets AD_DUMP_GRAPHS_DIR to an important path, Line 116 will recursively delete its entire contents. Consider either:

  • Only deleting files matching the expected *.txt pattern instead of rmtree.
  • Adding a sentinel file (e.g., .ad_graph_dump) on first creation, and only clearing the directory if the sentinel exists.
Sentinel-based approach
         if not self._dump_dir_initialized:
             dump_dir_path = Path(self._dump_dir)
+            sentinel = dump_dir_path / ".ad_graph_dump"
             if dump_dir_path.exists():
-                shutil.rmtree(dump_dir_path)
-            dump_dir_path.mkdir(parents=True, exist_ok=True)
+                if sentinel.exists():
+                    shutil.rmtree(dump_dir_path)
+                else:
+                    self._logger.warning(
+                        f"Directory {self._dump_dir} exists but was not created by "
+                        "GraphWriter. Skipping cleanup."
+                    )
+            dump_dir_path.mkdir(parents=True, exist_ok=True)
+            sentinel.touch()

22-28: _get_shape_str may raise on non-concrete symbolic dims.

Line 26: int(d) is called when str(d).isdigit() is true, which is safe for regular integers. However, if a symbolic dimension's string representation happens to be numeric (e.g., a guarded SymInt), calling int(d) would be redundant — str(d) already gives the digit string. You can simplify this to just str(d) for all cases:

-        dims = [str(int(d)) if str(d).isdigit() else str(d) for d in val.shape]
+        dims = [str(d) for d in val.shape]

94-146: Singleton instantiated at module import time reads env var eagerly.

graph_writer = GraphWriter() on Line 146 reads AD_DUMP_GRAPHS_DIR once during import. If the env var is set after the module is imported (e.g., in test fixtures), the writer won't pick it up. This is acceptable for CLI usage but worth documenting.

Also, the __init__ signature accepts no parameters, so there's no way to override the dump directory programmatically (e.g., in tests). Consider exposing a configure(dump_dir: str) method or reading the env var lazily in dump_graph.

@tensorrt-cicd (Collaborator)

PR_Github #35143 [ run ] triggered by Bot. Commit: 16c3580

@tensorrt-cicd (Collaborator)

PR_Github #35143 [ run ] completed with state SUCCESS. Commit: 16c3580
/LLM/main/L0_MergeRequest_PR pipeline #27134 completed with status: 'SUCCESS'

@bmarimuthu-nv bmarimuthu-nv merged commit 4a74333 into NVIDIA:main Feb 9, 2026
7 checks passed
inciaf pushed a commit to inciaf/trtllm-energy-monitoring that referenced this pull request Feb 18, 2026
…#11045)

Signed-off-by: Balamurugan Marimuthu <246387390+bmarimuthu-nv@users.noreply.github.com>
Signed-off-by: Ahmet Inci <ainci@nvidia.com>