[None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce #9314
Conversation
📝 Walkthrough

This pull request refactors NCCL-related infrastructure across TensorRT-LLM, introducing centralized NCCL utilities for dynamic library loading, resource management, and window buffer pooling. The changes migrate all-reduce strategies from NCCL to NCCL_SYMMETRIC as the default symmetric variant, remove legacy NCCL allocators, and add PyTorch integration for NCCL window tensors with comprehensive test coverage.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Alloc as NCCLWindowAllocator
    participant Mgr as NcclCommResourceManager
    participant NCCL as NCCL Library
    participant CUDA as CUDA
    App->>Alloc: requestBuffer(comm, size)
    Alloc->>Alloc: searchBuffer() - reuse if available
    alt Buffer found
        Alloc->>App: return NCCLWindowBuffer
    else No buffer
        Alloc->>CUDA: allocate device memory
        CUDA-->>Alloc: device ptr
        Alloc->>NCCL: ncclCommWindowRegister(ptr, size)
        NCCL-->>Alloc: ncclWindow_t
        Alloc->>Mgr: registerResource(comm, cleanup_callback)
        Mgr->>Mgr: store cleanup in per-comm list
        Alloc->>App: return NCCLWindowBuffer
    end
    App->>App: use buffer for allreduce
    App->>Alloc: releaseBuffer(comm, ptr)
    Alloc->>Alloc: mark buffer inUse=false
    Note over Mgr,NCCL: On comm destruction
    Mgr->>Mgr: cleanupResources(comm)
    loop for each registered cleanup
        Mgr->>Alloc: invoke cleanup callbacks
        Alloc->>Alloc: cleanupBuffersForComm(comm)
        Alloc->>NCCL: deregister windows
        Alloc->>CUDA: free device memory
    end
```
```mermaid
sequenceDiagram
    participant Python as Python API
    participant Torch as PyTorch
    participant WindowTensor as ncclWindowTensor
    participant Alloc as NCCLWindowAllocator
    participant Comm as NCCL Comm
    Python->>WindowTensor: create_nccl_window_tensor(group, shape, dtype)
    WindowTensor->>Comm: getComm(group)
    Comm-->>WindowTensor: ncclComm_t
    WindowTensor->>Alloc: requestBuffer(comm, buffer_size)
    Alloc-->>WindowTensor: NCCLWindowBuffer
    WindowTensor->>Torch: create tensor from buffer ptr
    Torch-->>WindowTensor: torch::Tensor with custom_deleter
    Note over Torch: custom_deleter calls releaseBuffer on destruction
    WindowTensor->>Python: return tensor
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
@coderabbitai review
✅ Actions performed: Review triggered.
This makes NCCL_SYMMETRIC the default, with some infra changes. The one thing this does not yet include is the optimization where results are automatically written into NCCLWindowTensors, like the UB tensors.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/functional.py (1)
3981-4040: Review comment is valid; fix prevents passing a `None` tensor to the TensorRT plugin

The review comment accurately identifies a logic flow issue. When `current_all_reduce_helper().workspace is None` at line 4096, the code sets `strategy = AllReduceStrategy.NCCL_SYMMETRIC` (line 4097) but leaves `workspace = None` (line 4094). Since `NCCL_SYMMETRIC` is a distinct enum value (not `NCCL` or `UB`), the condition at line 4025 in `create_allreduce_plugin()` evaluates to true, appending the `None` workspace to `plug_inputs`. This `None` would then be passed to `network.add_plugin_v2()`, which expects `ITensor` instances.

The suggested fix—checking `if workspace is not None` before appending at line 4025–4026—is the correct approach and prevents this failure path.

cpp/tensorrt_llm/thop/allreduceOp.cpp (1)
437-537: Threshold overflow issue confirmed for ≥32-rank clusters

The review comment is accurate. Verification confirms:

- Code location and coefficients verified: Exact match at line 478 of cpp/tensorrt_llm/thop/allreduceOp.cpp with coefficients a = −4986.43478503, b = 156716.52177552
- Mathematical issue is real: The threshold becomes negative at ~31.43 ranks. For nRanks ≥ 32, the calculation yields negative values (e.g., −2849.39 at 32 ranks, −162415.30 at 64 ranks)
- No guards exist: Search for threshold validation found zero safeguards—no clamping, max(), or bounds checking
- Impact: Casting a negative double to `size_t` wraps to an extremely large positive value. This causes the comparison at line 495 to always evaluate true, silently disabling buffer registration for all buffer sizes on clusters with ≥32 ranks and severely degrading performance in a non-obvious, rank-dependent manner

The suggested fix appropriately clamps the threshold to 0.0 before casting, preserving the tuning curve where it's valid and preventing the underflow; a sketch of the clamp follows.
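A minimal sketch of that clamp, using the coefficients quoted above; the function and variable names here are hypothetical, not the ones in allreduceOp.cpp:

```cpp
#include <algorithm>
#include <cstddef>

// Linear threshold curve quoted in the comment above.
constexpr double a = -4986.43478503;
constexpr double b = 156716.52177552;

size_t registrationThresholdBytes(int nRanks)
{
    double threshold = a * static_cast<double>(nRanks) + b;
    // Clamp before casting: a negative double converted to size_t wraps to
    // a huge value, which would silently disable registration for >=32 ranks.
    return static_cast<size_t>(std::max(threshold, 0.0));
}
```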
🧹 Nitpick comments (12)
cpp/tensorrt_llm/common/customAllReduceUtils.h (1)
63-85: Unreachable NCCL_SYMMETRIC fallback in `SelectStrategyLP`

`return AllReduceStrategyType::NCCL_SYMMETRIC;` at Line 84 is currently unreachable because both branches above return. If this is meant as a real fallback for future conditions (e.g., an explicit "NCCL zone"), consider either:

- Adding an explicit `else`/guard that can actually reach this return, or
- Dropping the line (or adding a brief comment) to avoid confusion about dead code.
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
2746-2764: NCCL_SYMMETRIC UB gating is correct; consider simplifying caller logic

The changes in `_init_userbuffers` correctly ensure that:

- TP size ≤ 1 and unsupported platforms still early-exit, and
- `self.llm_args.allreduce_strategy == "NCCL_SYMMETRIC"` returns `False` before calling `ub.initialize_userbuffers_manager`, so NCCL_SYMMETRIC no longer sets up UB and can rely solely on `NCCLWindowAllocator`.

Two small cleanups to consider:

- In `__init__`, `use_ub_for_nccl = (self.llm_args.allreduce_strategy == "NCCL_SYMMETRIC" and self._init_userbuffers(...))` will now always be `False`, since `_init_userbuffers` returns `False` for NCCL_SYMMETRIC. This makes `use_ub_for_nccl` effectively dead logic and causes `_init_userbuffers` to be called twice for NCCL_SYMMETRIC when torch.compile UB is enabled. You could:
  - Skip calling `_init_userbuffers` entirely when `allreduce_strategy == "NCCL_SYMMETRIC"` in the caller, and/or
  - Rename or remove `use_ub_for_nccl` to better reflect the new semantics.
- `use_nccl_symmetric` is now hard-coded to `False` for UB initialization. If there is no remaining UB path that depends on this flag, consider dropping the parameter from `ub.initialize_userbuffers_manager` (or at least the local variable) to avoid suggesting configurable behavior that no longer exists.

These are readability/maintainability nits; behavior for NCCL_SYMMETRIC and other strategies looks correct.
tests/microbenchmarks/all_reduce.py (1)
171-183: Adding `NCCL_SYMMETRIC` to benchmark strategies looks consistent

Including `AllReduceStrategy.NCCL_SYMMETRIC` in the benchmark grid aligns with the new fallback behavior and will help compare it fairly against the existing modes. You may optionally consider skipping it when NCCL symmetric support is not available (e.g., via a helper similar to other NCCL feature checks), but it's fine to leave responsibility to the runtime if that's the established pattern.

cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp (1)
17-49: Deprecation handling for `use_nccl_symmetric` is clear and safe

Logging a warning and otherwise ignoring `use_nccl_symmetric` keeps the API backward-compatible while reflecting the new NCCL_SYMMETRIC implementation that bypasses the userbuffer allocator. Longer term, you might consider removing this parameter from the public API once callers are migrated, but the current behavior is fine.

cpp/tensorrt_llm/thop/ncclWindowTensor.h (1)
18-21: Drop unnecessary `ncclUtils.h` include from the header

`ncclWindowTensor.h` only needs Torch types for the function signature; it doesn't use any symbols from tensorrt_llm/common/ncclUtils.h. Including it here adds coupling and can trigger tooling issues (as seen in the static analysis hint).

You can rely on the `.cpp` to include the NCCL utilities instead:

```diff
-#include "tensorrt_llm/common/ncclUtils.h"
-#include <torch/extension.h>
-#include <vector>
+#include <torch/extension.h>
+#include <vector>
```

This keeps the public declaration lightweight and avoids unnecessary rebuilds when NCCL internals change.
cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp (1)
336-341: Confirm behavior when `mStrategy == NCCL_SYMMETRIC` in plugin usage

`enqueue()` treats `runtimeStrategy == NCCL` and `NCCL_SYMMETRIC` equivalently for execution, which is fine. However:

- `supportsFormatCombination()` only treats `NCCL` and `UB` as single‑input (base_inputs = 1); `NCCL_SYMMETRIC` is grouped with the "other strategies" path (base_inputs = 2).
- In the fused NCCL path, `fusion_ptr_idx` now treats `mStrategy == NCCL_SYMMETRIC` as the single‑input layout (fusion_ptr_idx = 1), assuming NCCL‑style inputs.

If a network ever constructs this plugin with `strategy == NCCL_SYMMETRIC` (not just AUTO falling back at runtime), the input layout assumptions between `supportsFormatCombination()` and `enqueue()` may diverge. Either:

- Treat `NCCL_SYMMETRIC` like `NCCL` everywhere layout‑wise (including `supportsFormatCombination`'s `base_inputs`), or
- Explicitly ensure plugin construction never passes `NCCL_SYMMETRIC` as `mStrategy` and keep it as a runtime‑only selection.

Please double‑check intended usage and adjust one side for consistency. The first option is sketched below.
Also applies to: 360-365, 383-391
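If the first option is taken, the alignment could be as small as the following sketch. The strategy names come from the comment above; the enum shown here (including the non-symmetric variants) and the helper wrapper are assumptions:

```cpp
// Assumed enum, mirroring the plugin's AllReduceStrategyType values.
enum class AllReduceStrategyType { NCCL, NCCL_SYMMETRIC, UB, ONESHOT, TWOSHOT };

int baseInputsFor(AllReduceStrategyType strategy)
{
    // Treat NCCL_SYMMETRIC like NCCL/UB: single-input layout, matching the
    // assumption enqueue() already makes for the fused NCCL path.
    return (strategy == AllReduceStrategyType::NCCL
               || strategy == AllReduceStrategyType::NCCL_SYMMETRIC
               || strategy == AllReduceStrategyType::UB)
        ? 1
        : 2;
}
```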
cpp/tensorrt_llm/thop/allreduceOp.cpp (2)
95-149: Check `getLocalGroup`'s use of `LOCAL_COMM_SESSION` in the manual branch

In the `else` branch (when `group.size() < localSize`), this implementation uses `LOCAL_COMM_SESSION.send/recv` with `*group.begin()` and other world‑rank values as the destination/source ranks. In the plugin implementation, the equivalent manual branch uses `COMM_SESSION` for these operations, which matches the fact that `group` is in world‑rank space (see cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp:getLocalGroup). Based on that precedent, using `LOCAL_COMM_SESSION` here may be incorrect on multi‑node setups when `group.size() < localSize`.

Please double‑check that:

- `LOCAL_COMM_SESSION` is defined such that using world ranks as peers is valid in this code path, or
- If not, consider switching these manual send/recv calls back to `COMM_SESSION` to mirror the plugin's behavior and avoid rank mismatches.
ifFallbackToNCCL’s comment says “If messageSize is less than maxWorkspaceSize, use NCCL_SYMMETRIC…”, but the condition ismessage_size_bytes > max_workspace_size || !mIsP2PSupported || !mIsNVLINKSupported, i.e., fallback when the message is larger than the workspace or topology is unsuitable. The comment should reflect the actual predicate.- The final
return AllReduceStrategyType::NCCL_SYMMETRIC;at the end ofselectImplementationis unreachable because all preceding branches already return.I’d suggest updating the comment to match the logic and removing the dead return for clarity. Optionally, renaming
ifFallbackToNCCLto something likeshouldFallbackToNCCLBasedStrategyorshouldUseNcclSymmetricFallbackwould better reflect the new behavior but isn’t strictly necessary.tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py (3)
tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py (3)

46-57: Simplify dynamic access to `create_nccl_window_tensor`

Inside `_create_nccl_window_tensor`, you can drop the nested `getattr` calls with constant names:

```python
func = getattr(getattr(_torch, "ops"), "trtllm").create_nccl_window_tensor
```

and just write:

```python
func = _torch.ops.trtllm.create_nccl_window_tensor
```

You still avoid storing a module‑level reference to `torch.ops` (the function is resolved at call time), but the code is clearer and avoids the Ruff B009 warning.
99-200: Unused `tensor_parallel_rank` parameters in helper tests

Several helpers (`run_window_tensor_creation_test`, `run_window_tensor_multiple_test`, `run_window_tensor_different_shapes_test`, `run_window_tensor_operations_test`) accept `tensor_parallel_rank` but don't use it, since only the AllReduce test needs the rank for `Mapping`.

Given these functions are invoked through a common wrapper signature, the extra parameter is understandable. If you'd like to quiet Ruff's ARG001 warnings without changing call sites, you can rename the parameter to `_tensor_parallel_rank` or add a trivial use like `_ = tensor_parallel_rank` with a comment indicating it is kept for signature consistency.
ARG001warnings without changing call sites, you can rename the parameter to_tensor_parallel_rankor add a trivial use like_ = tensor_parallel_rankwith a comment indicating it is kept for signature consistency.Also applies to: 186-200, 221-236
363-387: Optional: consider `zip(strict=True)` if Python version allows

The MPIPoolExecutor tests build argument lists via patterns like:

```python
results = mpi_pool_executor.map(
    run_single_rank_test,
    *zip(
        *[
            (
                tensor_parallel_size,
                run_window_tensor_creation_test,
                shape,
                dtype_str,
                tensor_parallel_size,
                None,
            )
        ]
        * tensor_parallel_size
    ),
)
```

Because all iterables arise from repeating the same tuple `tensor_parallel_size` times, their lengths are guaranteed equal. To satisfy Ruff's B905 and make mismatches explicit if these patterns evolve, you could add `strict=True` when your minimum supported Python version includes it (3.10+):

```python
*zip(
    *[
        ...
    ]
    * tensor_parallel_size,
    strict=True,
)
```

If Python < 3.10 must remain supported, the current code is logically correct and can be left as is.
Also applies to: 393-420, 425-452, 455-482, 485-512
cpp/tensorrt_llm/common/ncclUtils.h (1)
197-228: `NCCLWindowBuffer` validity semantics differ slightly from `UBBuffer` – confirm intended behavior

Compared to `runtime::ub::UBBuffer`, `NCCLWindowBuffer::isValid()` additionally requires `window != nullptr` (Line 215). This is stricter than `UBBuffer::invalid()` (which ignores the window field) and will cause buffers with a null window to be treated as invalid even if `ptr`/`handle`/`size` are set.

If that's intentional (i.e., a buffer is only usable once it's fully window‑registered), this is fine and the design is clear. If you expect to stage allocations before registration, you may want a separate predicate (e.g., `isAllocated()` vs `isRegistered()`) so callers can distinguish between "no memory" and "registration missing"; a sketch follows.
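A sketch of that split, keeping `isValid()` as the conjunction. The field names are quoted from the comment above; the abbreviated struct shown here is hypothetical:

```cpp
#include <cstddef>

using ncclWindow_t = struct ncclWindow*; // opaque NCCL window handle

// Hypothetical split of NCCLWindowBuffer validity into two predicates so
// callers can tell "no memory" apart from "registration missing".
struct NCCLWindowBuffer
{
    void* ptr{nullptr};
    ncclWindow_t window{nullptr};
    size_t size{0};

    bool isAllocated() const { return ptr != nullptr && size > 0; }
    bool isRegistered() const { return window != nullptr; }
    bool isValid() const { return isAllocated() && isRegistered(); }
};
```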
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- cpp/tensorrt_llm/common/customAllReduceUtils.h (2 hunks)
- cpp/tensorrt_llm/common/ncclUtils.cpp (1 hunks)
- cpp/tensorrt_llm/common/ncclUtils.h (1 hunks)
- cpp/tensorrt_llm/common/opUtils.cpp (2 hunks)
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp (1 hunks)
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.h (0 hunks)
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp (2 hunks)
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp (5 hunks)
- cpp/tensorrt_llm/thop/CMakeLists.txt (1 hunks)
- cpp/tensorrt_llm/thop/allreduceOp.cpp (4 hunks)
- cpp/tensorrt_llm/thop/ncclWindowTensor.cpp (1 hunks)
- cpp/tensorrt_llm/thop/ncclWindowTensor.h (1 hunks)
- cpp/tests/unit_tests/multi_gpu/CMakeLists.txt (1 hunks)
- cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp (1 hunks)
- tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
- tensorrt_llm/functional.py (1 hunks)
- tests/microbenchmarks/all_reduce.py (2 hunks)
- tests/scripts/allreduce_perf/allreduce_heuristic_code_gen.py (2 hunks)
- tests/unittest/_torch/multi_gpu/test_allreduce.py (1 hunks)
- tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py (1 hunks)
💤 Files with no reviewable changes (1)
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.h
🧰 Additional context used
🧠 Learnings (25)
📓 Common learnings
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: tests/unittest/_torch/multi_gpu/test_nccl_device.py:138-149
Timestamp: 2025-10-13T19:45:03.518Z
Learning: In test_nccl_device.py, the NCCL device AllReduce implementation compares the entire residual tensor on each rank, unlike the UB implementation which compares per-rank chunks. The residual chunking calculations in the test are intentionally overridden to reflect this design difference.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7763
File: cpp/tensorrt_llm/CMakeLists.txt:297-301
Timestamp: 2025-09-16T09:30:09.716Z
Learning: In the TensorRT-LLM project, NCCL libraries are loaded earlier by PyTorch libraries or the bindings library, so the main shared library doesn't need NCCL paths in its RPATH - the libraries will already be available in the process address space when needed.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels, the <sstream> header is not needed as an explicit include in config.cu because it's provided transitively through other headers. Local compilation testing confirms this works without the explicit include.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/config.cu), std::ostringstream is used but <sstream> doesn't need to be explicitly included because it's provided transitively through other headers like tensorrt_llm/common/cudaUtils.h or config.h. Local compilation testing confirms this works without the explicit include.
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels, the <sstream> header is not needed as an explicit include in config.cu because it's provided transitively through other headers. Local compilation testing confirms this works without the explicit include.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- cpp/tests/unit_tests/multi_gpu/CMakeLists.txt
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.h
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/config.cu), std::ostringstream is used but <sstream> doesn't need to be explicitly included because it's provided transitively through other headers like tensorrt_llm/common/cudaUtils.h or config.h. Local compilation testing confirms this works without the explicit include.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.h
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- cpp/tests/unit_tests/multi_gpu/CMakeLists.txt
- tests/unittest/_torch/multi_gpu/test_allreduce.py
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.h
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- tests/microbenchmarks/all_reduce.py
- cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-09-16T09:30:09.716Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7763
File: cpp/tensorrt_llm/CMakeLists.txt:297-301
Timestamp: 2025-09-16T09:30:09.716Z
Learning: In the TensorRT-LLM project, NCCL libraries are loaded earlier by PyTorch libraries or the bindings library, so the main shared library doesn't need NCCL paths in its RPATH - the libraries will already be available in the process address space when needed.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- cpp/tests/unit_tests/multi_gpu/CMakeLists.txt
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.h
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-10-13T19:45:03.518Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: tests/unittest/_torch/multi_gpu/test_nccl_device.py:138-149
Timestamp: 2025-10-13T19:45:03.518Z
Learning: In test_nccl_device.py, the NCCL device AllReduce implementation compares the entire residual tensor on each rank, unlike the UB implementation which compares per-rank chunks. The residual chunking calculations in the test are intentionally overridden to reflect this design difference.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- tests/unittest/_torch/multi_gpu/test_allreduce.py
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- tests/microbenchmarks/all_reduce.py
- tests/scripts/allreduce_perf/allreduce_heuristic_code_gen.py
- cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp
- tensorrt_llm/functional.py
- tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.h
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.
Applied to files:
- cpp/tensorrt_llm/thop/CMakeLists.txt
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- tests/unittest/_torch/multi_gpu/test_allreduce.py
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- tests/microbenchmarks/all_reduce.py
- tests/scripts/allreduce_perf/allreduce_heuristic_code_gen.py
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
Applied to files:
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- tests/unittest/_torch/multi_gpu/test_allreduce.py
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
📚 Learning: 2025-09-24T03:31:28.908Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Applied to files:
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/thop/ncclWindowTensor.h
- tensorrt_llm/functional.py
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-26T06:07:02.166Z
Learnt from: shaharmor98
Repo: NVIDIA/TensorRT-LLM PR: 7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.166Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.
Applied to files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-09-22T19:25:45.607Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp:170-179
Timestamp: 2025-09-22T19:25:45.607Z
Learning: In NCCLUserBufferAllocator::getNCCLDevComm(), multimem support is hard-coded to true because multimem is required for this function. The caller is responsible for ensuring multimem is available before calling this function - it should not be called if multimem is not supported.
Applied to files:
- tensorrt_llm/_torch/pyexecutor/model_engine.py
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp
- cpp/tensorrt_llm/thop/allreduceOp.cpp
- cpp/tensorrt_llm/common/ncclUtils.cpp
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.
Applied to files:
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.
Applied to files:
- cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp
- cpp/tensorrt_llm/common/ncclUtils.h
📚 Learning: 2025-08-08T05:10:38.906Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:0-0
Timestamp: 2025-08-08T05:10:38.906Z
Learning: The ScaledAccPerRowBiasPerColScaleScatter fusion in CUTLASS extensions (cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp) is specifically designed for per-column scaling factors only, so it uses a fixed Stride<_0,_1,int64_t> rather than conditional stride logic.
Applied to files:
- cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- cpp/tensorrt_llm/thop/allreduceOp.cpp
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.
Applied to files:
- cpp/tensorrt_llm/common/opUtils.cpp
- cpp/tensorrt_llm/common/customAllReduceUtils.h
- cpp/tensorrt_llm/thop/allreduceOp.cpp
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.
Applied to files:
cpp/tensorrt_llm/common/opUtils.cpp
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Applied to files:
cpp/tensorrt_llm/common/opUtils.cpp
📚 Learning: 2025-08-08T05:06:31.596Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:36-36
Timestamp: 2025-08-08T05:06:31.596Z
Learning: CUTLASS extension files (under cpp/tensorrt_llm/cutlass_extensions/) follow CUTLASS coding style conventions, including using #pragma once instead of TRTLLM_ prefixed header guards, even though they are .hpp files.
Applied to files:
cpp/tensorrt_llm/thop/ncclWindowTensor.h
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/functional.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
Applied to files:
cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.
Applied to files:
cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp
🧬 Code graph analysis (11)
tests/unittest/_torch/multi_gpu/test_allreduce.py (1)
tensorrt_llm/functional.py (1)
AllReduceStrategy(3876-3885)
cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp (2)
cpp/tensorrt_llm/common/customAllReduceUtils.h (1)
getMaxRequiredWorkspaceSize (34-45)

cpp/tensorrt_llm/thop/allreduceOp.cpp (1)
rank(873-984)
cpp/tensorrt_llm/thop/ncclWindowTensor.cpp (1)
cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp (3)
comm (81-108), comm (208-241), comm (536-569)
cpp/tensorrt_llm/common/opUtils.cpp (1)
cpp/tensorrt_llm/common/ncclUtils.cpp (6)
getInstance (34-38), getInstance (34-34), getInstance (122-126), getInstance (122-122), getInstance (244-248), getInstance (244-244)
cpp/tensorrt_llm/thop/ncclWindowTensor.h (1)
cpp/tensorrt_llm/thop/ncclWindowTensor.cpp (2)
create_nccl_window_tensor (23-42), create_nccl_window_tensor (23-24)
tests/microbenchmarks/all_reduce.py (1)
tensorrt_llm/functional.py (1)
AllReduceStrategy(3876-3885)
cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp (2)
cpp/tensorrt_llm/common/opUtils.cpp (2)
getComm (76-147), getComm (76-76)

cpp/tensorrt_llm/common/ncclUtils.cpp (6)

getInstance (34-38), getInstance (34-34), getInstance (122-126), getInstance (122-122), getInstance (244-248), getInstance (244-244)
tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py (4)
tensorrt_llm/_torch/distributed/ops.py (1)
AllReduce (554-710)

tensorrt_llm/functional.py (1)

AllReduceStrategy (3876-3885)

tensorrt_llm/mapping.py (1)

Mapping (351-510)

tests/unittest/conftest.py (1)
mpi_pool_executor(246-254)
cpp/tensorrt_llm/thop/allreduceOp.cpp (2)
cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp (3)
comm (81-108), comm (208-241), comm (536-569)

cpp/tensorrt_llm/common/ncclUtils.cpp (6)

getInstance (34-38), getInstance (34-34), getInstance (122-126), getInstance (122-122), getInstance (244-248), getInstance (244-244)
cpp/tensorrt_llm/common/ncclUtils.cpp (2)
cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp (3)
comm (81-108), comm (208-241), comm (536-569)

cpp/tensorrt_llm/common/ncclUtils.h (3)

NCCLHelper (56-167), isLoaded (72-143), NCCLWindowBuffer (198-300)
cpp/tensorrt_llm/common/ncclUtils.h (2)
cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.h (1)
tensorrt_llm (26-79)

cpp/tensorrt_llm/kernels/userbuffers/userbuffersManager.cpp (4)

comm (102-105), comm (102-102), ptr (22-25), ptr (22-22)
🪛 Clang (14.0.6)
cpp/tensorrt_llm/thop/ncclWindowTensor.h
[error] 18-18: 'tensorrt_llm/common/ncclUtils.h' file not found
(clang-diagnostic-error)
cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp
[error] 17-17: 'tensorrt_llm/common/ncclUtils.h' file not found
(clang-diagnostic-error)
cpp/tensorrt_llm/common/ncclUtils.cpp
[error] 17-17: 'tensorrt_llm/common/ncclUtils.h' file not found
(clang-diagnostic-error)
cpp/tensorrt_llm/common/ncclUtils.h
[error] 18-18: 'tensorrt_llm/common/assert.h' file not found
(clang-diagnostic-error)
🪛 Ruff (0.14.5)
tests/unittest/_torch/multi_gpu/test_ncclwindowtensor.py
56-56: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
56-56: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
104-104: Unused function argument: tensor_parallel_rank
(ARG001)
145-145: Unused function argument: tensor_parallel_rank
(ARG001)
161-161: Loop control variable i not used within loop body
(B007)
190-190: Unused function argument: tensor_parallel_rank
(ARG001)
226-226: Unused function argument: tensor_parallel_rank
(ARG001)
375-387: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
406-419: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
437-449: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
467-479: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
497-509: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (14)
cpp/tensorrt_llm/common/customAllReduceUtils.h (1)
146-155: LUT bounds-check fallback to `NCCL_SYMMETRIC` looks consistent

The extended bounds check before indexing `AllReduceBestStrategyTable` and falling back to `AllReduceStrategyType::NCCL_SYMMETRIC` for out-of-range entries is safe and aligns with the PR's new default symmetric strategy. The short-circuiting `||` chain correctly avoids any `at()` calls when `sm_version` is missing.

tests/scripts/allreduce_perf/allreduce_heuristic_code_gen.py (1)
29-34: Enum mapping and default LUT initialization align with NCCL_SYMMETRIC fallback

Adding `'NCCL_SYMMETRIC': 8` to `strategy_name_to_enum` and initializing `strategy_table` with `Constants.strategy_name_to_enum['NCCL_SYMMETRIC']` keeps the Python generator consistent with the `AllReduceStrategy` IntEnum (where `NCCL_SYMMETRIC == 8`) and with the C++ fallback you added. When regenerating LUTs, missing or filtered entries will now correctly default to NCCL_SYMMETRIC.

Also applies to: 88-92
tests/unittest/_torch/multi_gpu/test_allreduce.py (1)
114-131: Test now targets NCCL_SYMMETRIC path as intended

Switching `Linear(..., allreduce_strategy=AllReduceStrategy.NCCL_SYMMETRIC)` ensures the fusion tests exercise the new default NCCL_SYMMETRIC strategy instead of the legacy NCCL path, matching the rest of the PR's behavior change. No issues spotted.

tests/microbenchmarks/all_reduce.py (1)
243-249: Expanded pandas display options are fine for this CLI benchmark

The additional `pd.set_option` calls to show all columns and avoid wrapping are appropriate for human inspection of the results on rank 0 and remain scoped to this script.

cpp/tensorrt_llm/thop/CMakeLists.txt (1)
37-107: `ncclWindowTensor.cpp` correctly wired into `th_common`

Adding `ncclWindowTensor.cpp` to the `th_common` sources is the right place for exposing the new Torch op alongside the other thop kernels.

cpp/tests/unit_tests/multi_gpu/CMakeLists.txt (1)
17-24: New `ncclUtilsTest` target wiring looks correct

The new `ncclUtilsTest` gtest target and its conditional linkage to Python and Torch libraries under `BUILD_PYT` are consistent with the existing multi-GPU test setup.

cpp/tensorrt_llm/kernels/userbuffers/ub_allocator.cpp (1)
23-27: Simplified `UserBufferAllocator::Instance()` singleton is appropriate

Returning a single function-local static `UserBufferAllocator` instance matches the removal of the NCCL-specific allocator path and keeps the API straightforward and thread-safe.

cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp (1)
215-240: Fallback behavior updated cleanly to NCCL_SYMMETRIC

The selection heuristics and logging for non‑P2P, non‑NVLINK, and oversized workspace cases now consistently fall back to `AllReduceStrategyType::NCCL_SYMMETRIC`, with appropriate deterministic‑mode warnings. This looks correct and matches the high‑level goal of changing the default fallback from NCCL to NCCL_SYMMETRIC.

Also applies to: 272-316
cpp/tensorrt_llm/thop/allreduceOp.cpp (2)
18-24: Verify header wiring for `ncclUtils.h`

Including "tensorrt_llm/common/ncclUtils.h" here is expected given the new NCCL window utilities, but clang static analysis is reporting `file not found`. Please confirm that:

- `ncclUtils.h` is added to the relevant target's include paths, and
- The CMake target that builds this file depends on the library exposing that header.
This may just be a stale compile database, but it’s worth checking the build configuration.
270-305: Runtime strategy dispatch including NCCL_SYMMETRIC looks consistent

The `run()` method's switch now cleanly dispatches `AllReduceStrategyType::NCCL_SYMMETRIC` to `runNCCLAllReduceSymmetric`, leaving other strategies unchanged. This aligns with the updated heuristic selection and keeps UB/LOWPRECISION/fusion paths intact.
41-72: Good coverage of communicator resource lifecycle and window allocator behaviorThe
createSplitCommhelper and associated tests do a nice job of:
- Mirroring
getComm’s deleter pattern (resource cleanup viaNcclCommResourceManagerbeforencclCommDestroy),- Verifying registration, ordering, and count semantics of
NcclCommResourceManager, and- Exercising
NCCLWindowAllocatoracross basic allocation, reuse, best‑fit selection, scoped buffers, multi‑comm isolation, and cleanup on comm destruction, plus PyTorchcreateNCCLWindowTensorintegration when enabled.This should give good confidence that the new NCCL utility layer behaves correctly across realistic communicator lifecycles.
Also applies to: 121-199, 205-527
cpp/tensorrt_llm/common/ncclUtils.cpp (1)
244-293: NCCL window allocator implementation and cleanup logic look sound post‑fix

Aside from the symbol‑loading issues above, the core of `NCCLWindowAllocator` looks solid:

- `requestBuffer`:
  - Uses a per‑comm best‑fit search to reuse the smallest adequate buffer.
  - Registers a per‑comm cleanup callback only once via `NcclCommResourceManager`.
- Pool bookkeeping: `getBufferCount` and `getBufferInUseCount` correctly reflect pool vs in‑use counts. `releaseBuffer` and `ScopedNCCLWindowBuffer` cooperate to return buffers to the pool; a usage sketch follows below.
- Cleanup: `cleanupBuffersForComm` synchronizes the device, deregisters all windows via `ncclCommWindowDeregister`, and frees device memory via `ncclMemFree`, with warnings logged on errors, and then erases all state for that comm.

Given the unit tests in cpp/tests/unit_tests/multi_gpu/ncclUtilsTest.cpp, this design should behave correctly under comm teardown and buffer reuse once the symbol‑loading guardrails are in place.

Also applies to: 447-565
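The scoped wrapper mentioned above suggests a RAII usage pattern along these lines. Only the class name `ScopedNCCLWindowBuffer` comes from the review; the constructor signature is an assumption:

```cpp
#include "tensorrt_llm/common/ncclUtils.h"

void allReduceScoped(tensorrt_llm::common::NCCLWindowAllocator& allocator,
                     ncclComm_t comm, size_t sizeBytes)
{
    // Constructor below is assumed; the destructor is assumed to call
    // releaseBuffer() so the buffer returns to the pool on scope exit.
    tensorrt_llm::common::ScopedNCCLWindowBuffer scoped(allocator, comm, sizeBytes);
    // ... run the collective using the scoped buffer ...
} // buffer marked reusable here; the window stays registered until the
  // comm itself is destroyed
```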
cpp/tensorrt_llm/common/ncclUtils.h (2)
54-96: Dynamic NCCL helper interface looks good and aligns with runtime symbol loading design

The `NCCLHelper` interface (singleton, typed function pointers for `ncclCommWindowRegister` and `ncclMemAlloc`, `isLoaded()` query) matches the project's pattern of deferring NCCL feature detection to runtime, and it fits well with the existing "NCCL library already loaded by PyTorch/bindings" assumption. The header side looks clean and self‑contained; correctness will hinge on the `.cpp` implementation of `loadNCCLLibrary` and symbol resolution.
232-300: Window allocator cleanup robustness confirmed—no action required

The implementation correctly handles communicator reuse. The `cleanupBuffersForComm` function erases the communicator from both `mBufferPool` (line 562) and `mRegisteredComms` (line 563), ensuring that when NCCL reuses an `ncclComm_t` address, the old cleanup records are properly removed. The `registerBufferCleanup` guard check at line 450 prevents duplicate registrations for new communicators at reused addresses; a sketch of this teardown pattern follows.
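A compressed, standalone illustration of that reuse-safe teardown. The map names mirror `mBufferPool`/`mRegisteredComms` from the review; everything else here (types, keys, elided loop body) is assumed:

```cpp
#include <functional>
#include <map>
#include <vector>

// Standalone sketch of the cleanupBuffersForComm() pattern described above.
struct WindowAllocatorState
{
    std::map<void*, std::vector<void*>> bufferPool; // comm -> pooled buffers
    std::map<void*, bool> registeredComms;          // comm -> cleanup hooked

    void cleanupBuffersForComm(void* comm)
    {
        // ... synchronize, deregister each window, free device memory,
        //     logging warnings on errors (as the review describes) ...

        bufferPool.erase(comm);      // drop pooled buffers for this comm
        registeredComms.erase(comm); // so a reused comm address can register
                                     // a fresh cleanup callback later
    }
};
```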
Force-pushed from 15373a9 to 429dcaa
/bot run --disable-fail-fast

PR_Github #25764 [ run ] triggered by Bot. Commit:

/bot run --disable-fail-fast

PR_Github #25772 [ run ] triggered by Bot. Commit:

PR_Github #25764 [ run ] completed with state

PR_Github #25772 [ run ] completed with state

/bot run --disable-fail-fast

1 similar comment

/bot run --disable-fail-fast
Force-pushed from 81e4909 to 937a373
PR_Github #25786 [ run ] triggered by Bot. Commit:

PR_Github #25786 [ run ] completed with state

/bot run --disable-fail-fast --reuse-test

/bot help
GitHub Bot Help

Provide a user friendly way for developers to interact with a Jenkins server. See details below for each supported subcommand.

run
Launch build/test pipelines. All previously running jobs will be killed.

kill
Kill all running builds associated with pull request.

skip
Skip testing for latest commit on pull request.

reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
/bot run --disable-fail-fast

1 similar comment

/bot run --disable-fail-fast

PR_Github #25871 [ run ] triggered by Bot. Commit:

Force-pushed from 581621c to f4a0f84
Force-pushed from 3d13eac to 38a9c5d
Rebasing and rerunning CI, since the failing test is now waived.

/bot run --disable-fail-fast

PR_Github #26959 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #27012 [ run ] triggered by Bot. Commit:

PR_Github #27012 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #27064 [ run ] triggered by Bot. Commit:

PR_Github #27064 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #27157 [ run ] triggered by Bot. Commit:

PR_Github #27157 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #27212 [ run ] triggered by Bot. Commit:

PR_Github #27212 [ run ] completed with state
…VIDIA#8779) The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning. Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429) Signed-off-by: qqiao <qqiao@nvidia.com>
[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[None][ci] Move more test stages to use OCI machines (NVIDIA#9395) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Matt Lefebvre <matthewelefebvre@gmail.com>
[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377) Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093) Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash. This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens. Signed-off-by: eopXD <yuehtingc@nvidia.com>
[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426) Signed-off-by: mni <125171826+baize97@users.noreply.github.com>
[None][fix] Mitigate test timeout issues (NVIDIA#9445) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440) Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
[None][feat] Support custom chat template for tool calling (NVIDIA#9297) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586) Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>
[None][ci] waive a test (NVIDIA#9458) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430) Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438) Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502) Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[None][chore] update comments in llm_args.py (NVIDIA#9472) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473) Signed-off-by: Hui Gao <huig@nvidia.com>
[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294) Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505) Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507) Signed-off-by: qqiao <qqiao@nvidia.com>
[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488) Signed-off-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com> Co-authored-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com>
[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> Co-authored-by: Shi Xiaowei <39303645+Shixiaowei02@users.noreply.github.com>
[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405) Signed-off-by: qqiao <qqiao@nvidia.com> Signed-off-by: Emma Qiao <qqiao@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346) Signed-off-by: xxi <xxi@nvidia.com>
[None][infra] Fix Slurm job script (NVIDIA#9508) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330) Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>
[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518) Signed-off-by: eopXD <yuehtingc@nvidia.com>
[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
[None] [chore] Update to cutlass 4.3 (NVIDIA#8637) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com> Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861) Signed-off-by: qqiao <qqiao@nvidia.com> Signed-off-by: Emma Qiao <qqiao@nvidia.com>
[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>
[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533) Signed-off-by: Lanyu Liao <lancelly@users.noreply.github.com> Co-authored-by: Lanyu Liao <lancelly@users.noreply.github.com>
[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539) Signed-off-by: qqiao <qqiao@nvidia.com>
[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477) Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com> Signed-off-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537) Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494) Signed-off-by: Matthias
Jouanneaux <mjoux@nvidia.com> Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com> Co-authored-by: Zheyu Fu <zheyuf@nvidia.com> [None][feat] support for more accurate AR calculation (NVIDIA#9323) Signed-off-by: binghanc <176802681+binghanc@users.noreply.github.com> [TRTLLM-9488][fix] llmapi references (NVIDIA#9547) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143) Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522) Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com> Signed-off-by: qgai <qgai@nvidia.com> Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> Signed-off-by: Simeng Liu <simengl@nvidia.com> Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com> Signed-off-by: Vincent Zhang <vinczhang@nvidia.com> Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com> Signed-off-by: Michal Guzek <mguzek@nvidia.com> Signed-off-by: Michal Guzek <moraxu@users.noreply.github.com> Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> Signed-off-by: leslie-fang25 <leslief@nvidia.com> Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co> Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Co-authored-by: yunruis <205571022+yunruis@users.noreply.github.com> Co-authored-by: sunnyqgg <159101675+sunnyqgg@users.noreply.github.com> Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com> Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Co-authored-by: JunyiXu-nv <219237550+JunyiXu-nv@users.noreply.github.com> Co-authored-by: Simeng Liu <109828133+SimengLiu-nv@users.noreply.github.com> Co-authored-by: Guoming Zhang <137257613+nv-guomingz@users.noreply.github.com> Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com> Co-authored-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com> Co-authored-by: Vincent Zhang <vcheungyi@163.com> Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com> Co-authored-by: Michal Guzek <moraxu@users.noreply.github.com> Co-authored-by: Chang Liu <9713593+chang-l@users.noreply.github.com> Co-authored-by: Leslie Fang <leslief@nvidia.com> Co-authored-by: Shunkangz <182541032+Shunkangz@users.noreply.github.com> Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.co> Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com> [TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][infra] - Request idle time exemption for OCI jobs 
(NVIDIA#9528) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [None][infra] Wiave failed tests for main branch on 11/30 (NVIDIA#9555) Signed-off-by: qqiao <qqiao@nvidia.com> [None][fix] Fix port conflict in disagg tests (NVIDIA#9474) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486) [None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333) Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com> [https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com> [None][fix] Correct virtual memory allocation alignment (NVIDIA#9491) Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398) Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com> [None][chore] remove qwen3-next accuracy tests (NVIDIA#9534) Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com> [None][doc] fix mtp.py typo (NVIDIA#9307) Signed-off-by: liugaoji <757394026@qq.com> [None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com> [NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497) Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com> [None][fix] Replace hash method with unique_id for cutedsl MoE runners. 
(NVIDIA#9569) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com> [TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428) Signed-off-by: Yuhan Li <51736452+liyuhannnnn@users.noreply.github.com> [None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077) Signed-off-by: Martin Marciniszyn Mehringer <11665257+MartinMarciniszyn@users.noreply.github.com> [https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140) [None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [None][test] [None][test] Waive main branch test failures 12/1 (NVIDIA#9566) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [None][ci] Minor change for Slurm scripts (NVIDIA#9561) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][infra] Update the pytest options after MI (NVIDIA#9579) Signed-off-by: qqiao <qqiao@nvidia.com> [TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509) Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com> [None][chore] Defer exposing context parallel configs (NVIDIA#9552) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104) Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com> [None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> [NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [None][fix] Prevent YAML partial kv_cache_config from incorrectly overriding the complete kv_cache_config (NVIDIA#9262) Signed-off-by: Yuening Li <62227368+Yuening-wa@users.noreply.github.com> [TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> [None][feat] Unify nvfp4 gemm backend (NVIDIA#8963) Signed-off-by: Shijie Wang <jaywan@nvidia.com> Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> Signed-off-by: Shijie <jaywan@nvidia.com> Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][chroe] Polish qwen3-next modeling code. 
(NVIDIA#8902) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][fix] Waive gb200 (NVIDIA#9580) Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com> [FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> [https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9588) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com> [None][infra] Waive failed cases for main branch (NVIDIA#9615) Signed-off-by: qqiao <qqiao@nvidia.com> [TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [None][infra] Update allowlist 2025/12/01 (NVIDIA#9616) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com> [None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620) Signed-off-by: qqiao <qqiao@nvidia.com> Lock the gpu clocks in L0 perf tests (NVIDIA#9585) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][fix] Extract GPU count from single-node stage names (NVIDIA#9599) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> [https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711) Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com> [https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056) Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com> Signed-off-by: Thor Johnsen <41591019+thorjohnsen@users.noreply.github.com> Co-authored-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889) Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [NVIDIA#9150][feat] Add code for nano v3 to custom implementation in AD (NVIDIA#9465) * Why? We would like to show an alternative to monkey-patching in AutoDeploy. * What? This commit builds on the existing custom model implementation for NemotronH and adds the bits relevant for MoE layers. Part of NVIDIA#9150. 
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com> [NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com> [NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels (NVIDIA#9551) Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com> [https://nvbugs/5688388][fix] fix: Reducing num request in disagg test to speed up (NVIDIA#9598) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com> [TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com> Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633) Signed-off-by: Yu Chi Li <yuchil@nvidia.com> [None][chore] Waive test failing on pre-merge (NVIDIA#9638) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> [None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607) Signed-off-by: Anurag Mukkara <134339030+amukkara@users.noreply.github.com> [TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> [None][test] Remove duplicate test cases (NVIDIA#9623) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572) Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com> [TRTLLM-9242][doc] Add examples showcasing openai compatible APIs (NVIDIA#9520) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][chore] AutoDeploy update cuda stream manager for multi-device (NVIDIA#9575) Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com> [TRTLLM-9391][chore] Automatically estimate required workspace. 
(NVIDIA#9535) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com> [https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [TRTLLM-8241][feat] Aliasing to comply to LlmArgs (NVIDIA#9586) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9593) Signed-off-by: Jie Li <lijie@nvidia.com> Co-authored-by: Jie Li <lijie@nvidia.com> [TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][feat] update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510) Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com> [None][doc] Replace the tensorrt icon with torch icon on overview.md (NVIDIA#9644) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com> [https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253) Signed-off-by: Michal Guzek <mguzek@nvidia.com> [None][fix] Fix wide ep MoE error (NVIDIA#9642) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> [https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483) Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com> [OMNIML-2932] [feat] nvfp4 awq support (NVIDIA#8698) Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com> [NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275) Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com> [None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> [TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669) [TRTLLM-5093][infra] Write env variables to a file in the interactive debug session (NVIDIA#6792) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381) Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn> [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667) Signed-off-by: 
Tailing Yuan <yuantailing@gmail.com> [TRTLLM-8274][feat] Check if executor is shutdown in /health entrypoint (NVIDIA#9057) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556) Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com> Signed-off-by: tcherckez-nvidia <127761168+tcherckez-nvidia@users.noreply.github.com> Co-authored-by: Neta Zmora <nzmora@nvidia.com> [None][ci] unwaive tests (NVIDIA#9651) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> [None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225) Signed-off-by: Yoray Zack <62789610+zackyoray@users.noreply.github.com> Signed-off-by: zackyoray <yorayz@nvidia.com> [None][test] rename wide ep and disagg metric name in perf test (NVIDIA#9704) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> [https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562) Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9662) Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> [None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com> [None][infra] Add container notices and documentation (NVIDIA#9185) Signed-off-by: Parker Drake <pdrake@nvidia.com> [TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][doc] Add feature docs for helix parallelism (NVIDIA#9684) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][doc] Added line about partial reuse (NVIDIA#7846) Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com> [TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [TRTLLM-9562] [doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641) Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None] [tests] Unwaive EPLB tests (NVIDIA#9625) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com> [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583) Signed-off-by: 
Robin Kobus <19427718+Funatiq@users.noreply.github.com> [None][refactor] Improve request processing function in sampler (NVIDIA#9671) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676) Signed-off-by: jthomson04 <jwillthomson19@gmail.com> [None][infra] Update allowed list 20251204 (NVIDIA#9718) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com> [None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> [None][chore] Waive flakey disagg tests (NVIDIA#9749) Signed-off-by: Mike Iovine <miovine@nvidia.com> [https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325) Signed-off-by: jthomson04 <jwillthomson19@gmail.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][infra] Updated Linux installation guide (NVIDIA#9485) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][fix] Fix triton moe load_weight 
(NVIDIA#9649) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com> [None][fix] fix a bug: deepseek_fp8_block_scales in TRTLLMGEN-MoE use 2D x_sf instead of 1D (NVIDIA#9658) Signed-off-by: xxi <xxi@nvidia.com> [TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682) Signed-off-by: Jonas Li <6110159+longlee0622@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [None][fix] enable hmac in RPC (NVIDIA#9745) Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769) Signed-off-by: qqiao <qqiao@nvidia.com> [None][fix] Several minor fixes to CI setting (NVIDIA#9765) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679) Signed-off-by: Chenjie Luo <chenjiel@nvidia.com> [None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314) Signed-off-by: Ludwig Schneider <lschneider@nvidia.com> [TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800) Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com> [None][test] add ntp tolerance in time metrics verification (NVIDIA#9741) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> [TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645) [https://nvbugs/5422621][test] Add GB 200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [None][fix] Fix two tuning cache miss issues. (NVIDIA#9743) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [TRTLLM-9706] [doc] Update wide EP documents (NVIDIA#9724) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com> Co-authored-by: Larry Xu <197874197+LarryXFly@users.noreply.github.com> [None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773) Signed-off-by: qqiao <qqiao@nvidia.com> [None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com> [None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. 
(NVIDIA#9690) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com> [TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][chore] Remove closed bugs (NVIDIA#9770) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [None][infra] update mooncake in docker images (NVIDIA#9584) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com> [None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [http://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661) Signed-off-by: qgai <qgai@nvidia.com> ray + updatew works trtllm works in async env trtllm works in sync and async env ray + updatew works rebase to the updated verl server mode still cherry pick still cherry pick still cherry pick integrated http interface hang at RyExecutor create workers ray.remote clean code use tensorrt_llm.rlhf_utils Signed-off-by: Liwei Ma <liweim@nvidia.com> placement, asyncllm, and basic tests Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> connect sleep and wakeup; Add support to pass None to update_weights Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Batching ctx for IFB scheduler Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix e2e integration Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> update asyncllm, other nits Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix init setup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Fix TRTLLMSampler logprobs perf Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> fix and cleanup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix server Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Revert "Batching ctx for IFB scheduler" This reverts commit b51aac0 Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> update & address comments Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Description
Background and Motivation
Currently, when AllReduce operations encounter conditions that prevent the use of optimized strategies (e.g., large message sizes, missing P2P support, or out-of-bounds lookup table entries), the system falls back to the `NCCL` strategy. This PR introduces a new NCCL window tensor infrastructure and changes all default fallback paths to use `NCCL_SYMMETRIC` instead, which provides better performance characteristics through window tensor registration and improved buffer management.

The `NCCL_SYMMETRIC` strategy leverages NCCL window tensors for efficient buffer reuse and registration, making it a more suitable default fallback than the basic `NCCL` strategy. This PR includes the infrastructure necessary to support window tensor operations.

Summary of Changes
This PR consists of two main components:
1. New NCCL Window Tensor Infrastructure
Introduces a new NCCL utilities system (`ncclUtils.cpp/h`) that provides:

- `NCCLWindowAllocator`: Manages NCCL window-registered buffers with pooling and automatic cleanup. Buffers are tied to the lifetime of their associated NCCL communicator, enabling efficient buffer reuse across multiple AllReduce operations.
- `NcclCommResourceManager`: Thread-safe singleton that manages resources associated with NCCL communicators. Ensures proper cleanup of window buffers and other resources before communicator destruction.
- `NCCLHelper`: Dynamic library loading for NCCL symbols (`ncclCommWindowRegister`, `ncclMemAlloc`), allowing graceful handling of NCCL versions with or without window support.
- `createNCCLWindowTensor`: Helper function to create PyTorch tensors backed by NCCL window-registered buffers.

This infrastructure decouples the NCCL window allocation from the UB tensor allocation mechanism. A minimal usage sketch follows.
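To make the pooling flow concrete, here is a minimal C++ sketch of driving an AllReduce through a pooled window buffer. The singleton accessor (`instance()`), the `NCCLWindowBuffer::ptr` field, and the staging copies are assumptions drawn from the description and diagrams above, not the verbatim API:

```cpp
// Illustrative sketch only -- names follow this PR description
// (requestBuffer/releaseBuffer, NCCLWindowBuffer); real signatures may differ.
// #include "tensorrt_llm/common/ncclUtils.h"  // new header from this PR
#include <cuda_runtime.h>
#include <nccl.h>

void allReduceViaWindowPool(ncclComm_t comm, float* deviceData, size_t count, cudaStream_t stream)
{
    auto& alloc = tensorrt_llm::common::NCCLWindowAllocator::instance(); // assumed accessor

    // Borrow a window-registered buffer; the pool reuses a free buffer of
    // sufficient size, or allocates and ncclCommWindowRegister()s a new one.
    auto window = alloc.requestBuffer(comm, count * sizeof(float));

    // Stage input into the registered region and run the AllReduce in place.
    cudaMemcpyAsync(window.ptr, deviceData, count * sizeof(float),
                    cudaMemcpyDeviceToDevice, stream);
    ncclAllReduce(window.ptr, window.ptr, count, ncclFloat, ncclSum, comm, stream);
    cudaMemcpyAsync(deviceData, window.ptr, count * sizeof(float),
                    cudaMemcpyDeviceToDevice, stream);

    // Return the buffer to the pool; it stays registered until the communicator
    // is destroyed, at which point NcclCommResourceManager runs the cleanups.
    alloc.releaseBuffer(comm, window.ptr);
}
```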
2. Default Fallback Strategy Changes
Systematically updates all fallback paths in the AllReduce strategy selection logic to use `NCCL_SYMMETRIC` instead of `NCCL`:

- Op and plugin (`allreduceOp.cpp`, `allreducePlugin.cpp`): Updated `selectImplementation()` methods to return `NCCL_SYMMETRIC` for all fallback conditions (see the sketch below); `runNCCLAllReduceSymmetric()` now uses the new `NCCLWindowAllocator` instead of the UB allocator.
- Heuristics (`customAllReduceUtils.h`): Updated lookup table fallback and `SelectStrategyLP()` to return `NCCL_SYMMETRIC`.
- Python (`functional.py`): Updated workspace fallback logic to use `NCCL_SYMMETRIC`.
- Code generation (`allreduce_heuristic_code_gen.py`): Updated default lookup table initialization to use `NCCL_SYMMETRIC`.
- New C++ unit tests (`ncclUtilsTest.cpp`) for the new infrastructure: `NCCLWindowAllocator` buffer allocation, reuse, and cleanup; `NcclCommResourceManager` resource registration and cleanup.
- New Python tests (`test_window_tensor.py`); coverage is enumerated under Test Coverage below.
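The recurring pattern across these call sites is small: fallback branches that used to return `NCCL` now return `NCCL_SYMMETRIC`. A simplified, self-contained illustration of that pattern follows; this is not the actual diff, and the real `selectImplementation()` weighs more conditions (fusion op, topology, dtype, ...):

```cpp
// Simplified illustration of the fallback change, with a minimal local enum.
#include <cstddef>

enum class AllReduceStrategyType { NCCL, NCCL_SYMMETRIC, ONESHOT, TWOSHOT };

AllReduceStrategyType selectImplementation(size_t messageSizeBytes, bool p2pSupported,
                                           size_t maxWorkspaceBytes)
{
    if (!p2pSupported)
    {
        return AllReduceStrategyType::NCCL_SYMMETRIC; // previously: NCCL
    }
    if (messageSizeBytes > maxWorkspaceBytes)
    {
        return AllReduceStrategyType::NCCL_SYMMETRIC; // previously: NCCL
    }
    // In-range messages still consult the tuned heuristics / lookup table.
    return messageSizeBytes <= (1u << 20) ? AllReduceStrategyType::ONESHOT
                                          : AllReduceStrategyType::TWOSHOT; // illustrative cutoff
}
```

Files Changed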
New Infrastructure
- `cpp/tensorrt_llm/common/ncclUtils.h` - NEW header for NCCL utilities (window allocator, resource manager, helper)
- `cpp/tensorrt_llm/common/ncclUtils.cpp` - NEW implementation of NCCL utilities
- `cpp/tensorrt_llm/thop/ncclWindowTensor.h` - NEW header for PyTorch window tensor creation
- `cpp/tensorrt_llm/thop/ncclWindowTensor.cpp` - NEW implementation of window tensor creation

Core Implementation Updates
- `cpp/tensorrt_llm/thop/allreduceOp.cpp` - Updated fallback returns, integrated `NCCLWindowAllocator`, replaced UB allocator
- `cpp/tensorrt_llm/plugins/ncclPlugin/allreducePlugin.cpp` - Updated plugin fallback logic and logging
- `cpp/tensorrt_llm/common/customAllReduceUtils.h` - Updated lookup table fallback
- `tensorrt_llm/functional.py` - Updated Python fallback logic
- `tests/scripts/allreduce_perf/allreduce_heuristic_code_gen.py` - Updated default initialization

Performance Impact
- `NCCL_SYMMETRIC` uses window tensor registration for better buffer reuse, potentially improving performance for repeated AllReduce operations
- `NCCLWindowAllocator` provides efficient buffer pooling with a best-fit allocation strategy (see the sketch after this list)
- Risk is low compared with the previous `NCCL` fallback: `NCCL_SYMMETRIC` is already a tested and supported strategy, and the new infrastructure includes comprehensive tests
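As a sketch of the best-fit idea, assuming a per-communicator pool of `(ptr, size, inUse)` records (the real `NCCLWindowAllocator` bookkeeping may differ): pick the smallest free buffer that still satisfies the request, so large registrations are not burned on small messages.

```cpp
// Hypothetical best-fit search over a communicator's buffer pool.
#include <cstddef>
#include <vector>

struct PooledBuffer
{
    void* ptr;
    size_t size;
    bool inUse;
};

PooledBuffer* bestFit(std::vector<PooledBuffer>& pool, size_t requested)
{
    PooledBuffer* best = nullptr;
    for (auto& buf : pool)
    {
        // Free, large enough, and smaller than the best candidate so far.
        if (!buf.inUse && buf.size >= requested && (best == nullptr || buf.size < best->size))
        {
            best = &buf;
        }
    }
    return best; // nullptr => caller allocates and registers a fresh buffer
}
```

Functional Impact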
- Default fallback strategy changes from `NCCL` to `NCCL_SYMMETRIC`
- `NCCL_SYMMETRIC` now uses NCCL window tensors instead of the UserBuffers allocator
- Window registration behavior is configurable (`TLLM_NCCL_MIN_REGISTRATION` env var; sketched below)
- Users who explicitly select the `NCCL` strategy are unaffected; only AUTO mode and fallback paths are changed
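For illustration only, one plausible way such an env-var threshold gets consulted; the default value and the exact gating semantics below are assumptions, not taken from the PR:

```cpp
// Hypothetical: gate window registration on a minimum message size read from
// TLLM_NCCL_MIN_REGISTRATION. Actual default and semantics live in the PR.
#include <cstddef>
#include <cstdlib>

size_t minRegistrationBytes()
{
    static size_t const threshold = []
    {
        char const* env = std::getenv("TLLM_NCCL_MIN_REGISTRATION");
        return env != nullptr ? static_cast<size_t>(std::strtoull(env, nullptr, 10))
                              : size_t{0}; // assumed default: always register
    }();
    return threshold;
}

bool shouldUseWindowPath(size_t messageBytes)
{
    return messageBytes >= minRegistrationBytes();
}
```

Test Coverage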
C++ Unit Tests (`ncclUtilsTest.cpp`)

- `NCCLWindowAllocatorTests`:
  - `BasicAllocation` - Verifies buffer allocation and registration
  - `BufferReuse` - Tests buffer pooling and reuse
  - `BestFitReuse` - Verifies best-fit allocation strategy
  - `MultipleBuffers` - Tests concurrent buffer management
  - `BufferSearch` - Verifies buffer lookup functionality
  - `CleanupOnCommDestroy` - Ensures proper cleanup when communicator is destroyed
  - `MultiCommIsolation` - Verifies buffers are isolated per communicator
- `NcclCommResourceManagerTests`:
  - `ResourceRegistration` - Tests resource registration and counting
  - `ResourceCleanup` - Verifies cleanup order and exception handling
  - `ThreadSafety` - Tests concurrent resource registration
  - `MultiCommResources` - Tests resource isolation per communicator

A sketch of a reuse-style check appears below.
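For flavor, a hedged gtest sketch of a `BufferReuse`-style assertion; the communicator helper and the singleton accessor are hypothetical:

```cpp
// Sketch of a BufferReuse-style check (helper names assumed): releasing and
// re-requesting a same-size buffer should hand back the pooled allocation.
#include <gtest/gtest.h>
#include <nccl.h>

TEST(NCCLWindowAllocatorTests, BufferReuseSketch)
{
    ncclComm_t comm = makeSingleRankComm(); // hypothetical test helper

    auto& alloc = tensorrt_llm::common::NCCLWindowAllocator::instance(); // assumed accessor
    auto first = alloc.requestBuffer(comm, 1 << 20);
    void* firstPtr = first.ptr;
    alloc.releaseBuffer(comm, firstPtr);

    // A same-size request should be served from the pool, not a new allocation.
    auto second = alloc.requestBuffer(comm, 1 << 20);
    EXPECT_EQ(firstPtr, second.ptr);
    alloc.releaseBuffer(comm, second.ptr);

    ncclCommDestroy(comm); // triggers NcclCommResourceManager cleanup
}
```

Existing Tests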
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.