
[TRTLLM-10030][perf] pin host memory and batch sampler setup in beam search #11390

Merged
ixlmar merged 1 commit into NVIDIA:main from ixlmar:perf/beam-search-4
Feb 10, 2026
Conversation

@ixlmar
Collaborator

@ixlmar ixlmar commented Feb 9, 2026

Description

Addresses minor issues spotted during profiling.

For short output sequence lengths, the TorchSampler beam search is 43% faster than the TrtllmSampler. Performance is largely unchanged, but the change may unlock future optimization potential.

Test Coverage

n/a

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.

Summary by CodeRabbit

  • Refactor
    • Optimized beam search execution with improved memory management and batched buffer initialization for enhanced performance and efficiency.

@ixlmar
Collaborator Author

ixlmar commented Feb 9, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35353 [ run ] triggered by Bot. Commit: 974017d

@tensorrt-cicd
Collaborator

PR_Github #35353 [ run ] completed with state FAILURE. Commit: 974017d
/LLM/main/L0_MergeRequest_PR pipeline #27302 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@ixlmar
Collaborator Author

ixlmar commented Feb 9, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35354 [ run ] triggered by Bot. Commit: 974017d

@tensorrt-cicd
Collaborator

PR_Github #35354 [ run ] completed with state FAILURE. Commit: 974017d

@ixlmar
Collaborator Author

ixlmar commented Feb 9, 2026

/bot run

2 similar comments
@ixlmar
Collaborator Author

ixlmar commented Feb 9, 2026

/bot run

@ixlmar
Collaborator Author

ixlmar commented Feb 9, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35357 [ run ] triggered by Bot. Commit: 974017d

@tensorrt-cicd
Collaborator

PR_Github #35357 [ run ] completed with state FAILURE. Commit: 974017d
/LLM/main/L0_MergeRequest_PR pipeline #27306 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
@ixlmar ixlmar force-pushed the perf/beam-search-4 branch from 974017d to a40b26b Compare February 10, 2026 09:54
@ixlmar
Collaborator Author

ixlmar commented Feb 10, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35488 [ run ] triggered by Bot. Commit: a40b26b

@tensorrt-cicd
Collaborator

PR_Github #35488 [ run ] completed with state SUCCESS. Commit: a40b26b
/LLM/main/L0_MergeRequest_PR pipeline #27404 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@ixlmar
Collaborator Author

ixlmar commented Feb 10, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35509 [ run ] triggered by Bot. Commit: a40b26b

@ixlmar ixlmar requested review from Funatiq and stnie February 10, 2026 14:49
@ixlmar ixlmar marked this pull request as ready for review February 10, 2026 14:49
@ixlmar ixlmar requested a review from a team as a code owner February 10, 2026 14:49
@tensorrt-cicd
Collaborator

PR_Github #35509 [ run ] completed with state SUCCESS. Commit: a40b26b
/LLM/main/L0_MergeRequest_PR pipeline #27416 completed with status: 'SUCCESS'

@coderabbitai
Contributor

coderabbitai bot commented Feb 10, 2026

📝 Walkthrough

Walkthrough

The sampler's beam search preparation logic is refactored to use batched, index-based buffer initialization with sequence slots and prompt lengths instead of per-request iteration. Memory pinning is added to certain tensors for improved data movement efficiency.

Changes

Beam Search Preparation Refactoring — tensorrt_llm/_torch/pyexecutor/sampler.py:
Updated the _prepare_beam_search signature from requests: list[LlmRequest] to seq_slots: torch.Tensor, prompt_lens: torch.Tensor. Refactored buffer initialization (cache_indirection, cum_log_probs, sampled_log_probs, sampled_log_prob_ranks, predecessor_beams, first_finish_reasons, original_tokens) to use a batched, index-based approach. Enhanced setup_sampler_step to collect prompt lengths and conditionally invoke _prepare_beam_search. Added pin_memory=True to the end_ids and full_list tensor allocations. Adjusted _process_requests to compute with a 32-bit pinned seq_slots tensor.
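As a rough illustration of the batched, index-based pattern described above (buffer and variable names here are hypothetical stand-ins, not the actual sampler code), a single advanced-indexing assignment can replace a per-request Python loop:

```python
import torch

# Hypothetical persistent buffers, shaped (max_seq_slots, beam_width);
# names are illustrative only, not the sampler's real attributes.
max_seq_slots, beam_width = 8, 4
cum_log_probs = torch.ones(max_seq_slots, beam_width)
predecessor_beams = torch.ones(max_seq_slots, beam_width, dtype=torch.int32)

# Per-request iteration (before): one tiny assignment per request.
# for req in requests:
#     cum_log_probs[req.seq_slot] = 0

# Batched (after): collect all sequence slots once, then reset every
# affected row with a single scalar-broadcast assignment.
seq_slots = torch.tensor([1, 3, 5])  # slots of requests needing init
cum_log_probs[seq_slots] = 0
predecessor_beams[seq_slots] = 0

print(cum_log_probs.sum().item())  # rows 1, 3, 5 are zeroed -> 20.0
```

On GPU buffers this also collapses many small kernel launches into one, which is the kind of host-side overhead a profiler tends to surface.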

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks (1 passed, 2 failed)

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check ❓ Inconclusive — The description lacks critical details: it mentions "minor issues spotted during profiling" but does not explain what those issues were or how the changes address them, and Test Coverage is marked "n/a" without justification. Resolution: expand the description to explain which profiling issues were identified, why pinning memory and batching improve the situation, and why test coverage is not applicable here.

✅ Passed checks (1 passed)
  • Title check ✅ Passed — The title clearly and specifically describes the main change: pinning host memory and batching sampler setup in beam search, matching the code refactoring in the pull request.



No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
tensorrt_llm/_torch/pyexecutor/sampler.py (2)

1580-1618: Use scalar broadcast assignment instead of allocating temporary tensors.

Each buffer reset allocates a new small CUDA tensor via torch.zeros(...). For a perf-focused change, scalar broadcast avoids these allocations entirely. PyTorch supports tensor[index] = 0 directly.

Additionally, the first_finish_reasons initialization (lines 1605–1613) is unnecessarily complex: since FinishReason.NOT_FINISHED.value is 0, the pin→transfer→unsqueeze chain is equivalent to assigning 0.

♻️ Proposed simplification
-        self.store.cache_indirection[seq_slots, :, prompt_lens] = torch.zeros(
-            (1, 1),
-            dtype=self.store.cache_indirection.dtype,
-            device=self.store.cache_indirection.device,
-        )
-        self.store.cum_log_probs[seq_slots] = torch.zeros(
-            (1,),
-            dtype=self.store.cum_log_probs.dtype,
-            device=self.store.cum_log_probs.device,
-        )
-        self.store.sampled_log_probs[seq_slots] = torch.zeros(
-            (1,),
-            dtype=self.store.sampled_log_probs.dtype,
-            device=self.store.sampled_log_probs.device,
-        )
-        self.store.sampled_log_prob_ranks[seq_slots] = torch.zeros(
-            (1,),
-            dtype=self.store.sampled_log_prob_ranks.dtype,
-            device=self.store.sampled_log_prob_ranks.device,
-        )
-        self.store.predecessor_beams[seq_slots] = torch.zeros(
-            (1,),
-            dtype=self.store.predecessor_beams.dtype,
-            device=self.store.predecessor_beams.device,
-        )
-        self.store.first_finish_reasons[seq_slots] = (
-            torch.tensor(
-                FinishReason.NOT_FINISHED.value,
-                pin_memory=True,
-                dtype=self.store.first_finish_reasons.dtype,
-            )
-            .to(self.store.first_finish_reasons.device, non_blocking=True)
-            .unsqueeze(0)
-        )
-        self.store.original_tokens[seq_slots] = torch.zeros(
-            (1,),
-            dtype=self.store.original_tokens.dtype,
-            device=self.store.original_tokens.device,
-        )
+        self.store.cache_indirection[seq_slots, :, prompt_lens] = 0
+        self.store.cum_log_probs[seq_slots] = 0
+        self.store.sampled_log_probs[seq_slots] = 0
+        self.store.sampled_log_prob_ranks[seq_slots] = 0
+        self.store.predecessor_beams[seq_slots] = 0
+        self.store.first_finish_reasons[seq_slots] = FinishReason.NOT_FINISHED.value
+        self.store.original_tokens[seq_slots] = 0

3266-3270: Consider clarifying the int64→int32 conversion intent.

The input seq_slots is already pin_memory=True (allocated at line 2218–2222 in sample_async), so the pin_memory=True here only adds the int32 dtype conversion benefit. The seq_slots_int64 variable is unused after the copy on line 3270—a brief comment explaining the downcast rationale (e.g., bandwidth savings) would help future readers.
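The pattern under discussion — staging indices in a pinned, narrower-dtype host tensor before an asynchronous device copy — can be sketched as follows. This is a minimal illustration, not the sampler's actual code; the function name is hypothetical, and pinning is guarded because page-locked allocation requires a CUDA-enabled runtime:

```python
import torch

def stage_seq_slots(seq_slots_int64: torch.Tensor) -> torch.Tensor:
    """Downcast int64 slot indices to int32 in pinned host memory.

    Pinned (page-locked) host memory allows the H2D copy to be truly
    asynchronous and overlap with compute; int32 halves the bytes
    moved over the bus. Fall back to pageable memory without CUDA.
    """
    pin = torch.cuda.is_available()
    staged = torch.empty(
        seq_slots_int64.shape, dtype=torch.int32, pin_memory=pin
    )
    staged.copy_(seq_slots_int64)  # dtype conversion happens in copy_
    if pin:
        # non_blocking only helps when the source is pinned
        return staged.to("cuda", non_blocking=True)
    return staged

slots = stage_seq_slots(torch.tensor([0, 2, 7], dtype=torch.int64))
print(slots.dtype)  # torch.int32
```

A brief comment like the docstring above, placed at the conversion site, would address the readability concern raised in the review.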


@ixlmar ixlmar merged commit 411fa9f into NVIDIA:main Feb 10, 2026
9 checks passed
@ixlmar ixlmar deleted the perf/beam-search-4 branch February 10, 2026 15:48
