[opt](ann index) Make chunk size of index train configurable #58645
Merged
airborne12 merged 1 commit into apache:master on Dec 4, 2025
Conversation
Contributor
Thank you for your contribution to Apache Doris. Please clearly describe your PR:
Contributor
Author
run buildall
TPC-H: Total hot run time: 34684 ms
TPC-DS: Total hot run time: 182110 ms
ClickBench: Total hot run time: 27.28 s
BE UT Coverage Report: Increment line coverage / Increment coverage report
Contributor
BE Regression && UT Coverage Report: Increment line coverage / Increment coverage report
Force-pushed from a65427f to 20d2e9a (Compare)
Contributor
Author
run buildall
TPC-H: Total hot run time: 35173 ms
TPC-DS: Total hot run time: 182180 ms
ClickBench: Total hot run time: 27.46 s
Contributor
BE UT Coverage Report: Increment line coverage / Increment coverage report
Contributor
PR approved by at least one committer and no changes requested.
Contributor
PR approved by anyone and no changes requested.
Contributor
BE Regression && UT Coverage Report: Increment line coverage / Increment coverage report
github-actions bot pushed a commit that referenced this pull request on Dec 4, 2025
nagisa-kunhah pushed a commit to nagisa-kunhah/doris that referenced this pull request on Dec 14, 2025
What problem does this PR solve?
Previous PR: #57623
The granularity for index training and data ingestion is currently hard-coded to 1M rows, which makes index construction unnecessarily slow in some scenarios. It should be made configurable and reduced when appropriate.
For example, with 1M vectors to add and a stream load batch size of 0.3M, the load is split into 3 stream load requests. If a request carrying 0.3M rows ends up being handled by a single add thread, the whole load becomes very slow. A typical CPU usage profile looks like this:
(screenshot: low average CPU usage with the hard-coded 1M chunk size)
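The single-thread effect falls directly out of ceiling division of the request size by the chunk size. A minimal sketch of that arithmetic (illustrative only, not code from this PR; the helper name is made up):

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical helper: number of add tasks a load request produces when
// its rows are split into chunks of `chunk_size`.
std::size_t num_add_tasks(std::size_t rows, std::size_t chunk_size) {
    return (rows + chunk_size - 1) / chunk_size; // ceiling division
}

int main() {
    std::cout << num_add_tasks(300'000, 1'000'000) << "\n"; // 1: one add thread, low CPU usage
    std::cout << num_add_tasks(300'000, 30'000) << "\n";    // 10: tasks can run in parallel
}
```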
We need to make the batch size configurable so that it can be tuned when needed.
For example, with the batch size set to 30K, we get a much higher average CPU usage:
(screenshot: higher average CPU usage with a 30K batch size)
**The default value is still 1M; a small batch size will hurt the recall of the HNSW index.**
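For illustration, here is a minimal sketch of the chunked-add loop this PR makes configurable. The config name `ann_index_train_chunk_size`, the `AnnIndex` type, and all signatures are hypothetical stand-ins, not Doris's actual BE code:

```cpp
#include <algorithm>
#include <cstddef>
#include <span>

// Hypothetical BE config knob; per this PR, the default stays at 1M rows.
namespace config {
std::size_t ann_index_train_chunk_size = 1'000'000;
} // namespace config

// Hypothetical handle wrapping the underlying ANN (e.g. HNSW) index build.
struct AnnIndex {
    void add(std::span<const float> vectors, std::size_t dim);
};

// Train/add in configurable chunks instead of one hard-coded 1M batch,
// so a small load request can still be split into several parallel tasks.
void add_in_chunks(AnnIndex& index, std::span<const float> data, std::size_t dim) {
    const std::size_t rows = data.size() / dim;
    const std::size_t chunk = config::ann_index_train_chunk_size;
    for (std::size_t begin = 0; begin < rows; begin += chunk) {
        const std::size_t end = std::min(begin + chunk, rows);
        index.add(data.subspan(begin * dim, (end - begin) * dim), dim);
    }
}
```

Lowering the chunk size trades HNSW recall for load-time CPU utilization, which is why the default is kept at 1M.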
Release note
None
Check List (For Author)
Test
Behavior changed:
Does this need documentation?
Check List (For Reviewer who merges this PR)