This repository was archived by the owner on Jan 23, 2023. It is now read-only.

Conversation

@davidfowl
Member

  • Use a 4K buffer instead of 2K (4K is a very common buffer size and usually maps to system page size)
  • Use a stack for the buffer segment pool and allow pooling more than 16 segments for large writes (up to 256 max)

@benaadams
Member

As an aside:

A useful pattern for reducing syscalls when reading files with Pipe (a cost Stream avoids by defaulting to a very large buffer) is ReadFileScatter on Windows and readv/preadv on Linux, which fill multiple segments in a single call.

This ends up being something like:

GetMemory, Advance, GetMemory, Advance, GetMemory, Advance, GetMemory, ReadGather, FlushAsync

Mentioning it because it doesn't work with the IBufferWriter<T>.Advance advice change in dotnet/dotnet-api-docs#1949 (Advance invalidates previously returned spans/memory).

However, it's needed to implement IHttpSendFileFeature for Kestrel directly over the Body pipes without increasing the syscall count by roughly 20x versus Stream (which has the disadvantage of an 85kB temp buffer allocation and a secondary copy).

@JanEggers

If it should be the system page size, why not use this as the default?
https://docs.microsoft.com/en-us/dotnet/api/system.environment.systempagesize

@benaadams
Member

If it should be the system page size, why not use this as the default?

If you are using large pages then the page size can be 2MB to 256MB, which would be an incredibly inefficient use of memory across multiple pipes.

@davidfowl davidfowl merged commit b49b512 into master Mar 11, 2019
@davidfowl davidfowl deleted the davidfowl/better-defaults branch March 11, 2019 14:34
  private readonly PipeScheduler _writerScheduler;

- private readonly BufferSegment[] _bufferSegmentPool;
+ private readonly Stack<BufferSegment> _bufferSegmentPool;

What's the reason for using the stack here? It's another allocation per pipe, and 3 fields instead of the 2 we had before.

Member Author


We can live with another field. Also, we saved one on the other PR, so it's a trade-off.


@pakrym pakrym Mar 11, 2019


But what are we gaining by spending a field and an allocation?

Member Author


Clearer code, and a way to resize the initial buffer up to the max without writing that code manually.



It's literally 3 lines of code:

Array.Resize(ref _array, (_array.Length == 0) ? DefaultCapacity : 2 * _array.Length);
_array[_size] = item;
_version++;
_size++;

@pakrym

pakrym commented Mar 11, 2019

How big of an improvement does this give? What scenario is driving this change?

@davidfowl
Member Author

I've recently been testing benchmarks with JSON tokens bigger than 32K, and the fact that pipelines is exposed in ASP.NET Core means you can buffer more (and without the hard limit). This change means we don't start allocating segments until you reach 1MB of buffered data by default.

Also, 4K instead of 2K felt like a better default buffer size (we use it pretty much everywhere else).

  {
- internal const int SegmentPoolSize = 16;
+ internal const int InitialSegmentPoolSize = 16; // 65K
+ internal const int MaxPoolSize = 256; // 1MB
Member


Nit: MaxSegmentPoolSize.

@karelz karelz added this to the 3.0 milestone Mar 18, 2019
picenka21 pushed a commit to picenka21/runtime that referenced this pull request Feb 18, 2022
Commit migrated from dotnet/corefx@b49b512