[#88240] [Ruby trunk Feature#14759] [PATCH] set M_ARENA_MAX for glibc malloc — sam.saffron@...
Issue #14759 has been updated by sam.saffron (Sam Saffron).
[#88251] Re: [ruby-alerts:8236] failure alert on trunk@P895 (NG (r64134)) — Eric Wong <normalperson@...>
[email protected] wrote:
[#88305] [Ruby trunk Bug#14968] [PATCH] io.c: make all pipes nonblocking by default — normalperson@...
Issue #14968 has been reported by normalperson (Eric Wong).
[#88331] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid — samuel@...
Issue #13618 has been updated by ioquatix (Samuel Williams).
[#88342] [Ruby trunk Feature#14955] [PATCH] gc.c: use MADV_FREE to release most of the heap page body — ko1@...
Issue #14955 has been updated by ko1 (Koichi Sasada).
[#88433] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid — ko1@...
Issue #13618 has been updated by ko1 (Koichi Sasada).
[email protected] wrote:
[#88475] [Ruby trunk Misc#14937] [PATCH] thread_pthread: lazy-spawn timer-thread only on contention — ko1@...
Issue #14937 has been updated by ko1 (Koichi Sasada).
[#88491] Re: [ruby-cvs:71466] k0kubun:r64374 (trunk): test_function.rb: skip running test — Eric Wong <normalperson@...>
[email protected] wrote:
I see. Please remove the test if the test is unnecessary.
Takashi Kokubun <[email protected]> wrote:
[#88523] [Ruby trunk Bug#14999] ConditionVariable doesn't reacquire the Mutex if Thread#kill-ed — eregontp@...
Issue #14999 has been updated by Eregon (Benoit Daloze).
[email protected] wrote:
[#88549] [Ruby trunk Bug#14999] ConditionVariable doesn't reacquire the Mutex if Thread#kill-ed — eregontp@...
Issue #14999 has been updated by Eregon (Benoit Daloze).
[#88676] [Ruby trunk Misc#15014] thread.c: use rb_hrtime_scalar for high-resolution time operations — ko1@...
Issue #15014 has been updated by ko1 (Koichi Sasada).
[email protected] wrote:
On 2018/08/27 16:16, Eric Wong wrote:
[#88716] Re: [ruby-dev:43715] [Ruby 1.9 - Bug #595] Fiber ignores ensure clause — Eric Wong <normalperson@...>
Koichi Sasada wrote:
[#88723] [Ruby trunk Bug#15041] [PATCH] cont.c: set th->root_fiber to current fiber at fork — ko1@...
Issue #15041 has been updated by ko1 (Koichi Sasada).
[#88767] [Ruby trunk Bug#15050] GC after forking with fibers crashes — ko1@...
Issue #15050 has been updated by ko1 (Koichi Sasada).
Koichi Sasada <[email protected]> wrote:
Koichi Sasada <[email protected]> wrote:
[#88774] Re: [ruby-alerts:8955] failure alert on trunk@P895 (NG (r64594)) — Eric Wong <normalperson@...>
[email protected] wrote:
[ruby-core:88352] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid
[snipping some bits, because I can only speak to what I know]

On Wed, 8 Aug 2018 at 18:50, Eric Wong <[email protected]> wrote:
>
> [email protected] wrote:
> > In particular, when handling HTTP/2 with multiple streams,
> > it's tricky to get good performance because utilising multiple
> > threads is basically impossible (and this applies to Ruby in
> > general). With HTTP/1, multiple "streams" could easily be
> > multiplexed across multiple processes.
>
> I'm no expert on HTTP/2, but I don't believe HTTP/2 was built
> with high-throughput in mind. By "high-throughput", I mean
> capable of maxing out the physical network or storage.

It was originally invented to reduce perceived latency, both in terms of
time-to-first-paint and time-to-last-byte, in solitary servers as well
as data centres and CDNs. As such, throughput was definitely a goal,
but not the only one.

There is some synchronisation: the server has to read a few bytes of
each frame it receives before it can demux them to independent handlers;
and when transmitting you have to block for CONTINUATION frames if any
are in progress, and for flow control if you're sending DATA. But aside
from those bottlenecks, each request/response can be handled completely
in parallel. Does that really have that big an impact on throughput?

> At least, multiplexing multiple streams over a single TCP
> connection doesn't make any sense as a way to improve
> throughput. Rather, HTTP/2 was meant to reduce latency by
> avoiding TCP connection setup overhead, and maybe avoiding
> slow-start-after-idle (by having less idle time). In other
> words, HTTP/2 aims to make better use of a
> heavy-in-memory-but-often-idle resource.

It shouldn't be that hard to saturate your network card, if you've got
enough data to write and the other end can consume it fast enough. The
single TCP connection and application-layer flow control are meant to
avoid problems like congestion and bufferbloat, on top of reducing
slow-start, TIME_WAIT, etc., so throughput should in theory be pretty
high.

I guess ramming it all into a single TLS stream doesn't help, as there
is some fairly hefty overhead that necessarily runs in a single thread.
I'd like to say that's why I argued so hard for
<https://tools.ietf.org/html/rfc7540#section-3.2> to be included in the
spec, but it's actually just coincidental.

> > What this means is that a single HTTP/2 connection, even with
> > multiple streams, is limited to a single thread with the
> > fiber-based/green-thread design.
> >
> > I actually see two sides to this: it limits bad connections to
> > a single thread, which is actually a feature in some ways. On
> > the other hand, you can't completely depend on multiplexing
> > HTTP/2 streams to improve performance.
>
> Right.
>
> > On the other hand, any green-thread based design is probably
> > going to suffer from this problem, unless a work pool is used
> > for actually generating responses. In the case of
> > `async-http`, it exposes streaming requests and responses, so
> > this isn't very easy to achieve.
>
> Hmm, I think that's what I just said.

But then, horses for courses -- if a protocol is designed one way, and
an application is designed another, there won't be a great mesh.

> Exactly. As I've been saying all along: use different concurrency
> primitives for different things. fork (or Guilds) for
> CPU/memory-bound processing; green threads and/or nonblocking
> I/O for low-throughput transfers (virtually all public Internet
> stuff), native Threads for high-throughput transfers
> (local/LAN/LFN).
>
> So you could use a green thread to coordinate work to the work
> pool (forked processes), and still use a green thread to serialize
> the low-throughput response back to the client.
>
> This is also why it's desirable (but not a priority) to be able
> to migrate green-threads to different Threads/Guilds for load
> balancing. Different stages of an application response will
> shift from being CPU/memory-bound to low-throughput trickles.

Yeah, all of this.

[snipped the rest]

Cheers
--
Matthew Kerwin
https://matthew.kerwin.net.au/
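For reference, the frame-level synchronisation mentioned above comes down
to the fixed 9-byte frame header defined in RFC 7540 section 4.1: a 24-bit
length, 8-bit type, 8-bit flags, and a reserved bit plus 31-bit stream
identifier. A minimal Ruby sketch of that demux step (the `handlers` hash
mapping stream IDs to per-stream callables is hypothetical, not part of
any patch discussed here):

    # Read one HTTP/2 frame header from io, then hand the payload to the
    # handler registered for that stream. Only this parse and lookup are
    # serialised; each handler can run on its own fiber or thread.
    def read_frame(io, handlers)
      header = io.read(9) or return nil                # nil at EOF
      len_hi, len_lo, type, flags, stream_id = header.unpack("CnCCN")
      length     = (len_hi << 16) | len_lo             # 24-bit frame length
      stream_id &= 0x7fffffff                          # drop the reserved bit
      payload    = io.read(length)
      handlers[stream_id]&.call(type, flags, payload)  # per-stream dispatch
      stream_id
    end

Everything after the dispatch can proceed independently per stream, which
is the parallelism the discussion above is weighing against the serial
header reads, CONTINUATION handling, and flow control.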
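And a rough sketch, in the same spirit, of the pattern Eric describes:
fork a child for the CPU/memory-bound work, and let the thread that owns
the client socket only trickle the low-throughput result back.
`render_response` and `client` are hypothetical stand-ins, and with an
auto-fiber / nonblocking-IO scheduler the copy loop would yield rather
than block a native thread:

    # Hypothetical request handler: the forked child does the heavy
    # lifting, the parent only copies the result back to the client.
    def handle_request(client, request)
      rd, wr = IO.pipe
      pid = fork do                         # CPU/memory-bound work in a child
        rd.close
        wr.write(render_response(request))  # stand-in for the real work
        wr.close
      end
      wr.close
      IO.copy_stream(rd, client)            # low-throughput copy to the client
      rd.close
      Process.wait(pid)
    end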