[#86787] [Ruby trunk Feature#14723] [WIP] sleepy GC — ko1@... (13 messages, 2018/05/01)
  Issue #14723 has been updated by ko1 (Koichi Sasada).
  [#86790] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Eric Wong <normalperson@...> (2018/05/01)
  [#86791] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Koichi Sasada <ko1@...> (2018/05/01)
  [#86792] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Eric Wong <normalperson@...> (2018/05/01)
  [#86793] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Koichi Sasada <ko1@...> (2018/05/01)
  [#86794] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Eric Wong <normalperson@...> (2018/05/01)
  [#86814] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Koichi Sasada <ko1@...> (2018/05/02)
  [#86815] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Eric Wong <normalperson@...> (2018/05/02)
  [#86816] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Koichi Sasada <ko1@...> (2018/05/02)

[#86847] [Ruby trunk Bug#14732] CGI.unescape returns different instance between Ruby 2.3 and 2.4 — me@... (3 messages, 2018/05/02)
  Issue #14732 has been reported by jnchito (Junichi Ito).

[#86860] [Ruby trunk Feature#14723] [WIP] sleepy GC — sam.saffron@... (6 messages, 2018/05/03)
  Issue #14723 has been updated by sam.saffron (Sam Saffron).
  [#86862] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC — Eric Wong <normalperson@...> (2018/05/03)

[#86935] [Ruby trunk Bug#14742] Deadlock when autoloading different constants in the same file from multiple threads — elkenny@... (5 messages, 2018/05/08)
  Issue #14742 has been reported by eugeneius (Eugene Kenny).

[#87030] [Ruby trunk Feature#14757] [PATCH] thread_pthread.c: enable thread cache by default — normalperson@... (4 messages, 2018/05/15)
  Issue #14757 has been reported by normalperson (Eric Wong).

[#87093] [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — ko1@... (3 messages, 2018/05/17)
  Issue #14767 has been updated by ko1 (Koichi Sasada).

[#87095] [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — ko1@... (9 messages, 2018/05/17)
  Issue #14767 has been updated by ko1 (Koichi Sasada).
  [#87096] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — Eric Wong <normalperson@...> (2018/05/17)
  [#87166] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — Eric Wong <normalperson@...> (2018/05/18)
  [#87486] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — Eric Wong <normalperson@...> (2018/06/13)
[ruby-core:86826] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid
From: Eric Wong <normalperson@...>
Date: 2018-05-02 07:54:22 UTC
List: ruby-core #86826
[email protected] wrote:
> So, it seems like your design has unavoidable contention (and
> therefore latency) because you need to send events between
> threads, which is what I expected. However, you argue this
> overhead should be small. I'd like to see actual numbers TBH.

Any contention is completely masked by the GVL at the moment. When we
get Guilds or get rid of the GVL, of course I'll try per-core
schedulers. As it stands, per-Thread schedulers would be a disaster for
systems with hundreds of native Threads (because those threads are
needed for servicing hundreds of slow disks).

> And as you state, it's not possible (nor desirable IMHO) to
> move fibers between threads. Yes, head-of-line blocking might
> be an issue. Moving stacks between CPU cores is not without
> its own set of overheads. If you have serious issues with
> head-of-line blocking it's more likely to be a problem with
> your code (I've directly experienced this and the result was:
> https://github.com/socketry/async-http/blob/ca655aa190ed7a89b601e267906359793271ec8a/lib/async/http/protocol/http11.rb#L93).

Fwiw, yahns makes large performance sacrifices (*) to avoid HOL
blocking. According to Go user reports, being able to move goroutines
between native threads is a big feature to them; but I don't think
that's possible with the current Ruby C API, anyways :<

> It would be interesting to see exactly how much overhead is
> incurred using a shared epoll. I know from my testing that

It's high for yahns because the default max_events for epoll_wait is
only 1 (*): throughput should increase with something reasonable like
64, but you would lose HOL blocking resistance. Auto-fiber starts at
max_events 8 and auto-increases if needed. Fwiw, the HOL blocking
considerations for auto-fiber and yahns are completely different, so I
don't see a downside to increasing max_events with auto-fiber aside
from memory use.
With yahns Transfer-Encoding: chunked responses, yahns prioritizes the
latency of each individual chunk over overall throughput, so it loses
on throughput there, too. Maybe there can be an option to change that
*shrug*

> It's prepared but not added to the reactor here (it's lazy):
> https://github.com/kurocha/async-http/blob/eff77f61f7a85a3ac21f7a8f51ba07f069063cbe/source/Async/HTTP/V1/Protocol.cpp#L34
>
> By calling wait, the fd is inserted into the reactor/selector:
> https://github.com/kurocha/async/blob/2edef4d6990259cc60cc307b6de2ab35b97560f1/source/Async/Protocol/Buffer.cpp#L254
>
> The cost of adding/removing FDs is effectively constant time
> given an arbitrary number of reads or writes. We shouldn't
> preclude implementing this model in Ruby if it makes sense. As
> you say, the overhead of the system call is pretty minimal.

Right; auto-fiber in Ruby also does a lazy add (and the scheduler
itself is lazily created).

> Now that you mention it, I'd like to compare EPOLLET vs
> EPOLLONESHOT. It's an interesting design choice and it makes a
> lot of sense if you are doing only one read in the context of
> a blocking operation.

To me, they're drastically different in terms of programming style. ET
isn't too different from level-triggered (LT), but I'd say it's the
trickiest of the bunch: with ET it's harder to avoid head-of-line
blocking than with LT, because you need to keep track of undrained
buffers (from clients which pipeline aggressively) and not lose them if
you want to temporarily yield to other clients.

I suppose both edge and level trigger invite a "reactive" design with
inverted control flow. The main thing which bothers me about both ET
and LT is that you have to remember to disable/re-enable events (to
avoid unfairness or DoS). Under ideal conditions (clients not trying
to DoS or be unfair to other clients), ET can probably be fastest;
it's just totally unrealistic to expect ideal conditions. So I
strongly prefer one-shot, because you don't have to deal with
disabling events.
This is especially useful when there's aggressive pipelining going on
(e.g. a client sending you requests quickly, yet reading responses
slowly to fill up your output buffers). One-shot makes it a queue, so
it also invites sharing across threads.

The way auto-fiber uses one-shot doesn't invert the control flow at
all (though you could; yahns does that). Instead, auto-fiber feels
like the Linux kernel scheduler API:

```
/* got stuck on EAGAIN or similar, can't proceed */
add_wait_queue_and_register(&foo); /* list_add + epoll_ctl */
while (!foo->ready) {
	/*
	 * Run epoll_wait and let other threads/fibers run.
	 * Since we registered foo with the scheduler, it can become
	 * ready at any time while schedule() is running.
	 */
	schedule();
}
remove_wait_queue(&foo); /* list_del via rb_ensure */
/* resume whatever this fiber was doing */
```