[#86787] [Ruby trunk Feature#14723] [WIP] sleepy GC — ko1@...
Issue #14723 has been updated by ko1 (Koichi Sasada).
13 messages
2018/05/01
[#86790] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/01
[email protected] wrote:
[#86791] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/01
[#86792] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/01
Koichi Sasada <[email protected]> wrote:
[#86793] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/01
[#86794] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/01
Koichi Sasada <[email protected]> wrote:
[#86814] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/02
[#86815] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/02
Koichi Sasada <[email protected]> wrote:
[#86816] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/02
[#86847] [Ruby trunk Bug#14732] CGI.unescape returns different instance between Ruby 2.3 and 2.4 — me@...
Issue #14732 has been reported by jnchito (Junichi Ito).
3 messages
2018/05/02
[#86860] [Ruby trunk Feature#14723] [WIP] sleepy GC — sam.saffron@...
Issue #14723 has been updated by sam.saffron (Sam Saffron).
6 messages
2018/05/03
[#86862] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/03
[email protected] wrote:
[#86935] [Ruby trunk Bug#14742] Deadlock when autoloading different constants in the same file from multiple threads — elkenny@...
Issue #14742 has been reported by eugeneius (Eugene Kenny).
5 messages
2018/05/08
[#87030] [Ruby trunk Feature#14757] [PATCH] thread_pthread.c: enable thread cache by default — normalperson@...
Issue #14757 has been reported by normalperson (Eric Wong).
4 messages
2018/05/15
[#87093] [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — ko1@...
Issue #14767 has been updated by ko1 (Koichi Sasada).
3 messages
2018/05/17
[#87095] [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — ko1@...
Issue #14767 has been updated by ko1 (Koichi Sasada).
9 messages
2018/05/17
[#87096] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase
— Eric Wong <normalperson@...>
2018/05/17
[email protected] wrote:
[#87166] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase
— Eric Wong <normalperson@...>
2018/05/18
Eric Wong <[email protected]> wrote:
[#87486] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase
— Eric Wong <normalperson@...>
2018/06/13
[ruby-core:86973] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid
From: Eric Wong <normalperson@...>
Date: 2018-05-10 21:09:22 UTC
List: ruby-core #86973
[email protected] wrote: > I hacked in EPOLLONESHOT semantics into my runloop. IT was > about the same performance. But when I leveraged it correctly > (calling `EPOLL_CTL_ADD` when accepting IO once, and > `EPOLL_CTL_DEL` when closing IO, then `EPOLL_CTL_MOD` when > waiting for event), I saw a 25% improvement in throughput. It > was just a very rough test case but interesting none the less. I would not expect one-shot to improve things unless you design your application around it. It also won't help if you only expect to deal with well-behaved clients and your application processing times are uniform. One-shot helps with application design and allows in resource migration when sharing the queue across threads. Again, this design may harm overall throughput and performance under IDEAL conditions. That's because there is a single queue and assumes all requests can be processed at roughly the same speed. However in NON-IDEAL conditions, some endpoints are handled more slowly than others. They are slow because the application needs to do more work, like an expensive calculation or FS access, NOT because of a "slow client". One-shot also makes application design easier when an evil client which is aggressively pipelining requests to request large responses, yet reading slowly. Thus, the evil client is fast at writing requests, but slow at reading responses. Server and reactor designers sometimes don't consider this case: I haven't checked in years, but EventMachine was a huge offender here since it didn't allow disabling read callbacks at all. What happened was evil clients could keep sending requests, and the server would keep processing them and writing responses to a userspace buffer which the evil client was never draining. So, eventually, it would trigger OOM on the server. Non-oneshot reactor designs need to consider this attack vector. One-shot designs don't even need to think about it, because it's not "reacting" with callbacks. 
One-shot uses EPOLL_CTL_MOD (or EV_ADD) only when the reader/writer hits EAGAIN. With one-shot you won't have to deal with disabling callbacks which blindly "react" to whatever evil clients send you. So in my experience, one-shot saves me a lot of time since I don't have to keep track of as much state in userspace and remember to disable callbacks from firing if an evil client is sending requests faster than they're reading them. Unsubscribe: <mailto:[email protected]?subject=unsubscribe> <http://lists.ruby-lang.org/cgi-bin/mailman/options/ruby-core>