[#86787] [Ruby trunk Feature#14723] [WIP] sleepy GC — ko1@...
Issue #14723 has been updated by ko1 (Koichi Sasada).
13 messages
2018/05/01
[#86790] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/01
[email protected] wrote:
[#86791] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/01
On 2018/05/01 12:18, Eric Wong wrote:
[#86792] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/01
Koichi Sasada <[email protected]> wrote:
[#86793] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/01
On 2018/05/01 12:47, Eric Wong wrote:
[#86794] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/01
Koichi Sasada <[email protected]> wrote:
[#86814] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/02
[#86815] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/02
Koichi Sasada <[email protected]> wrote:
[#86816] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Koichi Sasada <ko1@...>
2018/05/02
On 2018/05/02 11:49, Eric Wong wrote:
[#86847] [Ruby trunk Bug#14732] CGI.unescape returns different instance between Ruby 2.3 and 2.4 — me@...
Issue #14732 has been reported by jnchito (Junichi Ito).
3 messages
2018/05/02
[#86860] [Ruby trunk Feature#14723] [WIP] sleepy GC — sam.saffron@...
Issue #14723 has been updated by sam.saffron (Sam Saffron).
6 messages
2018/05/03
[#86862] Re: [Ruby trunk Feature#14723] [WIP] sleepy GC
— Eric Wong <normalperson@...>
2018/05/03
[email protected] wrote:
[#86935] [Ruby trunk Bug#14742] Deadlock when autoloading different constants in the same file from multiple threads — elkenny@...
Issue #14742 has been reported by eugeneius (Eugene Kenny).
5 messages
2018/05/08
[#87030] [Ruby trunk Feature#14757] [PATCH] thread_pthread.c: enable thread cache by default — normalperson@...
Issue #14757 has been reported by normalperson (Eric Wong).
4 messages
2018/05/15
[#87093] [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — ko1@...
Issue #14767 has been updated by ko1 (Koichi Sasada).
3 messages
2018/05/17
[#87095] [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase — ko1@...
Issue #14767 has been updated by ko1 (Koichi Sasada).
9 messages
2018/05/17
[#87096] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase
— Eric Wong <normalperson@...>
2018/05/17
[email protected] wrote:
[#87166] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase
— Eric Wong <normalperson@...>
2018/05/18
Eric Wong <[email protected]> wrote:
[#87486] Re: [Ruby trunk Feature#14767] [PATCH] gc.c: use monotonic counters for objspace_malloc_increase
— Eric Wong <normalperson@...>
2018/06/13
I wrote:
[ruby-core:86832] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid
From: Eric Wong <normalperson@...>
Date: 2018-05-02 10:56:51 UTC
List: ruby-core #86832
[email protected] wrote: > I found an interesting summary of EPOLLET, which I think explains it better than I did: https://stackoverflow.com/a/46634185/29381 Basically, it minimise OS IPC. Minimize syscalls, you mean. I completely agree EPOLLET results in the fewest syscalls. But again, that falls down when you have aggressive clients which are pipelining requests and reading large responses slowly. > > According to Go user reports, being able to move goroutines > > between native threads is a big feature to them. But I don't > > think it's possible with current Ruby C API, anyways :< > By definition Fibers shouldn't move between threads. If you > can move the coroutine between threads, it's a green thread > (user-scheduled thread). I don't care for those rigid definitions. They're all just bytes that's scheduled in userland and not the kernel. "Auto-fiber" and green thread are the same to me so this feature might become "green thread". > deadlocks and other problems of multiple threads. And as you > say, GVL is a big problem so there is little reason to use it > anyway. Again, native threads are still useful despite GVL. > > Fwiw, yahns makes large performance sacrifices(*) to avoid HOL > > blocking. > And yet it has 2x the latency of `async-http`. Can you tell me > how to test it in more favourable configuration? yahns is designed to deal with apps with both slow and fast endpoints simultaneously. Given N threads running, (N-1) may be stuck servicing slow endpoints, while the Nth one remains free to service ANY other client. Again, having max_events>1 as I mentioned in my previous email might be worth a shot for benchmarking. But I would never use that for apps where different requests can have different response times. > > The main thing which bothers me about both ET and LT is you have > > to remember to disable/reenable events (to avoid unfairness or DoS). > > Fortunately C++ RAII takes care of this. 
I'm not familiar with C++, but it looks like you're using
EPOLL_CTL_ADD/DEL, but no EPOLL_CTL_MOD. Using MOD to disable events
instead of ADD/DEL will save you some allocations and possibly extra
locking+checks inside Linux. There is no need to use EPOLL_CTL_MOD to
disable with oneshot, only to rearm (this is what makes oneshot more
expensive than ET in ideal conditions).

> I just think it needs to be slightly more modular; but not in
> a way that detracts from becoming a ubiquitous solution for
> non-blocking IO.

> It needs to be possible for concurrency library authors to
> process blocking operations with their own selector/reactor
> design.

Really, I think it's a waste of time and resources to support these
things. As I described earlier, the one-shot scheduler design is far
too different to be worth shoehorning into a reactor with inverted
control flow. I also don't want to make the Ruby API too big; we can
barely come up with this API and semantics as-is...

> I would REALLY like to see something like this. So, we can
> explore different models of concurrency. Sometimes we would
> like to choose a different selector implementation for pragmatic
> reasons: On macOS, kqueue doesn't work with `tty` devices. But
> `select` does work fine, with lower performance.

The correct thing to do in that case is to get somebody to fix macOS :)
Since that's likely impossible, we'll likely support more quirks within
the kqueue implementation and be transparent to the user. There's
already one quirk for dealing with the lack of POLLPRI/exceptfds
support in kevent, and I always expected more...

Curious if you know this: if `select` works for ttys on macOS, does
`poll`? In Linux, select/poll/ppoll/epoll all share the same
notification internals (the ->poll callback); but from a cursory
reading of FreeBSD source, the kern_events stuff is separate and huge
compared to epoll.
> In addition, such a design lets you easily tune parameters
> (like the size of the event queue, and other details of the
> implementation that can significantly affect performance).

There's no need to tune anything. maxevents retrieved from
epoll_wait/kevent is the only parameter, and that grows as needed.
Everything else (including maximum queue size) is tied to the number of
fibers/FDs, which is already controlled by the application code.