Closed elcritch closed 1 month ago
In my experiments both `socketpair` and `pipe` provide the same performance, and sometimes `socketpair` is better. The only reason to choose `socketpair` is that it is possible to do a `recv()` with `MSG_PEEK` to check whether data is available in the socket; there is no way to perform this check on a `pipe`, but it is required for this implementation. Also, I know that it is not as performant as on Linux/Windows, but at least it is present.
`EVFILT_USER` was an interesting option, but it is only supported by FreeBSD and NetBSD, while OpenBSD, DragonFly BSD and macOS do not support this functionality.
Sorry, it looks like it is present in macOS, but it's undocumented.
> In my experiments both `socketpair` and `pipe` provide the same performance, and sometimes `socketpair` is better. The only reason to choose `socketpair` is that it is possible to do a `recv()` with `MSG_PEEK` to check whether data is available in the socket; there is no way to perform this check on a `pipe`, but it is required for this implementation.
Nice! Good to know, as I wasn't sure how they compared.
> `EVFILT_USER` was an interesting option, but it is only supported by FreeBSD and NetBSD, while OpenBSD, DragonFly BSD and macOS do not support this functionality.
Yeah, I think macOS and FreeBSD make up a fair bit of the "market share" for development and servers, respectively, among the BSDs.
> Sorry, it looks like it is present in macOS, but it's undocumented.
Yeah, it's odd that it's not in the man page. One Stack Overflow thread said it has been available since macOS 10.6. Though I suspect Apple wants most developers to use Grand Central Dispatch rather than the raw mechanisms.
Looks like libevent uses it as well. The main thing I noticed with it was a bit of latency improvement.
I might try to make a PR at some point, as I'll be using asynchronous thread notification and I develop on Mac. :)
After deeper investigation I came to the conclusion not to start any development on this issue. The biggest problem is described in the manual pages:
```
EVFILT_USER    Establishes a user event identified by ident which
               is not associated with any kernel mechanism but is
               triggered by user level code.
```
So it limits our abilities in cross-thread communication, and this brings a lot of complications to the implementation.
In our current async scheme we create a new `kqueue` instance for every thread, and you can't use THREAD_A's `kqueue` handle to trigger an event in THREAD_B's `kqueue`. To actually do this you need to know THREAD_B's `kqueue` file descriptor, which means we would have to keep a list of the `kqueue` file descriptors that could be waiting for this event. In the case where THREAD_A is the consumer and THREAD_B is the producer, everything is more or less easy: THREAD_B uses only a single file descriptor, which is stored with some Event object, and activates the proper event. But as you can see, it becomes more complicated when there is more than one consumer.
There probably exists some method to avoid all this complexity and achieve the needed results, but we already have cross-OS compatible primitives, and right now I do not see how the desired behavior could be emulated using `EVFILT_USER`.
Great work getting https://github.com/status-im/nim-chronos/pull/406 in! It'll be handy.
It looks like Chronos is using `AsyncFD`, which reminded me of a PR I'd done a while back on macOS for the Nim stdlib io_selectors to use kevents for async thread triggers. It might be useful for Chronos thread async as well on *BSDs.

On BSD platforms `AsyncFD` looks to use `socketpair` as well. On many systems that's normally similar to the `pipe` that the Nim stdlib uses for async FDs on BSDs. `pipe` works reliably, but when you push lots of events through, the timing can become quite erratic. `socketpair` probably also goes through caching & network buffers, which, like `pipe`, might behave oddly under load.

Linux's `eventfd` implements an actual semaphore object, which performs much better. On *BSDs the `kqueue` mechanism has user events that are similar to `eventfd`. Complete aside: Zephyr RTOS also provides `eventfd`.

In my case, I was using it for async thread notifications in Fidgetty and it was a bit annoying. Sometimes it would take 500+ ms to notify when under heavy load (as in thousands of events being triggered per second). It wasn't an issue on Linux, or under lighter loads. Granted, I didn't do quantitative timing tests, but the effect is pretty visible in a 120 Hz GUI.
If there's interest, I could possibly look into it. A lot of devs use macOS for testing/development, so it may be useful in order to make dev performance match Linux/Windows.
Background
It appears that on macOS, `pipe` (which Nim's AsyncFD uses) and `socketpair` (which Chronos uses) may be implemented distinctly, according to this LWN article. `socketpair` may not exhibit the same lag/jitter issues.