Closed MengRao closed 5 years ago
I don't have any results to share. Any results will be dependent on how you have tuned the kernel. Also note that for the TCP and UDP benchmarks I have not added support for the Linux SO_BUSY_POLL flag.
The origin of this tool: I was a heavy user of Erlang before it added support for a C FFI, and back then the only way to integrate C/C++ libs was to use TCP or Unix domain sockets. At that time, 10 years ago, TCP and Unix sockets had the same IPC latency on Solaris, but on Linux TCP sockets were much slower (5-10x: ~3 µs vs ~10 µs).
Today I would recommend using kernel bypass for inter-server communication and shared memory (SHM) for inter-process communication. My two lock-free queues can be trivially modified for use as IPC: https://github.com/rigtorp/SPSCQueue https://github.com/rigtorp/MPMCQueue
How do you use kernel bypass for inter-server communication? OpenOnload?
And actually I've experimented with SHM-based message queues: https://github.com/MengRao/SPSC_Queue https://github.com/MengRao/MPSC_Queue https://github.com/MengRao/PubSubQueue
OpenOnload does accelerate IPC, but I'm not using it.
What kernel bypass method do you recommend for inter-server communication such as TCP, which has to go through the NIC? As far as I know, OpenOnload + a Solarflare NIC is an option that's not hard to use.
TCPDirect
Do you have any benchmark results to share? And why not use shared memory for IPC? It should have the lowest latency.