Open CMCDragonkai opened 11 months ago
1500 operations per second right now on QUIC, and 15000 ops per second for just FFI to quiche. Plenty of room for optimisations.
One of the big things is optimising just how many send calls are needed.
I think it would be a good idea to add perf hooks to handleSocketMessage
to measure exactly how long one execution of it takes.
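One way to do this measurement is a small timing wrapper around the handler. This is a sketch, not the real QUICSocket code; the `handleSocketMessage` here is a stand-in, and the wrapper just records each call's duration with Node's perf_hooks.

```typescript
import { performance } from 'node:perf_hooks';

// Durations (in ms) of each handler execution, for later analysis.
const durations: number[] = [];

// Wrap any function so each call's wall-clock time is recorded.
function timed<A extends unknown[], R>(
  fn: (...args: A) => R,
): (...args: A) => R {
  return (...args: A): R => {
    const start = performance.now();
    try {
      return fn(...args);
    } finally {
      durations.push(performance.now() - start);
    }
  };
}

// Stand-in for the real handleSocketMessage.
const handleSocketMessage = (msg: Uint8Array): number => msg.length;
const timedHandler = timed(handleSocketMessage);
timedHandler(new Uint8Array(1200));
```

From the recorded samples you can compute mean and tail latency, which is what matters for deciding whether the handler is slow enough to cause UDP drops.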
Oh and don't forget about adjusting the socket buffer. The longer handleSocketMessage
takes, the more likely it is that data in the kernel's UDP receive buffer gets dropped. The most recently arrived data is dropped first, forcing the QUIC protocol to re-send it.
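Enlarging the receive buffer can be done through Node's dgram API. A minimal sketch (the 4 MiB size is an assumption to tune against real traffic; note the OS may clamp or, on Linux, double the requested size):

```typescript
import dgram from 'node:dgram';

const socket = dgram.createSocket('udp4');
// setRecvBufferSize requires a bound socket, so bind first.
await new Promise<void>((resolve) => socket.bind(0, resolve));
// 4 MiB is a guess; the kernel may clamp this to net.core.rmem_max.
socket.setRecvBufferSize(4 * 1024 * 1024);
const actual = socket.getRecvBufferSize();
socket.close();
```

Checking `getRecvBufferSize` after setting is worthwhile, since the value the kernel actually grants can differ from what was requested.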
Specification
Quiche's connection send and recv calls process one or more QUIC packets (each mapping one-to-one to a UDP datagram), and each packet can contain multiple QUIC frames.
The processing of a single packet ends up dispatching into a single connection to handle that data.
QUICSocket.handleSocketMessage therefore needs to run as fast as possible in order to drain the kernel's UDP receive buffer. However, at the moment, because all of the quiche operations run on the main thread, all of the work is done synchronously and thus blocks Node's main thread.
This can be quite heavy: processing received frames involves cryptographic decryption, and the frames sent back immediately in response involve cryptographic encryption.
JS multi-threading can be quite slow, with overheads around 1.4ms: https://github.com/MatrixAI/js-workers/pull/1#issuecomment-919739105. That means an operation needs to take longer than 1.4ms to be worth offloading. Plus this will need to use the zero-copy transfer capability to ensure buffers are shared rather than copied.
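For reference, the zero-copy path in Node's worker_threads is the transfer list on postMessage: the ArrayBuffer is moved, not copied, and the sender's view becomes detached. A minimal sketch (the inline eval worker just echoes the buffer's length back):

```typescript
import { Worker } from 'node:worker_threads';

const buf = new ArrayBuffer(1200);
// Inline worker that reports the size of whatever buffer it receives.
const worker = new Worker(
  `const { parentPort } = require('node:worker_threads');
   parentPort.once('message', (ab) => parentPort.postMessage(ab.byteLength));`,
  { eval: true },
);
// The second argument is the transfer list: buf is moved, not cloned.
worker.postMessage(buf, [buf]);
const echoed: number = await new Promise((resolve) =>
  worker.once('message', resolve),
);
await worker.terminate();
// buf.byteLength is now 0 on this side, because the buffer was transferred.
```

This avoids the structured-clone copy cost, but the per-message scheduling overhead (the ~1.4ms above) still applies.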
Native multi-threading is likely to be faster. The napi-rs bridge offers the ability to create native OS pthreads, which are separate from Node's libuv thread pool, because the libuv pool is intended for IO while quiche's operations are all CPU-bound. However, benchmarking this will be important to understand how fast the operations are.

Naive benchmarks of quiche recv and send pairs between client and server indicated these levels of performance:
Each iteration is two recv and send pairs; it is two because both the client and the server each perform one.
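The shape of such a micro-benchmark can be sketched as below. The workload closure is a placeholder; in the real benchmark it would be a client recv/send pair followed by a server recv/send pair, matching the iteration described above.

```typescript
// Time N iterations of an operation and report operations per second.
function opsPerSecond(iterations: number, op: () => void): number {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) op();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return iterations / (elapsedNs / 1e9);
}

// Placeholder workload; substitute the client+server recv/send pairs.
const rate = opsPerSecond(100_000, () => Math.sqrt(2));
```

Comparing this rate for the full js-quic path against the bare FFI path isolates how much overhead the JS layer adds (the 1500 vs 15000 ops/sec figures quoted above).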
The goal is to get js-quic code as close as possible to FFI native code, and perhaps exceed it.
Another source of slowdowns might be the FFI of napi-rs itself. This would be a separate problem though.
Additional context
Tasks
1. napi-rs threading
2. js-workers, to see if 1.4ms is still worth it?