nodejs / node

Node.js JavaScript runtime ✨🐢🚀✨
https://nodejs.org

Clear sockets perform worse than TLS sockets in some cases #27970

Open tufosa opened 5 years ago

tufosa commented 5 years ago

While writing and performing tests for this issue opened by @alexfernandez and fixed by @jmendeth, I noticed that in some cases, clear connections seem to perform worse than TLS connections, especially after the fix that significantly enhances the performance of TLS connections. I will patch the existing secure-pair benchmark test to expose the issue.

The performance of clear connections seems to be the same across Node versions (I've tested it in Node 8, 10 and 13-pre). This means that this issue is not a regression but an enhancement request, and the fix needed is probably very similar to the one that solved the secure-pair performance issue.

tufosa commented 5 years ago

I've submitted a PR that includes clear connections in the secure-pair performance test and exposes this issue. The following table shows the results of the test run on my machine.

| packet size (bytes) | MBs transferred (TLS) | MBs transferred (clear) |
| --- | --- | --- |
| 2 | 99.3 | 2.9 |
| 100 | 1401.76 | 552.05 |
| 1024 | 1611.28 | 3231.61 |
| 1024 * 1024 | 2519.03 | 7797.26 |
jasnell commented 4 years ago

It's not clear if there's anything further actionable here.

mildsunrise commented 4 years ago

When constantly writing tiny buffers, TLS connections are much faster now because we concatenate them before passing to OpenSSL. For plaintext connections, all those buffers are sent directly to the kernel which seems to incur this overhead.

Maybe concatenating would be possible for plaintext sockets too? @nodejs/net

However, I want to point out that this seems like an artificial test case to me. When moving lots of data, chunks are unlikely to be tiny. And if they are tiny, you could easily concatenate them yourself into bigger ones.
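Concatenating yourself can be as simple as buffering small chunks and flushing them in a single `write()`. A minimal userland sketch (the `Batcher` name, the 16 KiB threshold, and the `flush` callback are illustrative assumptions, not anything from Node core):

```javascript
// Coalesce many tiny Buffers into fewer, larger writes.
// `flush` stands in for e.g. socket.write(); the threshold is arbitrary.
class Batcher {
  constructor(flush, threshold = 16 * 1024) {
    this.flush = flush;
    this.threshold = threshold;
    this.chunks = [];
    this.size = 0;
  }
  write(buf) {
    this.chunks.push(buf);
    this.size += buf.length;
    if (this.size >= this.threshold) this.drain();
  }
  drain() {
    if (this.size === 0) return;
    this.flush(Buffer.concat(this.chunks, this.size)); // one big write
    this.chunks = [];
    this.size = 0;
  }
}

// Example: 10,000 two-byte payloads collapse into two flushes.
const writes = [];
const batcher = new Batcher((buf) => writes.push(buf));
for (let i = 0; i < 10000; i++) batcher.write(Buffer.from('ab'));
batcher.drain(); // flush the tail
const totalBytes = writes.reduce((n, b) => n + b.length, 0);
```

The same data crosses to the kernel, but in two `write()` calls instead of ten thousand, which is exactly the per-call overhead the benchmark exposes.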

alexfernandez commented 4 years ago

This test case tries to reproduce what @tufosa and I found at Devo: we used Node.js to receive syslog streams over a socket, and we usually received lots of small packets. It is not always that you have the luxury of deciding what packet size customers send you :sweat_smile:

Concatenating would be ideal for this particular application, and it would speed up reception for everyone else, right?

addaleax commented 4 years ago

We already support writev() for network sockets, so the next thing I could think of would be enabling Nagle’s algorithm where we currently disable it (e.g. HTTP/2), or building our own version of it?
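A userland version of Nagle-style coalescing could look like the sketch below: instead of forwarding each tiny chunk immediately, queue it and flush everything once per event-loop turn (the `makeCoalescer` name and the `send` callback are made up for illustration; real Nagle operates in the kernel on TCP segments, not in JS):

```javascript
// Queue synchronous writes and flush them as one payload per
// event-loop turn. `send` stands in for the underlying socket write.
function makeCoalescer(send) {
  let pending = [];
  let scheduled = false;
  return function write(buf) {
    pending.push(buf);
    if (!scheduled) {
      scheduled = true;
      setImmediate(() => {          // flush once per loop turn
        send(Buffer.concat(pending));
        pending = [];
        scheduled = false;
      });
    }
  };
}

// All writes issued in the same turn end up in a single send().
const sent = [];
const write = makeCoalescer((buf) => sent.push(buf));
for (let i = 0; i < 1000; i++) write(Buffer.from('x'));
```

Unlike kernel Nagle, this adds at most one loop turn of latency and never waits on ACKs, so it would be safer for latency-sensitive protocols like HTTP/2.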

jasnell commented 4 years ago

With HTTP/2 we really should not enable Nagle at all, but it can certainly be enabled elsewhere.

mildsunrise commented 4 years ago

IIRC we use writev and Nagle by default, so something else may be causing this overhead. I'm just speculating, but I think passing tens of thousands of tiny buffers around (to libuv and the kernel) may be causing it. If I'm right, maybe not using writev (concatenating instead) would help.