nwtgck opened 2 years ago
Your HTTP/1.1 server code is a duplicate of the HTTP/2 code. It should be:
```js
const fs = require("fs");
const https = require("https");

const server = https.createServer({
  key: fs.readFileSync("./server.key"),
  cert: fs.readFileSync("./server.crt"),
}, (req, res) => {
  console.log("request accepted");
  res.writeHead(200);
  req.pipe(fs.createWriteStream("/dev/null"));
});

server.listen(8443, () => {
  console.log("HTTPS (HTTP/1.1) listening on 8443 port");
});
```
Thank you for notifying me. I updated it; the full code is now correct.
Yeah! The H1 server is about twice as fast as the H2 one when receiving a file with a size of 10 GB:
```
$ fallocate -l 10G /tmp/dummy

$ time curl -T /tmp/dummy -k https://localhost:8443 --http2
real    0m44.611s
user    0m9.168s
sys     0m13.328s

$ time curl -T /tmp/dummy -k https://localhost:8443
real    0m23.406s
user    0m5.340s
sys     0m5.608s
```
Note that I added a minor change to the original code. For the H1 server:

```js
req.on('end', () => {
  res.end();
});
```

For the H2 server:

```js
stream.on('end', () => {
  stream.end();
});
```
@ywave620 I'm curious, if you add `-v` to both curl commands, what ciphers does it print?
Both show `ECDHE-RSA-AES128-GCM-SHA256`.
@bnoordhuis I used `perf` to profile both cases and found something suspicious. The percentage of CPU time spent in event polling (`eventfd_write` above) is similar in the two cases, and given that uploading to the H2 server uses twice the total CPU time of uploading to the H1 server, we can conclude that the H2 server spends twice as much CPU time in event polling.
I then used `strace` to inspect the syscalls made by either server, and found that the H2 server calls `read(2)` much more often than the H1 server. This is consistent with the profile result, because in Node, `read(2)` is called after event polling tells us a socket is ready for reading.

Maybe the curl client is to blame, sending data in smaller chunks when using HTTP/2 so that the H2 server has to call `read(2)` more often?
`eventfd_write()` suggests something is calling `uv_async_send()` a lot, because that's the only thing in Node that uses an eventfd. Can you try creating a tracepoint with `perf probe -x node uv_async_send` and then collecting stack traces with `perf record -g -e probe_node:uv_async_send node script.js`? I'd like to know what the top 5 or top 10 call stacks look like.
> Maybe the curl client is to blame, sending data in smaller chunks when using HTTP/2 so that the H2 server has to call `read(2)` more often?
Yes, that could be another reason it's slower. You should be able to gauge the average read size by eyeballing the system calls with `strace`. You can also use `perf probe`, but `strace` is probably easier.
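If `strace` isn't handy, a rough in-process equivalent (an illustrative sketch, not from this thread; `instrument` is a made-up helper name) is to count the `'data'` events a readable stream emits and print the average chunk size:

```js
// Count and size every chunk a readable stream delivers; a smaller average
// chunk means more event-loop wakeups and more read(2) calls under the hood.
function instrument(readable, label) {
  let chunks = 0;
  let bytes = 0;
  readable.on("data", (chunk) => {
    chunks += 1;
    bytes += chunk.length;
  });
  readable.on("end", () => {
    console.log(`${label}: ${chunks} chunks, avg ${Math.round(bytes / chunks)} bytes`);
  });
  return readable;
}
```

E.g. `instrument(req, 'h1')` inside the HTTP/1.1 handler and `instrument(stream, 'h2')` inside the HTTP/2 `'stream'` handler; a much smaller average for H2 would support the small-chunks theory.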
Sorry for the delay. I failed to get your command working on my machine, but I found something crucial: the number of read/write syscalls made by the HTTP/2 curl client exceeds the HTTP/1.1 one by an order of magnitude. I think this is to blame; data are sent in smaller chunks when using HTTP/2. To rule out curl, I wrote clients in Node.js, and they show the same result: uploading over HTTP/2 is an order of magnitude slower than over HTTP/1.1.
```js
// h2-client.js
const http2 = require("http2");

const buf = Buffer.alloc(1024 * 1024, 0); // 1 MiB

const s = http2.connect('https://localhost:8443', {
  rejectUnauthorized: false,
});
s.on('connect', () => {
  const req = s.request({}, {
    endStream: false,
  });
  req.on('error', () => {});
  for (let i = 0; i < 1024 * 10; i++) { // 10 GiB total
    req.write(buf);
  }
  req.end();
});
```
```
$ time node h2-client.js
real    0m41.801s
user    0m11.716s
sys     0m5.360s
```
```js
// h1-client.js
const https = require("https");

const buf = Buffer.alloc(1024 * 1024, 0); // 1 MiB

const req = https.request({
  host: 'localhost',
  port: 8443,
  rejectUnauthorized: false,
}).on('error', () => {});
for (let i = 0; i < 1024 * 10; i++) { // 10 GiB total
  req.write(buf);
}
req.end();
```
```
$ time node h1-client.js
real    0m3.094s
user    0m0.780s
sys     0m2.248s
```
`uv_async_send` suggests there are many FS operations, which is consistent with my argument: since the H2 server reads the socket much more often, it has to write to the file more often. @bnoordhuis
This phenomenon is probably caused by HTTP/2's flow-control mechanism: https://httpwg.org/specs/rfc7540.html#FlowControl
Do the Node.js 21 performance updates solve this bug? Has anybody tested?
Thank you so much for the wonderful project!
Version
16.16.0
Platform
macOS Big Sur
What steps will reproduce the bug?
I created simple HTTPS servers over HTTP/1.1 and HTTP/2 that handle client-to-server uploads.
Result
The result of HTTP/1.1 is as follows.
https://user-images.githubusercontent.com/10933561/184122262-e70c2e34-efe0-452f-bba2-8e9d46d6b492.mp4
The result of HTTP/2 is as follows.
https://user-images.githubusercontent.com/10933561/184122531-81168462-10d8-4a42-b142-1f9decaf1709.mp4
Code
Here is the code.
full code: https://github.com/nwtgck/public-code/tree/f2002382e1fae26d7821f37a7c50ad93ea83ce1e/node-http2-is-slower-than-http1.1
How often does it reproduce? Is there a required condition?
always
What is the expected behavior?
The expected behavior is the same transfer speed.
What do you see instead?
Written in "What steps will reproduce the bug?" above.