sebdeckers closed this issue 7 years ago.
Very nice. This is awesome. We definitely have a lot to do to tune performance. Right now, for instance, there are many small writes to the socket that are rather inefficient. We can and should do better there. Also, we need to optimize the maximum concurrent streams. I've noticed that keeping it limited to about 200 yields significantly higher performance than leaving it open-ended. We'll have to play with various options here.
Agreed on all of that. 👍
I'm unable to complete an h2load run with >150 clients; around 100 is a safe maximum. Could be a simple ulimit issue; I haven't spent much time digging deeper yet.
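For reference, a quick way to check the ulimit theory: each h2load client holds at least one socket, so the per-process open-file limit caps the client count. A sketch (the h2load flags and URL are illustrative, not the exact invocation used above):

```shell
# check the current soft limit on open file descriptors
ulimit -n
# raise the soft limit up to the hard limit for this shell
ulimit -n "$(ulimit -Hn)"
# illustrative run: 10000 requests over 150 clients, 100 streams each
# h2load -n 10000 -c 150 -m 100 https://localhost:8443/
```

On many systems the default soft limit is 256 or 1024, which lines up with a low-hundreds client ceiling.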
Interestingly enough, the respondWithFile approach performs extremely well at small response sizes, probably due to the lack of JS/C++ boundary overhead. Unfortunately, it consistently fails to complete test runs with larger files (512 KiB and 1 MiB).
I think we should start adding some benchmarks within this repo itself.
@mcollina Yea, that was just a little one day side project for fun. I would be happy to contribute to a more meaningful effort. Any guidance on the benchmark infrastructure of the Node.js repo?
I think the best guide is https://github.com/nodejs/http2/blob/master/doc/guides/writing-and-running-benchmarks.md.
HTTP benchmarks in core are... "fun". Currently they require third-party benchmarking tools, and this will be about the same. I'm thinking about building h2load as part of the nghttp2 build and having it available for the http2 benchmarks, but I haven't pushed forward on that yet.
@jasnell I think we can have it installed externally, like we do with wrk. There is no need to build it internally. In due time I will add http2 capability to autocannon.
Closing this, given that we have the basic benchmarking mechanism in there now.
@jasnell Now that the Big PR is imminent, I was wondering how we're doing in terms of performance. Especially curious about serving lots of small files, as happens to be my use case.
Here is my preliminary data: https://benchmark.http2.live
(Note: Everything on the page is served by Node.js http2 using push. 😎)