Closed osevan closed 2 years ago
The problem with benchmarks is that they're complicated and especially time-consuming to do correctly. For example:
* one must make sure that nothing else runs on the same machine as the benchmark (thus a completely empty machine is needed);
* one must pin the benchmark server and client on different CPU cores (and, in the case of hyper-threading, getting the core / thread layout right is important); it would be best to have the client and server on separate machines, but that would also benchmark the physical connection;
* obviously just using a VM is not enough, as the benchmarks could be impacted by "noisy neighbors" (i.e. other VMs running on the same host), thus real hardware is needed;
* afterwards one must get very well acquainted with the system being benchmarked, and fine-tune it to actually get the best performance out of it for the intended use-case; for example Nginx shows quite a performance discrepancy when asked for `/some-folder/index.html` vs `/some-folder/`; without doing this (for example by just using the defaults) you are doing a disservice to the system you are comparing against;
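The core-pinning point above can be sketched in a few lines of Python (assuming Linux; `taskset` from util-linux does the same from the shell — this is just an illustration, not part of any benchmark harness):

```python
import os

# Inspect the core / thread layout: with hyper-threading, two sibling
# threads share one physical core, so pinning the server and the client
# on siblings would make them compete for the same core.
for cpu in sorted(os.sched_getaffinity(0)):
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    try:
        with open(path) as f:
            print(f"cpu{cpu}: siblings {f.read().strip()}")
    except FileNotFoundError:
        pass  # topology files may be absent (e.g. in some containers)

# Pin the current process (say, the benchmark client) to CPU 0;
# the server would be pinned to a different physical core.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # -> {0}
```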
Thus, although it would be tempting to benchmark against H2O, or any other high-performance server, it is a very time-consuming undertaking, and that time would perhaps be better spent improving my software. :)
So, if you can provide me with a proper scenario, accompanying example data, and a configuration file, then, if I have the time, I might do it.
To put things into perspective another way: I've just looked at Cisco's small-business switches and chosen the "best one" in that category, the "Cisco Business 350 Series"; assume you have a small data-center where you deploy the following: your Cisco switch, a router, and a web server; that Cisco switch is rated at ~14 million packets per second (at 64 bytes).
So, doing some simple math: each request / response requires a minimum of 2 IP packets (I will assume that one uses TCP Fast Open, that the request fits in one packet, and likewise the response); then each of those packets has to pass through the switch twice (internet -> router, then router -> server). Now take the packets-per-second rating and halve it (because the TCP+IP overhead alone is ~40 bytes, which would leave only 24 bytes for the actual HTTP request, so realistic packets are larger than 64 bytes), halve it again for the request / response pair, and halve it once more for the two hops mentioned above: that yields around ~1.75 million packets per second.
My benchmark of Kawipiko on my laptop (single-threaded, with other stuff running in the background) puts it at ~100K requests per second, thus ~200K packets per second, which is around 1/9 of the switch capacity. Thus with one or two modest desktops I would quickly saturate the switch, let alone any bandwidth limitations or routing.
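The back-of-the-envelope arithmetic above can be checked with a few lines of Python (the figures are the ones assumed in the text):

```python
switch_pps = 14_000_000  # Cisco Business 350 rating at 64-byte packets

# Halve once for realistic packet sizes (TCP+IP overhead alone is ~40
# bytes, leaving only 24 bytes of payload at 64 bytes), once for the
# request / response pair, and once for the two switch hops.
effective_pps = switch_pps / 2 / 2 / 2
print(effective_pps)  # -> 1750000.0

# Kawipiko on a laptop: ~100K requests/s, i.e. ~200K packets/s.
kawipiko_pps = 200_000
print(effective_pps / kawipiko_pps)  # -> 8.75, i.e. roughly 1/9
```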
So, based on this train of thought I would say Kawipiko is good enough. :)
The H2O web server targets the same use-case.
Could you benchmark the two against each other over HTTP/1.0 without SSL?
https://github.com/h2o/h2o
Thanks and
Best regards