waghanza closed this issue 6 years ago
Interesting. I just ran the tests on my early-2013 laptop and I got this:
┌────────────┬────────────┬─────────┬───────────────┐
│ │ Requests/s │ Latency │ Throughput/Mb │
├────────────┼────────────┼─────────┼───────────────┤
│ rayo.js │ 18501.6 │ 5.3 │ 2.08 │
├────────────┼────────────┼─────────┼───────────────┤
│ polka.js │ 18424.8 │ 5.32 │ 2.06 │
├────────────┼────────────┼─────────┼───────────────┤
│ fastify.js │ 17784 │ 5.5 │ 2.67 │
├────────────┼────────────┼─────────┼───────────────┤
│ express.js │ 14244 │ 6.89 │ 1.58 │
└────────────┴────────────┴─────────┴───────────────┘
There is a very slight difference between rayo and polka. On consecutive tests, rayo and polka alternately take the lead. I will conduct some more testing.
Running four individual tests, on the same machine, I get more consistency:
⨏ rayo.js (18696.8 + 18888 + 19276 + 18907.2) / 4 ~= 18942
⨏ polka.js (17678.41 + 18146.41 + 18482.41 + 18615.2) / 4 ~= 18230.60
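The averaging above can be reproduced with a tiny helper (the run numbers are the ones quoted; the helper itself is just a sketch):

```javascript
// Average several individual benchmark runs (req/s) for one framework.
const mean = (runs) => runs.reduce((sum, r) => sum + r, 0) / runs.length;

const rayo = mean([18696.8, 18888, 19276, 18907.2]);
const polka = mean([17678.41, 18146.41, 18482.41, 18615.2]);

console.log(rayo);  // ~18942
console.log(polka); // ~18230.6
```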
It seems like running all tests as a single suite is polluting the environment and thus making the tests behave in an almost irrational way. I will keep digging (and perhaps rewrite my benchmark suite).
Running the same tests, from the current master, on a slightly newer laptop (early-2014, 2.3 GHz Intel Core i7, 16 GB 1600 MHz DDR3), yields:
┌────────────┬────────────┬─────────┬───────────────┐
│ │ Requests/s │ Latency │ Throughput/Mb │
├────────────┼────────────┼─────────┼───────────────┤
│ rayo.js │ 34078.4 │ 2.86 │ 3.76 │
├────────────┼────────────┼─────────┼───────────────┤
│ polka.js │ 33691.2 │ 2.89 │ 3.71 │
├────────────┼────────────┼─────────┼───────────────┤
│ fastify.js │ 32420.8 │ 3.01 │ 4.88 │
├────────────┼────────────┼─────────┼───────────────┤
│ express.js │ 26196.8 │ 3.73 │ 2.89 │
└────────────┴────────────┴─────────┴───────────────┘
Note that these laptops are doing a whole lot of other things while the tests are running (probably even doing some GC from the previous run while processing the next one), hence the results tend to have greater variance. I will get a dedicated server set up and see what comes back from it.
@aichholzer note that you are using https://github.com/mcollina/autocannon while we are using https://github.com/wg/wrk; the results might differ because of the benchmarking tool and Docker (I plan to compute results in the cloud at some point).
PS: my tests are running on an AMD 3.2 GHz 8-core with 15.6 GB RAM (a workstation, not a server, so results should be more realistic on some Xeon).
@aichholzer You can see some results at https://github.com/waghanza/which_is_the_fastest; polka and rayo seem to be very much faster than japronto.
DISCLAIMER:
python is on version 3.6 => https://store.docker.com/images/python
node is on version 10.4 => https://store.docker.com/images/node
@aichholzer depending on the test environment, polka COULD have more req/s than rayo.
To have some significant results, we have to use an environment (OS, hosting) as close as possible to production environments (which is what I target).
@waghanza -Seen those results. Man, mind-blowing 82656K reqs/sec, that's insane. I have not tested on v10 yet, but will soon. I also intend to run this on a dedicated server.
As for polka, yes, in some cases I have seen it do more requests than rayo; 1 out of every 3 test runs, in my results, makes polka faster. I guess, as you mentioned, being a workstation and not a server, results can't be too accurate.
Looking forward to more of your tests & results.
@aichholzer I hope that by the end of the year it will be closer to real-world usage (public cloud / private cloud / dedicated ...)
On node 8:
framework | req / s | latency | 99 percentile | throughput |
---|---|---|---|---|
express | 53559.67 | 25640.67 | 326002.00 | 45.19 MB |
fastify | 71602.00 | 20705.33 | 279377.33 | 70.15 MB |
polka | 88939.67 | 16865.33 | 239279.33 | 42.67 MB |
rayo | 83562.33 | 14179.33 | 112692.00 | 41.67 MB |
But those results change a lot, since they were run on a workstation.
@waghanza -Thank you very much for your updates on this. I am closing this issue for now. Please keep me posted once you are able to run some more tests on a dedicated server. I'd be curious about those. Much appreciated.
🙇
@waghanza -I managed to set up a dedicated server (DigitalOcean, 32 GB RAM, 16 vCPUs, Ubuntu 16.04.4 x64) and ran some tests, have a look. It's a very tight race between rayo and polka.
We are still beating each other randomly. Love it! @lukeed great work!
Heh, thanks! These two will likely always be trading places since rayo uses my matchit under the hood, as does Polka. Routing is the biggest choke point in Node.js benchmarks, and since they share the same module, it should be the same.
After that, it's a matter of how much there is to sift thru to get from point A to point B. Polka could reach 100% of Node.js capacity with a change to one line of code... but that only applies to the / route and so is meaningless.
Not sure how the benchmarks are taken here, but traditionally, testing a path like users/123 is far more insightful. Even then, I'd still expect these two to be nearly identical.
(I separated trouter and matchit from Polka specifically so that others can roll their own server solutions like this 👍)
Yep, matchit indeed. That's one nice piece of software.
I have been experimenting with radix trees as well, but so far matchit seems to be the way to go. All tests are running against /users/:alias and yes, results are almost the same throughout.
Anyway, keep it up @lukeed -perhaps there is a chance for us to collaborate on something in the future.
Thank you. Yes, I tested those out too. String comparison seems to be the most reliable.
Noticed the benchmarks after my comment, glad to see.
Lastly, you may want to randomize the benchmarks order. I don't particularly care, but because we're running the same code, the framework that runs second (especially immediately after) has an advantage because it has 2x JIT warm-up for the same functions. Polka has ~3 function calls between request and response on a middleware-less app & I imagine Rayo does too. We'll always be within 1% of each other, so randomizing will help as a sanity check 😆
PS Glad to see you didn't commit to Express compatibility. That's been the worst part here. Would have been interesting to see what else Polka could have looked like without that constraint. Curious to check in in the future and see how you progress :)
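Randomizing the run order can be done with a standard Fisher-Yates shuffle; the framework list below is just an example of what a suite might iterate over:

```javascript
// Fisher-Yates shuffle: randomize the order in which benchmark suites run,
// so no framework consistently benefits from extra JIT warm-up.
function shuffle(items) {
  const arr = items.slice(); // copy; leave the input untouched
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}

const suites = ['rayo.js', 'polka.js', 'fastify.js', 'express.js'];
const order = shuffle(suites);
console.log(order); // e.g. [ 'polka.js', 'express.js', 'rayo.js', 'fastify.js' ]
```

Running the shuffled order across many invocations spreads the warm-up advantage evenly, which is exactly the sanity check described above.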
> ...so randomizing will help as a sanity check
Love it! Thank you!
@aichholzer I see you have updated the results on https://github.com/GetRayo/rayo.js#how-does-it-compare
Personally (still on local containers, so not very viable results), I found:
Framework | Version | Requests / s | Latency | 99 percentile | Throughput |
---|---|---|---|---|---|
express | 4.16.3 | 49761.00 | 37460.00 | 643972.67 | 27.66 MB |
fastify | 1.6.0 | 69025.00 | 25809.00 | 465868.00 | 44.90 MB |
polka | 0.4.0 | 82643.67 | 17195.33 | 244113.00 | 28.53 MB |
rayo | 1.0.5 | 83007.67 | 17387.33 | 257226.00 | 27.55 MB |
hapi | 17.5.1 | 31411.67 | 66621.33 | 1165915.00 | 9.60 MB |
PS: My benchmarks are done with wrk (on node 10) with:
Thank you @waghanza. As pointed out above, polka and rayo will keep jumping to the top spot alternately. That said, feel free to mention whichever you consider to be faster in your document. I appreciate the effort you have put into it, even more so for keeping me in the loop.
🤝👌
Sure, I hope the next release of https://github.com/tbrand/which_is_the_fastest will have real-world results (i.e. benchmarks run completely on DigitalOcean).
@lukeed @aichholzer I came across this old thread :stuck_out_tongue:
We have migrated to https://github.com/the-benchmarker/web-frameworks, to enhance collaboration :heart:
It would be great to work together.
Hi,
Using master, I do not get the same results as shown in the README => https://github.com/GetRayo/rayo.js/tree/master#node-v8112
With the same version of node, I have:
Details on hardware: