the-benchmarker / web-frameworks

Which is the fastest web framework?

Add uWebSockets.js (Node) #1064

Closed: ewwink closed this issue 4 years ago

ewwink commented 5 years ago

It claims to be faster than fasthttp (Go) and maybe can beat japronto in requests per second :wink:

benchmark

waghanza commented 5 years ago

I think this could be categorized as node, since the app is written in JavaScript.

waghanza commented 5 years ago

Can we do an HTTP/1.1 app with uWebSockets, i.e. without SSL?

ewwink commented 5 years ago

I don't know; I can't find anything about that in the source code or docs.

waghanza commented 5 years ago

I'm not against adding this here, but it should be flagged as experimental since it is not "native".

mahdisml commented 5 years ago

I'm not against adding this here, but it should be flagged as experimental since it is not "native".

It's stable now. It has two versions: C++: https://github.com/uNetworking/uWebSockets and JavaScript (Node.js): https://github.com/uNetworking/uWebSockets.js

I tested the Node.js version and it has awesome performance! It handled more requests than Go fasthttp and Rust Actix, with less memory usage. I'm really surprised! 😨
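
For context, a minimal plain-HTTP (non-SSL) server with the Node.js version looks roughly like this (a sketch following the uWebSockets.js examples; the route and port are just placeholders). This also answers the HTTP/1.1-without-SSL question above, since uWS.App() is the plain entry point and uWS.SSLApp() the TLS one:

```js
// Minimal HTTP/1.1 server with uWebSockets.js, without SSL.
const uWS = require('uWebSockets.js');

uWS.App()                        // plain HTTP; uWS.SSLApp({ ... }) is the TLS variant
  .get('/*', (res, req) => {
    res.end('Hello, world!');    // answer every GET with a small plain-text body
  })
  .listen(3000, (listenSocket) => {
    if (listenSocket) {
      console.log('Listening on port 3000');
    } else {
      console.log('Failed to listen on port 3000');
    }
  });
```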

waghanza commented 5 years ago

@mahdisml What I meant by "not native" is that this implementation is not like a native one (Node or C++); it is not related to stability.

Thus, I'm happy to review any PR :heart:

/cc @aichholzer

waghanza commented 4 years ago

@alexhultman Do you consent to adding uWebSockets here?

ghost commented 4 years ago

You are making mistakes in your benchmarking,

```
node (12.7) | sifrr (0.0)  | 176261.33 | 155.18 MB
c (11)      | agoo-c (0.5) | 175676.67 | 101.52 MB
```

176261 is not "better" than 175676. They are the same. They are insignificantly different.

You are not reporting CPU-time usage, so it is impossible to debug why they are the same, but I bet you are making the exact same mistake TechEmpower makes:

You're not tracking CPU-time usage, so you cannot tell when other parts become the bottleneck (such as your networking set-up).

You end up making the same mistake as TechEmpower does; you benchmark nothing but noise.

The difference between 176261 and 175676 is nothing but random noise. It is like reporting the background radiation of the universe as significant data points in an experiment. Also, that number is unreasonable for 4 CPU cores. I bet you are seeing something like 40% CPU-time usage, basically idling.
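
To make that concrete, the kind of check I mean looks roughly like this (a rough Linux-only sketch, not code from this repository): sample the server's cumulative CPU time over the same window as the wrk run and compare it against wall-clock time.

```js
// cpu-sample.js <pid> -- rough sketch: report how much CPU time the server
// under test burns during a 10s load-test window (Linux /proc only).
const fs = require('fs');

function cpuSeconds(pid) {
  // /proc/<pid>/stat: field 14 is utime, field 15 is stime, in clock ticks.
  // Assumes the comm field has no spaces and CLK_TCK is 100 (the usual default).
  const fields = fs.readFileSync(`/proc/${pid}/stat`, 'utf8').split(' ');
  return (Number(fields[13]) + Number(fields[14])) / 100;
}

const pid = Number(process.argv[2]);   // PID of the server being benchmarked
const wallStart = Date.now();
const cpuStart = cpuSeconds(pid);

setTimeout(() => {
  const wall = (Date.now() - wallStart) / 1000;
  const cpu = cpuSeconds(pid) - cpuStart;
  // Near 100% of one core per worker means the server is really saturated;
  // much less means something else (network, load generator, ...) is the bottleneck.
  console.log(`${cpu.toFixed(2)}s CPU over ${wall.toFixed(2)}s wall = ` +
              `${(100 * cpu / wall).toFixed(1)}% of one core`);
}, 10000);                             // match the 10s wrk window
```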

I compiled and ran agoo-c and sifrr locally, and agoo-c is without a doubt significantly faster than sifrr:

```
Running 10s test @ http://localhost:3000
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    72.60us   26.49us   3.10ms   83.36%
    Req/Sec    27.06k     5.07k   41.39k    66.58%
  1087524 requests in 10.10s, 39.41MB read
Requests/sec: 107678.13
Transfer/sec:      3.90MB
```

```
Running 10s test @ http://localhost:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    72.43us   73.34us   5.48ms   99.63%
    Req/Sec    69.11k     8.26k   80.23k    56.93%
  1388814 requests in 10.10s, 50.33MB read
Requests/sec: 137508.98
Transfer/sec:      4.98MB
```

These runs use a single CPU core at 100% CPU time, on a shitty 8-year-old laptop.

For a reference point, I also benchmarked µWS (the C++ project) and got this:

```
Running 10s test @ http://localhost:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    63.55us   23.17us  358.00us   76.93%
    Req/Sec    76.46k     7.93k   95.04k    65.84%
  1536341 requests in 10.10s, 104.03MB read
Requests/sec: 152124.66
Transfer/sec:     10.30MB
```

TLDR;

µWS.js and sifrr are significantly slower than agoo-c, and I do not feel like having µWS.js listed at the top based on nothing but random noise and other bottlenecks.

These are the same reasons I don't participate in TechEmpower: quantity over quality and in-depth analysis.

Thanks

ghost commented 4 years ago

What I'm saying is,

https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=plaintext

This test as linked above is completely botched and nonsensical as the top 17 servers all score identically at 99.9%. That is, they are being capped, truncated.

It's like measuring for the tallest man on earth but only bringing a 2 meter long tape measure and then reporting people's length with nanometer precision from that.

2 meter and 0.000000000000004 nanometer
2 meter and 0.000000000000005 nanometer <- WOW THIS MAN IS THE TALLEST! GIVE HIM A MEDAL!!

It gives you entirely and utterly botched results.

mahdisml commented 4 years ago

I think RAM usage is also important, and it is forgotten both in this benchmark and in TechEmpower!

waghanza commented 4 years ago

@alexhultman @mahdisml sure

We will add RAM usage ... before we can be considered stable.
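
Something along these lines, probably (a rough sketch only, not the actual harness code): sample the framework process's resident set size while the benchmark runs and report the peak.

```js
// Rough sketch: track peak resident set size (RSS) inside a Node framework
// while a benchmark run hammers it, and print the peak on shutdown.
let peakRss = 0;

const sampler = setInterval(() => {
  const rss = process.memoryUsage().rss;   // bytes currently resident in RAM
  if (rss > peakRss) peakRss = rss;
}, 100);
sampler.unref();                           // don't keep the process alive just for sampling

process.on('SIGINT', () => {
  console.log(`peak RSS: ${(peakRss / 1024 / 1024).toFixed(2)} MB`);
  process.exit(0);
});
```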

@alexhultman As I understand it, you do not consent to having uWebSockets listed here?

waghanza commented 4 years ago

@alexhultman On the uWebSockets README there is also a comparison between some frameworks, and resource consumption is not revealed (only req/s) :heart:

ghost commented 4 years ago

That's incorrect, the chart states it is CPU-time normalized.

Anyways,

No, I do not want uWS.js to be listed here; as I explained very thoroughly, it would be entirely pointless and inaccurate.

Thanks

waghanza commented 4 years ago

That's incorrect, the chart states it is CPU-time normalized.

But what about RAM consumption?

But, anyway, I'll respect your decision and I'll ping you when this project is more stable :heart:

ghost commented 4 years ago

There is no point in continuing this. I think you're doing pseudo-science here, failing to understand what you are actually trying to measure and why.

Thanks

waghanza commented 4 years ago

No, we are going step by step, @alexhultman. The README warns you about the status of these results.

We are completely aware that the results are not yet ready to be relied on, but what project could claim to be stable in its youth? :stuck_out_tongue:

Resource usage will be added (but I do not know when, because I'm the only core dev here).

No need to argue here, but also no need to defame any project. uWebSockets and this project are not at the same level of maturity, and neither is the scope the same ... It is interesting to point out mistakes, but there is no need to show disrespect to any community or contributor(s).

:heart:

ghost commented 4 years ago

I can see that your results now show a significant difference between agoo and sifrr. Did you change something?

ghost commented 4 years ago

@waghanza Sorry to ping you but,

When I last checked, you had these results:

```
node (12.7) | sifrr (0.0)  | 176261.33 | 155.18 MB
c (11)      | agoo-c (0.5) | 175676.67 | 101.52 MB
```

and now you have these

```
c (11)      | agoo-c (0.5) | 210232.67 | 121.65 MB
node (12.7) | sifrr (0.0)  | 188853.33 | 166.09 MB
```

showing a completely different outcome.

Did you fix anything here? Now the results at least fall in the correct order.

pedrosimao commented 4 years ago

@alexhultman With the new changes, do you consent to the use of uWebSockets.js here? I am very curious to see how uWebSockets.js compares to others. Do you know of any trustworthy benchmark?

waghanza commented 4 years ago

@alexhultman The above table is sorted. The order is dynamic and could change for two reasons:

BTW, the table has changed; we now only display req/s (but other indicators will come in the future).

ghost commented 4 years ago

Now you're listing sifrr at 164423 and agoo-c at 143936. These are highly chaotic reports, entirely contradictory to past reports. You could get more reliable data points by sampling the aurora. I have no interest in partaking in nonsensical pseudo-science.

waghanza commented 4 years ago

No arguing, @alexhultman; I was just explaining why the order switched ...

And moreover, we are not yet in production; running on local Docker is not what I would call production :stuck_out_tongue: