the-benchmarker / web-frameworks

Which is the fastest web framework?

Cloudify #632

Closed waghanza closed 4 years ago

waghanza commented 5 years ago

Hi,

Results are currently computed on a local Docker setup. This decision was taken to make adding frameworks easier, but it distorts the results. The final goal of this tool is to present results that are as close to production as we can get.

The idea behind this PR is to run on a cloud provider, in this case DigitalOcean.

Frameworks:

Display comprehensive results

... has to be defined ...

Regards,

waghanza commented 5 years ago

@OvermindDL1 you will be happy: I've found a way to run on bare metal => https://www.vultr.com/features/baremetal/

but only after all the stuff above is done :stuck_out_tongue:

OvermindDL1 commented 5 years ago

@waghanza Ooo, that would remove all the container noise!

proyb6 commented 5 years ago

Interesting; you may want to run on a server in the Singapore region, as it is well connected within the region. The Skylake processor is definitely faster.

waghanza commented 5 years ago

@OvermindDL1 yeah, the goal is to remove the Dockerfiles (Docker will still be used for CI, for example) and to replace them with auto-generated ones

@proyb6 on DigitalOcean or Vultr? The hardware is not the same, right?

proyb6 commented 5 years ago

@waghanza I'm on an UpCloud VPS. I have no experience with dedicated servers

waghanza commented 5 years ago

@proyb6 OK, I've picked DigitalOcean because I can easily work with it (the CLI is great ...), but nothing is set in stone

waghanza commented 5 years ago

@aichholzer In the node app.js, the port number (3000) is hard-coded. Is there a way to pass the port number to pm2 (env or command-line option) to avoid hard-coding it?
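
One possible way to avoid the hard-coded port (a sketch, not the repo's actual setup; the `PORT` variable name is an assumption) is to read it from the environment in app.js and pass it when starting the app with pm2:

```sh
# app.js would read the port instead of hard-coding it, e.g.:
#   const port = process.env.PORT || 3000;
# pm2 captures the environment present when the process is first started:
PORT=3000 pm2 start app.js
```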

waghanza commented 5 years ago

I can successfully run php / node / ruby / python :tada: (one implementation per language)

:warning::warning::warning::warning::warning::warning::warning::warning::warning:

DEVELOPER PREVIEW

:warning::warning::warning::warning::warning::warning::warning::warning::warning:

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| ruby (2.5) | sinatra (2.0) | 50.85 ms | 49.55 ms | 90.62 ms | 97.49 ms | 3833.14 ms | 120721.33 |
| node (8.11) | express (4.16) | 306.04 ms | 269.80 ms | 338.58 ms | 1603.92 ms | 2962.35 ms | 231825.67 |
| python (3.6) | flask (1.0) | 326.56 ms | 155.02 ms | 489.84 ms | 3705.62 ms | 7878.10 ms | 641034.67 |
| php (7.2) | slim (3.11) | 245.22 ms | 53.03 ms | 568.62 ms | 2908.39 ms | 7559.61 ms | 584944.67 |

waghanza commented 5 years ago

@qti3e @aichholzer I have successfully added all node frameworks, except foxify (https://github.com/foxifyjs/foxify/issues/4) and polka using turbo

@aichholzer I use ssh to execute the commands (environment setup) for each framework. Don't hesitate to take a look at https://github.com/waghanza/http-benchmark/blob/cloudify/tools/jobs/cloud/digitalocean.yml#L69 since I do not know node best practices

waghanza commented 5 years ago

Hi,

I've successfully performed a benchmark on 8 vCPU (32 GB RAM) droplets over a private 2 Gb/s fiber channel link provided by https://digitalocean.com

:tada: The results are completely different from those on docker

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| ruby (2.5) | sinatra (2.0) | 11.75 ms | 10.15 ms | 16.63 ms | 30.69 ms | 3842.26 ms | 47078.00 |
| node (11.1) | express (4.16) | 42.83 ms | 36.29 ms | 47.22 ms | 227.59 ms | 626.45 ms | 33512.67 |
| node (11.1) | restana (2.3) | 62.72 ms | 28.94 ms | 174.95 ms | 466.94 ms | 2425.04 ms | 102428.00 |
| node (11.1) | polka (0.5) | 41.14 ms | 36.19 ms | 45.05 ms | 221.00 ms | 559.10 ms | 30073.67 |
| node (11.1) | rayo (1.2) | 62.65 ms | 27.85 ms | 179.93 ms | 464.59 ms | 2378.61 ms | 102432.00 |
| node (11.1) | koa (2.6) | 27.95 ms | 21.00 ms | 35.70 ms | 158.78 ms | 723.10 ms | 34189.67 |
| node (11.1) | restify (7.2) | 43.66 ms | 28.47 ms | 83.20 ms | 263.69 ms | 1106.70 ms | 54632.67 |
| node (11.1) | hapi (17.7) | 41.18 ms | 36.84 ms | 48.98 ms | 196.08 ms | 515.85 ms | 26186.67 |
| python (3.7) | flask (1.0) | 46.58 ms | 36.90 ms | 64.77 ms | 245.19 ms | 867.70 ms | 41894.33 |
| php (7.2) | slim (3.11) | 42.52 ms | 27.08 ms | 66.61 ms | 248.01 ms | 1416.86 ms | 52541.00 |
| php (7.2) | symfony (4.1) | 68.91 ms | 34.02 ms | 167.55 ms | 526.67 ms | 1018.86 ms | 108240.00 |
| php (7.2) | laravel (5.7) | 60.79 ms | 28.40 ms | 169.77 ms | 457.83 ms | 2516.27 ms | 99251.67 |

waghanza commented 5 years ago

These results are also very different from TFB's https://www.techempower.com/benchmarks/#section=data-r17&hw=cl&test=plaintext&f=zhazn3-zijawv-zik0z3-zik0zj-zijunz-zik0zj-zik0zj-v2qiv3-e3

Any idea @OvermindDL1 @aichholzer ?

waghanza commented 5 years ago

Hi,

I have run another series of benchmarks, and here are the results:

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| ruby (2.5) | sinatra (2.0) | 153.06 ms | 17.64 ms | 340.47 ms | 1314.46 ms | 7041.20 ms | 313760.00 |
| node (11.1) | express (4.16) | 83.40 ms | 19.34 ms | 174.61 ms | 931.55 ms | 3122.20 ms | 177937.00 |
| node (11.1) | restana (2.3) | 146.97 ms | 20.60 ms | 325.35 ms | 1247.90 ms | 5860.98 ms | 272487.00 |
| node (11.1) | fastify (1.13) | 110.68 ms | 15.18 ms | 233.89 ms | 1039.58 ms | 6278.89 ms | 239866.67 |
| node (11.1) | polka (0.5) | 172.78 ms | 18.53 ms | 399.96 ms | 1492.22 ms | 6587.86 ms | 323867.67 |
| node (11.1) | rayo (1.2) | 186.94 ms | 14.95 ms | 427.21 ms | 2044.79 ms | 6387.62 ms | 391802.00 |
| node (11.1) | koa (2.6) | 312.13 ms | 49.46 ms | 840.66 ms | 3521.10 ms | 7358.35 ms | 668807.33 |
| node (11.1) | restify (7.2) | 182.91 ms | 14.19 ms | 458.55 ms | 1888.12 ms | 5953.58 ms | 371578.67 |
| node (11.1) | hapi (17.7) | 212.23 ms | 16.32 ms | 482.96 ms | 2724.93 ms | 7135.48 ms | 489910.33 |
| python (3.7) | flask (1.0) | 131.49 ms | 15.41 ms | 322.20 ms | 1199.11 ms | 4660.80 ms | 236766.33 |
| php (7.2) | slim (3.11) | 196.76 ms | 30.62 ms | 451.03 ms | 1993.57 ms | 7405.95 ms | 409774.33 |
| php (7.2) | symfony (4.1) | 190.51 ms | 13.92 ms | 471.72 ms | 1889.59 ms | 7307.76 ms | 388696.00 |
| php (7.2) | laravel (5.7) | 152.97 ms | 16.22 ms | 344.71 ms | 1547.87 ms | 7128.82 ms | 341765.00 |

Does anyone have an idea about the fluctuation?

waghanza commented 5 years ago

I think I understand. The problem was that parallelism (neph) was messing up the results. We can use --seq on neph to run tasks one by one, and here it is:

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| ruby (2.5) | sinatra (2.0) | 12.90 ms | 8.93 ms | 29.36 ms | 60.69 ms | 204.28 ms | 13101.67 |
| node (11.1) | express (4.16) | 54.01 ms | 51.15 ms | 65.42 ms | 90.74 ms | 428.97 ms | 16574.33 |
| node (11.1) | restana (2.3) | 53.54 ms | 51.03 ms | 64.72 ms | 85.96 ms | 481.57 ms | 18229.00 |
| node (11.1) | fastify (1.13) | 52.71 ms | 50.12 ms | 63.73 ms | 87.48 ms | 522.52 ms | 18913.00 |
| node (11.1) | polka (0.5) | 57.44 ms | 54.53 ms | 67.78 ms | 119.29 ms | 587.14 ms | 22379.00 |
| node (11.1) | rayo (1.2) | 55.46 ms | 52.76 ms | 66.26 ms | 99.19 ms | 575.22 ms | 18112.33 |
| node (11.1) | koa (2.6) | 62.28 ms | 58.14 ms | 77.75 ms | 145.50 ms | 686.83 ms | 23948.33 |
| node (11.1) | restify (7.2) | 53.66 ms | 48.78 ms | 70.45 ms | 191.47 ms | 461.33 ms | 28574.00 |
| node (11.1) | hapi (17.7) | 56.69 ms | 49.92 ms | 74.54 ms | 205.75 ms | 838.00 ms | 41646.00 |
| python (3.7) | flask (1.0) | 54.59 ms | 49.58 ms | 81.86 ms | 156.30 ms | 561.55 ms | 28670.00 |
| php (7.2) | slim (3.11) | 58.22 ms | 51.05 ms | 82.30 ms | 131.54 ms | 529.87 ms | 25751.67 |
| php (7.2) | symfony (4.1) | 100.92 ms | 45.10 ms | 194.45 ms | 916.85 ms | 1130.53 ms | 180127.00 |
| php (7.2) | laravel (5.7) | 54.31 ms | 48.57 ms | 60.66 ms | 249.58 ms | 844.05 ms | 35064.67 |

waghanza commented 5 years ago

@johng-cn I am working on gf. First, I compile on Ubuntu 18.10 (Go 1.11) and upload the binary to a droplet, but I get a segmentation fault when running it (compiling on the droplet works)

here is my dockerfile => https://github.com/waghanza/http-benchmark/blob/cloudify/go/gf/digitalocean.dockerfile

@kataras @appleboy Is it a best practice in Go to compile before deploying?

gqcn commented 5 years ago

@waghanza The segmentation fault is surely a runtime error. If you compile and run on different machines, you may need static/cross compilation. Try changing your RUN go build . to RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build . in the dockerfile (or you can set the three ENV variables CGO_ENABLED, GOOS, and GOARCH separately).

I am not sure what GOOS and GOARCH should be set to for the droplet, but you can find the fully supported GOOS and GOARCH values at https://golang.org/doc/install/source .
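
For reference, a minimal sketch of the change suggested above (base image, paths, and binary name are assumptions, not the actual digitalocean.dockerfile):

```dockerfile
FROM golang:1.11
WORKDIR /go/src/app
COPY . .
# Statically linked build so the binary does not depend on the build image's libc
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server .
CMD ["./server"]
```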

waghanza commented 5 years ago

@johng-cn Do I need cross-compilation even if I compile and execute on the same system (Ubuntu 18.10)?

gqcn commented 5 years ago

@waghanza It's sometimes not necessary if you compile and run on the same OS (Ubuntu 18.10). But I suggest using cross-compilation when you want to distribute the binary. Have a try; if the error persists, call me.

waghanza commented 5 years ago

@johng-cn On which number? :stuck_out_tongue: It seems to fail with the same (or maybe a different) segmentation fault with https://github.com/waghanza/http-benchmark/blob/cloudify/go/gf/digitalocean.dockerfile

gqcn commented 5 years ago

@waghanza Well, let me check it locally.

waghanza commented 5 years ago

@johng-cn The server binary compiled with bin/benchmark compile -l go -f gf (in go/gf) runs on my machine, which is not an Ubuntu

gqcn commented 5 years ago

@waghanza OK, I compiled gf using your digitalocean.dockerfile and then deployed it (scp'd the binary out of docker) to another Linux OS (Ubuntu Server 14.04). It runs smoothly.

If you want to cross-compile it and then run it on your own machine, I think you should change the GOOS environment variable in the dockerfile. What's your OS? If macOS, you should use darwin, like CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build . 🤣

waghanza commented 5 years ago

@johng-cn I'm on linux -> fedora

I'll retry this; I think I have found the issue -> it's in my upload process :stuck_out_tongue_closed_eyes:

gqcn commented 5 years ago

@waghanza OK, nice 😂 .

waghanza commented 5 years ago

@johng-cn The error was here => https://github.com/waghanza/http-benchmark/blob/cloudify/tools/providers/digitalocean.cr#L156 (this uses libssh2)

waghanza commented 5 years ago

@johng-cn Is there a recommended way to run a Go app in the background? systemd or something else?

gqcn commented 5 years ago

@waghanza systemd or service can make simple things a little bit complicated. The simplest way to run a process in the background on Linux is using nohup or &. And sometimes, when in an SSH connection, I may use tmux.
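
For reference, the simple approaches mentioned above might look like this (binary name and log path are illustrative):

```sh
# Detach the server from the shell so it survives logout
nohup ./gf > gf.log 2>&1 &

# Or keep it in a tmux session while working over SSH
tmux new-session -s bench './gf'
```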

waghanza commented 5 years ago

@johng-cn Why complicated? The configuration files are intuitive. But the idea here is to use whatever is actually used when running Go-based frameworks in production.

Personally, in PHP I use nginx / php-fpm, and I use systemd in the Ruby and Python worlds, but I want to use whatever the community behind each language uses.

gqcn commented 5 years ago

@waghanza The "complicated" might just be my personal opinion; I always just want a simple command like nohup ./gf & to fit my needs.

waghanza commented 5 years ago

@johng-cn You're right, it's a personal matter, but I think that systemctl (or any tool like it) will avoid some errors, for example because systemctl can take care of the app (restart / reload)

I'm taking inspiration from https://beego.me/docs/deploy/supervisor.md
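
For comparison with the supervisor approach, a systemd unit could be as small as this sketch (unit name, description, and binary path are assumptions):

```ini
# /etc/systemd/system/gf-benchmark.service (hypothetical name and path)
[Unit]
Description=gf benchmark target
After=network.target

[Service]
ExecStart=/opt/benchmark/gf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable --now gf-benchmark` would then start the app and restart it if it crashes.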

gqcn commented 5 years ago

@waghanza If the compiled standalone Go binary supports start/stop/restart/reload (etc.) command options, the best choice is surely systemd/service.

gqcn commented 5 years ago

@waghanza Yes, we also use supervisor a lot; it keeps a process running in the background (if the process exits, supervisor will start a new one), and it's quite simple.

waghanza commented 5 years ago

@johng-cn I finally ran a benchmark on Go-based frameworks, and here are the results :tada:

@kataras This might interest you

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| go (1.11) | iris (11.1) | 16.58 ms | 15.72 ms | 18.63 ms | 22.54 ms | 307.28 ms | 11009.67 |
| go (1.11) | chi (3.3) | 16.72 ms | 15.89 ms | 18.91 ms | 28.84 ms | 283.01 ms | 9655.67 |
| go (1.11) | beego (1.11) | 17.01 ms | 16.00 ms | 19.06 ms | 28.37 ms | 337.62 ms | 11749.33 |
| go (1.11) | muxie (1.0) | 17.38 ms | 16.21 ms | 19.64 ms | 27.39 ms | 426.39 ms | 12453.67 |
| go (1.11) | fasthttprouter (0.1) | 17.48 ms | 16.35 ms | 19.45 ms | 33.54 ms | 302.09 ms | 11220.67 |
| go (1.11) | gf (1.2) | 20.02 ms | 17.54 ms | 22.61 ms | 104.20 ms | 302.25 ms | 17683.67 |
| go (1.11) | gorilla-mux (1.6) | 19.19 ms | 17.07 ms | 24.19 ms | 42.53 ms | 300.24 ms | 13792.33 |
| go (1.11) | echo (3.3) | 27.70 ms | 21.52 ms | 31.70 ms | 203.95 ms | 360.03 ms | 28702.33 |
| go (1.11) | gin (1.3) | 30.48 ms | 23.44 ms | 32.66 ms | 217.21 ms | 502.13 ms | 32082.33 |

Hardware (sieger and targets)

Sieger (https://github.com/wg/wrk)

Targets (each framework)

Network

@the-benchmarker/web-frameworks Let me know if any other info could be useful

gqcn commented 5 years ago

@waghanza It looks really nice 😄 .

kataras commented 5 years ago

Awesome @waghanza!

waghanza commented 5 years ago

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| go (1.11) | beego (1.11) | 48.92 ms | 15.00 ms | 70.81 ms | 770.70 ms | 1799.36 ms | 136776.00 |
| go (1.11) | echo (3.3) | 40.04 ms | 15.61 ms | 52.79 ms | 629.62 ms | 997.28 ms | 105541.00 |
| go (1.11) | iris (11.1) | 38.19 ms | 15.62 ms | 33.66 ms | 645.67 ms | 1203.42 ms | 106510.00 |
| go (1.11) | muxie (1.0) | 40.87 ms | 15.28 ms | 33.30 ms | 632.58 ms | 883.49 ms | 107002.67 |
| go (1.11) | gf (1.2) | 29.72 ms | 15.93 ms | 23.55 ms | 416.54 ms | 848.74 ms | 67943.00 |
| go (1.11) | gin (1.3) | 39.92 ms | 15.27 ms | 51.26 ms | 643.05 ms | 1500.39 ms | 109189.33 |
| go (1.11) | gorilla-mux (1.6) | 39.11 ms | 15.10 ms | 52.26 ms | 625.44 ms | 1638.02 ms | 109363.67 |
| go (1.11) | fasthttprouter (0.1) | 47.98 ms | 15.25 ms | 72.85 ms | 776.05 ms | 3347.11 ms | 161084.33 |
| go (1.11) | chi (3.3) | 38.76 ms | 15.36 ms | 22.14 ms | 615.58 ms | 978.20 ms | 103353.67 |
| rust (1.30) | gotham (0.3) | 28.23 ms | 15.69 ms | 22.42 ms | 371.85 ms | 960.00 ms | 66043.67 |

Hardware (sieger and targets)

Sieger (https://github.com/wg/wrk)

Targets (each framework)

Network

waghanza commented 5 years ago

| Language (Runtime) | Framework (Middleware) | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|---|
| go (1.11) | echo (3.3) | 22.18 ms | 20.99 ms | 23.93 ms | 33.84 ms | 418.25 ms | 14012.33 |
| go (1.11) | fasthttprouter (0.1) | 21.84 ms | 21.01 ms | 23.90 ms | 28.47 ms | 280.69 ms | 11572.67 |
| rust (1.30) | actix-web (0.7) | 22.33 ms | 21.10 ms | 24.10 ms | 58.97 ms | 416.30 ms | 15489.33 |
| go (1.11) | gorilla-mux (1.6) | 22.46 ms | 21.39 ms | 24.37 ms | 32.87 ms | 335.86 ms | 13107.00 |
| go (1.11) | muxie (1.0) | 22.30 ms | 21.40 ms | 24.51 ms | 29.40 ms | 327.07 ms | 12216.33 |
| go (1.11) | chi (3.3) | 22.09 ms | 21.40 ms | 24.48 ms | 29.21 ms | 317.96 ms | 10870.33 |
| go (1.11) | iris (11.1) | 22.43 ms | 21.46 ms | 24.43 ms | 31.56 ms | 352.99 ms | 13198.33 |
| go (1.11) | gin (1.3) | 23.10 ms | 21.62 ms | 24.88 ms | 51.03 ms | 451.92 ms | 14115.67 |
| go (1.11) | beego (1.11) | 22.42 ms | 21.66 ms | 24.75 ms | 29.04 ms | 314.15 ms | 10479.67 |
| rust (1.30) | gotham (0.3) | 22.58 ms | 21.79 ms | 24.86 ms | 29.72 ms | 305.90 ms | 11448.33 |
| rust (nightly) | rocket (0.3) | 47.36 ms | 23.42 ms | 36.60 ms | 497.96 ms | 2531.89 ms | 115728.67 |

Hardware (sieger and targets)

Sieger (https://github.com/wg/wrk)

Targets (each framework)

Network

noelzubin commented 4 years ago

How different is this so-called 'docker noise' from the performance on a non-containerized machine?

waghanza commented 4 years ago

Hi @noelzubin,

The noise gap could be around 40%, but it is not so easy to measure this impact.

BTW, this branch is quite outdated and will be totally dropped / recreated when ... I have the time :heart: