SaltyAom / bun-http-framework-benchmark

Compare throughput benchmarks from various Bun HTTP frameworks

Sharing my bench stats. I think Windows might give you a lot of bias #48

Open icetbr opened 10 months ago

icetbr commented 10 months ago

Hi there, just sharing some data for comparison. My results are a bit closer to https://web-frameworks-benchmark.netlify.app/ and https://fastify.dev/benchmarks/ than to yours. I think running the benchmark on Windows might introduce a significant bias.

Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz, Linux Mint 21.2, kernel 5.15.0-82-generic

node v20.3.1, bun 1.0.0

| Framework | Get (/) (req/s) |
| --- | ---: |
| uws (node) | 91,005.16 |
| bunNoRouter (bun) | 89,925.52 |
| stricjs (bun) | 88,766.38 |
| elysia (bun) | 86,841.69 |
| hyper-express (node) | 84,800.68 |
| vixeny (bun) | 84,216.08 |
| hono (bun) | 73,614.81 |
| nhttp (bun) | 72,285.46 |
| bun-web-standard (bun) | 67,617.68 |
| baojs (bun) | 56,594.58 |
| vanillaNoRouter (node) | 32,236.36 |
| fastify (node) | 28,463.80 |
| express (node) | 8,376.99 |

For reference, I added a simpler Bun server:

```js
const text = 'Hello, Bench!';

export default {
  port: 3000,
  fetch() {
    return new Response(text);
  }
};
```

and a vanilla Node http server:

```js
const http = require('http');

http.createServer((request, response) => {
    response.writeHead(200, {
        'Content-Type': 'text/plain'
    });

    response.write('Hi');
    response.end();
}).listen(3000);
```
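
In case anyone wants to reproduce those two extra entries: I just start one server at a time on port 3000 and point the load generator at the root route. A rough sketch, where the file names are placeholders and the bombardier flags are the same ones that appear further down in this thread:

```sh
# Start one of the two servers above (file names are placeholders).
bun run bun-no-router.ts        # the minimal Bun server
node vanilla-http.js            # the vanilla Node http server

# Hammer the root route: -c = concurrent connections, -d = test duration.
bombardier --fasthttp -c 500 -d 10s http://127.0.0.1:3000/
```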

You may close this whenever you wish ;-)

EDIT: my fastify result was wrong; I've fixed it.

aquapi commented 9 months ago

Good to see Stric still there

ItzDerock commented 9 months ago

I decided to give this a go on my Linux machine, and it seems your hypothesis that Windows adds a bias is most likely true, though my results are wildly different from both yours and the original.

I'm running Arch Linux (6.5.2-arch1-1) with node v18.17.1 (latest LTS) and bun v1.0.0. Specs: i9-13900H @ 5.2GHz with 32 GB DDR5 (6400 MT/s).

| Framework | Average | Get (/) | Params, query & header | Post JSON |
| --- | ---: | ---: | ---: | ---: |
| bun (bun) | 811,068 | 201,557.48 | 172,775.44 | 2,058,871.08 |
| elysia (bun) | 810,689.277 | 199,328.65 | 174,090.17 | 2,058,649.01 |
| vixeny (bun) | 807,709.063 | 202,805.12 | 179,064.42 | 2,041,257.65 |
| oak (deno) | 769,371.227 | 167,655.29 | 166,016.09 | 1,974,442.3 |
| nhttp (bun) | 760,450.853 | 198,492.94 | 158,600.42 | 1,924,259.2 |
| abc (deno) | 742,044.633 | 167,864.44 | 166,271.58 | 1,891,997.88 |
| stricjs (bun) | 740,831.373 | 201,589.74 | 184,249.6 | 1,836,654.78 |
| acorn (deno) | 739,010.19 | 164,892.01 | 162,335.82 | 1,889,802.74 |
| uws (node) | 734,322.513 | 226,374.87 | 211,539.77 | 1,765,052.9 |
| hyper-express (node) | 732,446.177 | 209,727.35 | 181,459.35 | 1,806,151.83 |
| bun-web-standard (bun) | 719,066.187 | 191,303.64 | 161,239.26 | 1,804,655.66 |
| cheetah (deno) | 717,651.5 | 163,672.6 | 163,822.32 | 1,825,459.58 |
| hono (deno) | 710,533.447 | 165,802.1 | 165,663.35 | 1,800,134.89 |
| fastify (node) | 708,772.593 | 105,254.87 | 97,829.15 | 1,923,233.76 |
| hono (bun) | 708,379.537 | 196,504.66 | 164,097.73 | 1,764,536.22 |
| baojs (bun) | 702,769.417 | 148,215.87 | 127,649.33 | 1,832,443.05 |
| adonis (node) | 702,456.737 | 168,147.99 | 165,628.37 | 1,773,593.85 |
| fast (deno) | 697,632.18 | 165,126.92 | 164,806.28 | 1,762,963.34 |
| hyperbun (bun) | 690,841.04 | 159,231.51 | 132,192.09 | 1,781,099.52 |
| nbit (bun) | 687,186.48 | 143,390.67 | 127,340.88 | 1,790,827.89 |
| h3 (node) | 657,640.83 | 95,482.43 | 74,931.17 | 1,802,508.89 |
| express (bun) | 651,074.947 | 52,466.33 | 47,152.22 | 1,853,606.29 |
| koa (node) | 649,649.55 | 66,286.44 | 63,136.92 | 1,819,525.29 |
| hono (node) | 635,528.963 | 20,781.76 | 19,828.52 | 1,865,976.61 |
| hapi (node) | 619,596.153 | 47,203.01 | 19,657.76 | 1,791,927.69 |
| express (node) | 617,255.07 | 20,499.82 | 19,872.63 | 1,811,392.76 |
| nest (node) | 604,784.157 | 19,265.87 | 17,871.22 | 1,777,215.38 |
icetbr commented 9 months ago

I think your Post JSON tests failed; that's why their numbers are so high.

aquapi commented 9 months ago

@ItzDerock Post JSON cannot be faster than GET `/`. You should rebench.

ItzDerock commented 9 months ago

> @ItzDerock Post JSON cannot be faster than GET `/`. You should rebench.

@aquapi

Thanks for pointing that out. After rerunning the benchmark, I noticed that the POST tests were reporting errors:

```
$ bombardier --fasthttp -c 500 -d 10s -m POST -H 'Content-Type: application/json' -f ./scripts/body.json http://localhost:3000/json
Bombarding http://localhost:3000/json for 10s using 500 connection(s)
[====================================================================================================================================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec   1815470.31   75734.29 2030540.86
  Latency      269.01us   175.66us    23.01ms
  HTTP codes:
    1xx - 0, 2xx - 0, 3xx - 0, 4xx - 0, 5xx - 0
    others - 18155558
  Errors:
    dial tcp: missing address - 18155558
  Throughput:       0.00/s
```

while the GET tests were running fine:

```
$ bombardier --fasthttp -c 500 -d 10s http://localhost:3000/id/1?name=bun
Bombarding http://localhost:3000/id/1?name=bun for 10s using 500 connection(s)
[====================================================================================================================================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec    207543.91   39597.49  284050.48
  Latency        2.41ms     1.46ms   101.77ms
  HTTP codes:
    1xx - 0, 2xx - 2069081, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    38.86MB/s
```

Even after switching localhost to 127.0.0.1 (we should probably avoid using localhost anyway), I still get the strange `dial tcp: missing address` error:

```
$ bombardier --fasthttp -c 500 -d 10s -m POST -H 'Content-Type: application/json' -f ./scripts/body.json http://127.0.0.1:3000/json
Bombarding http://127.0.0.1:3000/json for 10s using 500 connection(s)
[===============================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec   1703773.22  151686.80 1904762.95
  Latency      286.93us   204.10us    28.77ms
  HTTP codes:
    1xx - 0, 2xx - 0, 3xx - 0, 4xx - 0, 5xx - 0
    others - 17038103
  Errors:
    dial tcp: missing address - 17038103
  Throughput:       0.00/s
```
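
A single manual request is an easy way to rule out the server or the route itself; the JSON body below is just an inline placeholder, not the actual contents of ./scripts/body.json:

```sh
# One-off POST against the same route to check that the endpoint itself answers.
curl -i -X POST \
  -H 'Content-Type: application/json' \
  -d '{"hello":"world"}' \
  http://127.0.0.1:3000/json
```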

I will probably create another issue about this so I don't continue to clog up this thread.

(and yes, I have double-checked my /etc/hosts file, and it contains the correct entries for my hostname and localhost)
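
For reference, the stock entries there usually look something like the sketch below; the last line is a placeholder for the machine's actual hostname:

```
# /etc/hosts: typical default entries
127.0.0.1   localhost
::1         localhost
127.0.1.1   my-hostname
```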