karlseguin / http.zig

An HTTP/1.1 server for zig
MIT License

Example crash during wrk test #19

Closed by kassane 10 months ago

kassane commented 10 months ago

I made a brief test based on the zap project, although your project is more idiomatic to my taste. However, I don't see an example suitable for HTTP load testing.

Would it be possible to show a simple example demonstrating sufficient capacity for benchmarking?

Testing

./zig-out/bin/http.zig\ demo
error: http.zig: failed to acquire request and response object from the pool error.ConnectionPoolExhausted
error: http.zig: failed to acquire request and response object from the pool error.ConnectionPoolExhausted
error: http.zig: failed to acquire request and response object from the pool error.ConnectionPoolExhausted
error: http.zig: failed to acquire request and response object from the pool error.ConnectionPoolExhausted
error: http.zig: failed to acquire request and response object from the pool error.ConnectionPoolExhausted
error: http.zig: failed to acquire request and response object from the pool error.ConnectionPoolExhausted
wrk -t4 -c400 -d10s http://127.0.0.1:5882                   
Running 10s test @ http://127.0.0.1:5882
  4 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.30us   17.27us   3.86ms   86.09%
    Req/Sec    98.82k    14.97k  112.58k    97.55%
  2004992 requests in 10.10s, 0.92GB read
  Socket errors: connect 0, read 494, write 222, timeout 716
  Non-2xx or 3xx responses: 716
Requests/sec: 198587.83
Transfer/sec:     93.34MB
karlseguin commented 10 months ago

That's not a crash, it's a message telling you a connection request was dropped because the pool was exhausted.

The demo server is started with a mere 2 pre-configured connections, a max of 10 and a timeout of 5 seconds. You're opening 400 connections, each of which tries to keep its connection open for the full 10 seconds. Seeing rejected connections is expected.

var server = try httpz.Server().init(allocator, .{.pool = .{.min = 2, .max = 10, .timeout = 5000}});

You could make max = 400, or remove the max setting entirely to make it unlimited. You could increase the timeout to 15 seconds. Setting a higher min would also help the benchmark.
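Combining those suggestions, a minimal sketch of a benchmark-friendly configuration (the `.pool` field names are taken from the line quoted above; the specific values here are illustrative, not recommendations):

```zig
// Raise the pool limits so a 400-connection benchmark isn't rejected.
var server = try httpz.Server().init(allocator, .{
    .pool = .{
        .min = 100, // pre-create enough objects to absorb the initial burst
        .max = 400, // match wrk's -c400; or omit .max to make it unlimited
        .timeout = 15000, // 15s, longer than the 10s benchmark run
    },
});
```

With `.max` at least equal to wrk's connection count, no connection request should find the pool exhausted.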

There's a lot of work that could go into the library to make it more robust with respect to keep alive and timeouts. I've been waiting for async to arrive, since I expect it to change the internal API a lot. Also, I expect most people are going to front this with a reverse proxy, like nginx, for the TLS termination. Such reverse proxies are much better equipped at creating sane timeouts.

kassane commented 10 months ago

> That's not a crash, it's a message telling you a connection request was dropped because the pool was exhausted.
>
> The demo server is started with a mere 2 pre-configured connections, a max of 10 and a timeout of 5 seconds. You're opening 400 connections, each of which tries to keep its connection open for the full 10 seconds. Seeing rejected connections is expected.
>
> var server = try httpz.Server().init(allocator, .{.pool = .{.min = 2, .max = 10, .timeout = 5000}});
>
> You could make max = 400, or remove the max setting entirely to make it unlimited. You could increase the timeout to 15 seconds. Setting a higher min would also help the benchmark.

Really, it was my mistake. I removed the maximum value in example/simple.zig and the problem hasn't recurred.

Running 10s test @ http://127.0.0.1:5882
  4 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   573.16us  398.30us  28.98ms   99.81%
    Req/Sec    90.10k     1.71k   93.76k    75.25%
  3550709 requests in 10.05s, 1.63GB read
Requests/sec: 353272.36
Transfer/sec:    166.10MB

> There's a lot of work that could go into the library to make it more robust with respect to keep alive and timeouts. I've been waiting for async to arrive, since I expect it to change the internal API a lot. Also, I expect most people are going to front this with a reverse proxy, like nginx, for the TLS termination. Such reverse proxies are much better equipped at creating sane timeouts.

So far I've liked your library for its readability and performance, in contrast with my experience using boost::beast (which requires boost::asio): it is very complex, involving awaitable<T> (with C++20 coroutines) and completion tokens, which gave me some drawbacks...

Although Zig (the self-hosted compiler) doesn't yet have stackless coroutines, my plan is to try coupling it with libxev.