Closed kassane closed 10 months ago
That's not a crash, it's a message telling you a connection request was dropped because the pool was exhausted.
The demo server is started with a mere 2 pre-configured connections, a max of 10 and a timeout of 5 seconds. You're opening 400 connections which each try to keep the connection open for 10 seconds. Seeing rejected connections is expected.
```zig
var server = try httpz.Server().init(allocator, .{.pool = .{.min = 2, .max = 10, .timeout = 5000}});
```
You could make `max = 400`, or remove the `max` setting to make it unlimited. You could increase the timeout to 15 seconds. Setting a higher `min` would help the benchmark too.
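For instance, a sketch of a benchmark-friendly configuration, using the same pool fields shown in the demo snippet above (the exact field names and units are taken from that snippet; verify against the current httpz config struct):

```zig
// Same pool options as the demo, sized so a 400-connection benchmark
// never exhausts the pool. .timeout is in milliseconds here, matching
// the demo's 5000 = 5s.
var server = try httpz.Server().init(allocator, .{
    .pool = .{ .min = 50, .max = 400, .timeout = 15000 },
});
```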
There's a lot of work that could go into the library to make it more robust with respect to keep-alive and timeouts. I've been waiting for async to arrive, since I expect it to change the internal API a lot. Also, I expect most people are going to front this with a reverse proxy, like nginx, for the TLS termination. Such reverse proxies are much better equipped to enforce sane timeouts.
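As an illustration of that reverse-proxy setup, a minimal nginx fragment (ports, certificate paths, and timeout values are all placeholders, not anything from httpz) might look like:

```nginx
# Hypothetical nginx front-end terminating TLS for an httpz backend.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder path

    location / {
        proxy_pass http://127.0.0.1:5882;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # reuse upstream keep-alive connections
        proxy_read_timeout 15s;          # nginx, not the Zig server, enforces timeouts
        proxy_send_timeout 15s;
    }
}
```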
Really, it was my mistake. I've removed the maximum value in `example/simple.zig` and the problem hasn't recurred.
```
Running 10s test @ http://127.0.0.1:5882
  4 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   573.16us  398.30us  28.98ms   99.81%
    Req/Sec    90.10k     1.71k   93.76k    75.25%
  3550709 requests in 10.05s, 1.63GB read
Requests/sec: 353272.36
Transfer/sec:    166.10MB
```
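For context, output of this shape comes from a wrk run matching the figures above (4 threads, 400 connections, 10 seconds); the exact invocation isn't given in the thread, but it would be along these lines:

```shell
# 4 threads, 400 concurrent connections, 10-second run against the demo server
wrk -t4 -c400 -d10s http://127.0.0.1:5882
```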
Until now I've liked your library for its readability and performance, in contrast with my experience using `boost::beast` (which requires `boost::asio`): it is very complex, involving `awaitable<T>` (with C++20 coroutines) and completion tokens, which gave me certain drawbacks...
Although Zig (the self-hosted compiler) doesn't yet have stackless coroutines, my plan is to try coupling it with `libxev`.
I made a brief test attempt based on the zap project, although your project feels more idiomatic to me. However, I don't see an example suitable for HTTP load testing. Would it be possible to show a simple example demonstrating sufficient capacity for benchmarking?
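For what it's worth, a minimal single-handler server in the style of the httpz README from around that time might look like the sketch below. The router call, handler signature, and `port` option are assumptions from that README era and may not match the current API:

```zig
const std = @import("std");
const httpz = @import("httpz");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    // Port and config fields are assumptions; check the httpz README.
    var server = try httpz.Server().init(allocator, .{ .port = 5882 });
    var router = server.router();
    router.get("/", index);
    try server.listen();
}

// Minimal handler: static body, no allocation per request.
fn index(_: *httpz.Request, res: *httpz.Response) !void {
    res.body = "hello";
}
```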