Same issue here! If all the workers are busy, all other incoming requests receive a 408.
This is a big issue for me because I generate images that can take up to 10s, so with the default settings I can only serve 8 clients (the default is 8 workers); all the other requests get a timeout.
Setting `client_timeout(0)` doesn't fix the issue :cry:
I created a repo where you can find an example with a benchmark -> https://github.com/Farkal/actix-test-timeout
So does it happen because the request is blocking the worker? What about implementing that work as a future?
I tried, but that doesn't work. I finally fixed it using `web::block`. My repo has been updated with the solution. I think we could add some explanation to the docs warning against blocking the actix thread, because a lot of people will just see the 408 and may have trouble understanding that the main thread is busy.
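For reference, the shape of the fix looks roughly like this (a minimal sketch against the actix-web 4 API; `generate_image` is a hypothetical stand-in for the real image generation):

```rust
use actix_web::{get, web, App, HttpServer, Responder};

// Hypothetical stand-in for the expensive image generation.
fn generate_image() -> Vec<u8> {
    std::thread::sleep(std::time::Duration::from_secs(10));
    Vec::new()
}

#[get("/image")]
async fn image() -> actix_web::Result<impl Responder> {
    // web::block moves the blocking call onto a dedicated thread pool,
    // so this worker can keep accepting other requests in the meantime.
    let bytes = web::block(generate_image).await?;
    Ok(web::Bytes::from(bytes))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(image))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```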
Well, if all the requests are handed to one actor mailbox per worker, it makes sense, given that actors handle messages in order. But I guess we should research the request-processing internals further to fully understand what is actually happening there.
@Farkal If I'm not mistaken, `web::block` is exactly that (implementing that work as a future, with that future just delegating to a thread pool). What you did was essentially move the work out of the thread handling the request, freeing it to handle more requests...
@vmalloc Yeah, but at first I thought actix spawned a new thread for each new request. That was a mistake in my understanding of the actix server. I just think it could be more explicit in the docs.
Yes I agree. The runtime behavior of actix should appear somewhere, like how extractors are run and what’s the blocking behavior. I can try to write a PR for it.
Where in the docs do you think this can appear so that it makes sense?
I think it should be on the server page, at least in the multi-threading section: https://actix.rs/docs/server/. Thank you for the PR :blush:
@Farkal I created a PR adding some information. Currently awaiting some feedback from the community: https://github.com/actix/actix-website/pull/150
@vmalloc Seems great! Thanks!
As correct usage is documented now, can we close this issue?
I do not think that this is only a documentation issue, and imo the documentation fix is incomplete. Here, a single request takes <50ms (some database queries) and it is too expensive to convert this to a future. It can happen that there are 50 parallel requests, which makes the requests take longer (e.g. 500ms) and triggers the observed timeout.
It would be ok if the client generated the timeout because the response has not been seen within 5s.
But it is bad that actix-web generates this error because it erroneously thinks that the client did not respond within 5s.
My dirty hack was to increase `client_timeout()` blindly. But the documented meaning of this function is:

> Defines a timeout for reading client request header. If a client does not transmit the entire set headers within this time, the request is terminated with the 408 (Request Time-out) error.

and it might be useful to limit the client request time in some cases.
At least the `client_timeout()` documentation should be changed to explain this. It might even be useful to rename this method to match the actual behaviour (e.g. to `handshake_timeout()`).
But I would rather see the timeout start ticking only once the server is actually waiting for data from the client.
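For illustration, the hack looks like this (a sketch against the actix-web 3 builder, where the value is in milliseconds):

```rust
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new())
        // Timeout for reading the client request head, in milliseconds.
        // Raising it well above the worst-case handler latency hides the
        // spurious 408s; 0 is documented to disable it.
        .client_timeout(30_000)
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```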
Hi @ensc, I'm not sure I understand the point about 50 parallel requests and why it is expensive to convert to a future. Parallel request processing is exactly the case where you need async; otherwise you might just run 1000 workers in parallel, but that will be more expensive. Running blocking work on a separate thread pool is a common pattern across the whole async Rust ecosystem; AFAIK there is no alternative. It is cheaper to run blocking code on a dedicated thread pool than to start a new thread for every blocking request. It also performs better than waiting until the blocking request completes, and sending data between threads takes significantly less time than a network packet transmission. That's the right way to do it, and it frees workers to read data from incoming requests while waiting for a response from an external resource.

Consider that you have external queries which take 10ms on average. While a worker is blocked, all other connection requests just queue up, whereas if you unblock the worker it can poll the other requests and send those queries in parallel. Now if you get one exceptional request which blocks for 5 seconds, all other requests simply halt, leaving the client requests hanging.
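To illustrate the pattern outside of actix, here is a standalone sketch using tokio's `spawn_blocking` (the same mechanism `web::block` builds on; `blocking_query` is a made-up 10ms call):

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for a blocking call, e.g. a synchronous DB query.
fn blocking_query() -> u64 {
    std::thread::sleep(Duration::from_millis(10));
    42
}

#[tokio::main]
async fn main() {
    let start = Instant::now();
    // Each call runs on tokio's dedicated blocking thread pool, so the
    // async executor threads stay free to poll other tasks meanwhile.
    let handles: Vec<_> = (0..100)
        .map(|_| tokio::task::spawn_blocking(blocking_query))
        .collect();
    for handle in handles {
        handle.await.unwrap();
    }
    // All 100 "queries" overlap instead of taking 100 * 10 ms serially.
    println!("100 blocking queries finished in {:?}", start.elapsed());
}
```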
> why it is expensive to convert to a future
AFAICS, diesel (which I am using) does not support async I/O, and I fear that porting to it (when available) would require a large code rewrite (additional `Send` requirements, many ugly `Arc` hacks). Rust does not seem to support async closures yet, which are required e.g. for transactions (not only) in diesel.
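For context, diesel's transaction API takes a synchronous closure, which is where the async-closure concern comes from (a minimal sketch assuming diesel 2 with the sqlite feature):

```rust
use diesel::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut conn = SqliteConnection::establish(":memory:")?;
    // transaction() takes a *synchronous* closure; an async port would
    // need async closures, which stable Rust does not have yet.
    conn.transaction::<_, diesel::result::Error, _>(|conn| {
        diesel::sql_query("CREATE TABLE t (id INTEGER)").execute(conn)?;
        diesel::sql_query("INSERT INTO t VALUES (1)").execute(conn)?;
        Ok(())
    })?;
    Ok(())
}
```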
Having `client_timeout()` work as documented would help to solve practical issues.
Using diesel in the correct way is as simple as wrapping calls in `web::block` (and `.await`ing) to run them on a background task queue, keeping the HTTP workers free to accept new connections.
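A sketch of that shape (assuming actix-web 4 and diesel 2 with the `r2d2` and `sqlite` features; the `/ping` handler and its trivial query are illustrative only):

```rust
use actix_web::{get, web, App, HttpServer, Responder};
use diesel::prelude::*;
use diesel::r2d2::{ConnectionManager, Pool};

type DbPool = Pool<ConnectionManager<SqliteConnection>>;

#[get("/ping")]
async fn ping(pool: web::Data<DbPool>) -> actix_web::Result<impl Responder> {
    let answer: i64 = web::block(move || {
        // The synchronous diesel call runs on the blocking thread pool,
        // keeping the HTTP worker free to accept new connections.
        let mut conn = pool.get().expect("couldn't get a pooled connection");
        diesel::select(diesel::dsl::sql::<diesel::sql_types::BigInt>("1"))
            .first::<i64>(&mut conn)
    })
    .await?
    .map_err(actix_web::error::ErrorInternalServerError)?;
    Ok(answer.to_string())
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let pool = Pool::builder()
        .build(ConnectionManager::<SqliteConnection>::new(":memory:"))
        .expect("failed to build pool");
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .service(ping)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```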
For large queries this might be the correct way. But for short queries (perhaps <50ms) it is just overkill: it will make the normal case slower and opens vectors for DDoS attacks.
Again, having `client_timeout()` work as documented (and as expected from its naming) would be a better fix.
... and putting SQL queries into `web::block` would mean, in the end, that every HTTP request is executed in its own thread, because there is almost always some session information which is read/updated.
UPDATE: although off-topic for this issue, some numbers:
```
Query ran in 0.6 ms: SELECT 1
Query ran in 0.2 ms: BEGIN
Query ran in 0.7 ms: SELECT "session"."id", "session"."token",
Query ran in 0.9 ms: UPDATE "session" SET "expires" = LEAST("s
Query ran in 12.4 ms: COMMIT
Query ran in 0.4 ms: SELECT "session_setup"."id", "session_set
Query ran in 4.3 ms: UPDATE "session_info" SET "dl_get_tot" =
Query ran in 0.4 ms: SELECT "project_ref"."id", "project_ref".
Query ran in 2.6 ms: SELECT "upload"."id", "upload"."file", "u
Query ran in 0.4 ms: SELECT "file_data"."id", "file_data"."fsi
Query ran in 0.4 ms: SELECT "upload"."id", "upload"."package",
Query ran in 0.2 ms: BEGIN
Query ran in 0.8 ms: UPDATE "session_info" SET "dl_get_hit" =
Query ran in 0.8 ms: UPDATE "file" SET "rtime" = GREATEST(COAL
Query ran in 3.8 ms: COMMIT
Query ran in 0.5 ms: SELECT 1
Query ran in 0.2 ms: BEGIN
Query ran in 0.7 ms: SELECT "session"."id", "session"."token",
Query ran in 10.5 ms: UPDATE "session" SET "expires" = LEAST("
Query ran in 40.3 ms: COMMIT
Query ran in 0.3 ms: SELECT "session_setup"."id", "session_set
Query ran in 138.1 ms: UPDATE "session_info" SET "dl_get_tot"
Query ran in 0.3 ms: SELECT "project_ref"."id", "project_ref".
Query ran in 3.2 ms: SELECT "upload"."id", "upload"."file", "u
Query ran in 0.3 ms: SELECT "file_data"."id", "file_data"."fsi
Query ran in 0.3 ms: SELECT "upload"."id", "upload"."package",
Query ran in 0.2 ms: BEGIN
Query ran in 2.5 ms: UPDATE "session_info" SET "dl_get_hit" =
Query ran in 0.6 ms: UPDATE "file" SET "rtime" = GREATEST(COAL
Query ran in 3.5 ms: COMMIT
```
Where would you set the limit for `web::block`?
You're going to run into these issues either way. When an operation does network I/O, you cannot assume it will take <50ms, no matter how simple the query or how small the data set. Using `web::block` is a well-tested, good, and simple way to make these blocking calls without overloading the HTTP workers.
[sorry; updated my previous comment shortly before I saw your response...]
But `web::block` will increase server load and response times in the normal case.
I can deal with longer response times in rare overload situations (clients will have their own timeouts just in case).
But it is bad that the whole request fails in actix-web with a 408 because `client_timeout()` is triggered in the described way.
Yes, this is indeed off-topic; let's discuss this on the gitter chat, and if needed we can reopen this issue later.
Same issue here :( At the very least, disabling the client timeout should avoid the 408s, but it doesn't work.
I believe there are other problems with this issue. One of them is returning the `408` status code to the client when the server is busy. The RFC says:

> The 408 (Request Timeout) status code indicates that the server did not receive a complete request message within the time that it was prepared to wait. A server SHOULD send the "close" connection option (Section 6.1 of [RFC7230]) in the response, since 408 implies that the server has decided to close the connection rather than continue waiting. If the client has an outstanding request in transit, the client MAY repeat that request on a new connection.
Also, the RFC talks about the 4xx status codes like this:

> The 4xx (Client Error) class of status code indicates that the client seems to have erred. [...]

I believe in this case the client has definitely not erred: it sent the request in time and waited for the answer to come back. I also believe the correct status code would probably be `503`:

> The 503 (Service Unavailable) status code indicates that the server is currently unable to handle the request due to a temporary overload...
And therefore I believe the correct fix would be to separate the former case, where the client erred, from the latter, where the developer(s) erred in how they implemented the handlers.
Maybe this is just a theory, but a server under high pressure - even with a correct handler implementation - could start throwing 408 errors because the two timers (client timeout vs. server timeout) are not separated. These errors would then be classified as client errors by the load balancers, which therefore would not remove the affected instance(s) from the fleet to give them time to recover.
This status code sent us in completely the wrong direction when we tried to track down an issue with MeiliSearch: we thought it was a client error, and that the client codebase was failing to finish sending the request in time, or that there was an issue with the load balancer, when it was actually an issue on the server side.
I am facing this same issue as well.
The START flag could be set earlier: instead of setting it on successful decode of the request, set it when the first read of the stream is polled, and before that happens use the client timeout as the `ka_timer`.
I see this behaviour with a toy server doing no work in the handler other than constructing the response. As soon as I go over 60000 requests/s (on my machine; it will vary on yours, of course), I start to see 408s and also "connection reset by peer". Maybe I am also running into some OS restriction, I don't know. (ulimit -n is ridiculously large, I used several IPs on the loopback device to not run out of ports, etc...)
A minimal reproduction case will help us debug it when time is allocated to the issue.
I'll see if I can provide something this evening (CEST). But "minimal" here is still some work: you need to adjust ulimit, add IPs to the loopback device, and so on. If you are not willing to accept that as "minimal", please tell me so I don't waste my time.
Here you go: https://github.com/typetetris/rust-web-reproducer
It is back. But I doubt it is really helpful.
fakeshadow on the actix discord said that `keep_alive` is misused as a first-request timeout per connection.
I don't know whether that is still the case for actix-web 3.3.2.
Hello :wave:,
Meilisearch is affected by this issue, using actix-web 4.3.1, as reported by users in https://github.com/meilisearch/meilisearch/issues/3653.
For convenience, I put together a "minimal" reproducer for the HTTP 408 that uses only actix-web and ureq to reproduce in a single binary, in https://github.com/dureuill/issue-3653-reproducer. By artificially blocking the worker threads, I notice that HTTP 408 statuses are returned to clients, even though I believe the clients did nothing incorrectly on their end.
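The core of such a reproducer is a handler that deliberately blocks its worker, roughly like this (a sketch, not the exact code from the repo):

```rust
use actix_web::{get, App, HttpServer, Responder};

#[get("/block")]
async fn block() -> impl Responder {
    // Deliberately block the worker thread (an anti-pattern, used here
    // only to starve the server and provoke the 408s).
    std::thread::sleep(std::time::Duration::from_secs(30));
    "done"
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(block))
        // A single worker makes the starvation trivial to trigger.
        .workers(1)
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```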
My uninformed understanding of the expected behavior is that incoming connections should be kept pending while all worker threads are busy, or, worst case, if the server is overloaded it should answer with an HTTP 503. This is what https://github.com/actix/actix-web/issues/1269#issuecomment-764618150 upthread suggests as well, but I have no personal experience writing an HTTP server, so it could be different in practice.
I noticed that this issue is closed, so maybe there's a simple configuration fix I'm missing. If that is the case, could you point me in the right direction for what configuration to apply?
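For what it's worth, the knob that replaced `client_timeout()` in actix-web 4 is `client_request_timeout()`, whose docs say a zero duration disables it; whether that actually avoids these 408s under load is exactly what's in question here (a sketch):

```rust
use std::time::Duration;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new())
        // actix-web 4 renamed client_timeout() to client_request_timeout();
        // per its docs, a zero duration disables the timeout.
        .client_request_timeout(Duration::ZERO)
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```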
Thank you for reading and maintaining actix-web :heart:
I see occasional "408 Request Timeout" errors when the system is busy. Tracing it shows e.g.
on the actix-web server and
on the client (nginx proxy).
E.g. it appears actix-web does:
Here, there are more than 5 seconds between steps 2 and 3, so the timeout triggers.
I am not sure why there are more than 5 seconds between steps 1 and 3; usually it is <10 ms, but in cases like the above it affects a cluster of connections.
But imo it would be better to start the timer only when we are ready to handle the request (e.g. at the moment of the first `EPOLL_CTL_ADD`).

actix-web 2.0.0, actix-rt 1.0.0, Linux 5.4.7