bvinc opened this issue 6 years ago
I believe this is a bug in hyper, with the default dispatcher it uses. hyper has a new dispatcher in progress that should shut down the connection task properly, but it isn't the default quite yet, and I haven't enabled it in reqwest in case more bugs are found.
I believe the bugs have been cleaned up, so once it's made the default in hyper, this should hopefully be fixed here too.
Just an update. I tested it recently and I'm still seeing the same behavior in reqwest 0.9.2.
Thanks for the update! Yeah, looks like there are two issues here: the connection isn't torn down when the timeout `Error` is returned, and the timeout doesn't apply while the request body is being written. Fixing the first issue should be simple (running tests now). I haven't looked yet at what would be required to fix the second.
The first part should be fixed with https://github.com/seanmonstar/reqwest/commit/512b80a3ada48a3c60328fc39a7b87a506a94acf.
It shouldn't be too hard to make the timeout aware of writing the request body, but I think the semantics are a little blurry. Currently the timeout acts more like a deadline. It would probably be a good idea to allow more fine-grained timeout options: a write timeout, a response timeout, a read timeout, and perhaps a deadline?
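For concreteness, here is a minimal sketch of the deadline behavior described above, using today's `reqwest::blocking` API (the URL is a placeholder; nothing here is from the original thread):

```rust
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `timeout` is a deadline on the whole exchange: connecting, writing
    // the request body, waiting for the response, and reading it must all
    // complete within 30 seconds combined.
    let client = reqwest::blocking::Client::builder()
        .timeout(Duration::from_secs(30))
        .build()?;

    let resp = client.get("https://example.com").send()?;
    println!("status: {}", resp.status());
    Ok(())
}
```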
I'm not sure. I don't know that many people would want different read and write timeouts, but the option might be nice.
Does a response timeout make sense with HTTP/2, when you could be waiting on multiple requests?
Another thought I had was that some of this duplicates TCP options, though I've always found the default TCP keepalive settings kind of strange: probing only starts after a connection has been idle for 2 hours, probes are sent every 75 seconds, and after 9 failed probes the connection is dropped (those are the Linux defaults). Those checks seem meant for long-running idle TCP connections, which isn't something you'd expect from an HTTP connection. But if I wanted a timeout, I could always just set these TCP options.
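For reference, a sketch of overriding those kernel defaults at the socket level, using the `socket2` crate (my choice for illustration; nothing in this thread depends on it):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

use socket2::{Domain, Socket, TcpKeepalive, Type};

// Replace the kernel defaults (2 h idle / 75 s interval / 9 probes on
// Linux) with a much shorter idle threshold and probe interval.
fn connect_with_keepalive(addr: SocketAddr) -> std::io::Result<TcpStream> {
    let socket = Socket::new(Domain::for_address(addr), Type::STREAM, None)?;
    let keepalive = TcpKeepalive::new()
        .with_time(Duration::from_secs(60))      // start probing after 60 s idle
        .with_interval(Duration::from_secs(10)); // then probe every 10 s
    socket.set_tcp_keepalive(&keepalive)?;
    socket.connect(&addr.into())?;
    Ok(socket.into())
}
```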
Any update on this issue? I think the most common use case is a connect timeout; read/write operations rarely need fine-grained timeout control.
@dremon sorry, what do you mean? That all we need is to add a connect timeout option?
Yeah, I think the timeout property should only apply to the initial connect, and maybe be renamed to "connect_timeout". A global timeout over all operations doesn't make sense at all.
For example, the Jetty HTTP client has 3 options: a connect timeout, an idle timeout (i.e. no traffic), and a DNS resolution timeout.
In addition, each Request instance can have a total request/response timeout (similar to the current reqwest implementation), but that's rarely needed.
I'm wondering whether it would be a huge effort to implement something similar in reqwest.
I believe (though I can be convinced I'm wrong) that having a default overall timeout is a Good Idea. For instance, this post resonates with my experience: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
Still, I believe adding a connect timeout should be pretty straightforward.
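A minimal sketch of that option with reqwest's (now-existing) `connect_timeout` on the builder; the URL is a placeholder:

```rust
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bound only the TCP connect phase; leave the overall deadline unset
    // so a slow-but-active transfer is never cut off mid-stream.
    let client = reqwest::blocking::Client::builder()
        .connect_timeout(Duration::from_secs(5))
        .build()?;

    let resp = client.get("https://example.com").send()?;
    println!("status: {}", resp.status());
    Ok(())
}
```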
I think the issue described in the linked article is more about an idle timeout than a global one for the request/response cycle. If there is no data traffic it does make sense to cancel the request, but terminating a slow yet active transfer after some fixed time is rather a corner case.
What if I want to transfer data of various sizes, including very big ones, over a (potentially) slow line? It's hard to recommend any particular timeout for that.
I agree that timing out by default is good, but it should be configurable.
When you watch resources in the Kubernetes API it's possible to have very long periods of time without any data being sent. I don't want the request to time out at all in that situation.
Just to add my 10¢, an overall timeout can be very useful when the request characteristics are well known and/or consistent but a lot less so when they vary significantly (ex. downloading files of wildly different sizes from clients on cellular networks vs. fibre connections). In that situation, we want to be sure requests are still making progress but can't estimate ahead of time how long they should take overall.
I would also like to have read and write timeouts instead of a request timeout. In tokio this can be achieved with https://github.com/sfackler/tokio-io-timeout/
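A minimal sketch of that approach, assuming tokio 1.x and the tokio-io-timeout crate's `TimeoutStream` (the address parameter is a placeholder):

```rust
use std::time::Duration;

use tokio::net::TcpStream;
use tokio_io_timeout::TimeoutStream;

// Each individual read or write must finish within 30 s, but a long
// transfer that keeps making progress is never aborted as a whole.
async fn connect_with_io_timeouts(addr: &str) -> std::io::Result<TimeoutStream<TcpStream>> {
    let stream = TcpStream::connect(addr).await?;
    let mut stream = TimeoutStream::new(stream);
    stream.set_read_timeout(Some(Duration::from_secs(30)));
    stream.set_write_timeout(Some(Duration::from_secs(30)));
    Ok(stream)
}
```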
To reproduce this issue:
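The original steps aren't preserved here; the following is a hedged reconstruction of the setup, with a placeholder URL and the 30-second timeout implied by the expectation below:

```rust
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Client with the 30-second timeout referenced below.
    let client = reqwest::blocking::Client::builder()
        .timeout(Duration::from_secs(30))
        .build()?;

    // Fetch something large enough (or over a slow enough link) that the
    // transfer is still actively moving data when the 30 seconds elapse.
    // Run with RUST_LOG=trace to watch the transfer around the timeout.
    let bytes = client
        .get("https://example.com/large-file.bin")
        .send()?
        .bytes()?;
    println!("downloaded {} bytes", bytes.len());
    Ok(())
}
```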
If you turn on RUST_LOG=trace, you'll see that the request is still transferring data when it returns saying it timed out. You'll also see that the transfer continues even after the timeout error is returned.
I would expect the timeout to reset every time data is transferred, and to fire only after 30 seconds with no data transferred.