Open ShadowJonathan opened 2 years ago
Is there some workaround to ensure that the number of TCP connections per host never exceeds a certain limit? The number of requests in progress could be anything, because HTTP 1.1 and HTTP 2 allow multiplexing, and the only issue here is the connections, right? @seanmonstar seeking your wisdom please.
> HTTP 1.1 and HTTP 2 allow multiplexing
Only HTTP/2 can multiplex.
> Is there some workaround
Not that I'm aware of. This may be easier once hyper moves its pool into various pieces to live in hyper-util.
> Only HTTP/2 can multiplex.
Yeah, I guess it's better not to mix the terminology. I was trying to say that even though HTTP 1.1 pipelining isn't as convenient to use as HTTP 2 multiplexing, it still allows concurrent requests over the same connection.
> This may be easier once hyper moves its pool
Good to know, thanks!
This is kinda coming from https://github.com/seanmonstar/reqwest/issues/386, and probably/mainly also a request for hyper; I want to be able to limit the number of requests that reqwest makes concurrently, per host/IP.
The reason for this is similar to #386: too many requests are made at once. But unlike that issue, this is at the scale of a large process, where adding a semaphore to limit requests from all over the client would add a fair bit of complexity.
However, coming from Python, this seemed like a "solved problem" to me: limit the number of connections per host to a sane amount (10, or 20). Chrome does this as well (6 per host, with advantages gained by using HTTP/2). But it seems that reqwest and hyper do not limit themselves.
Even if opt-in, I'd like there to be an option to limit the number of pooled connections per host, to make hitting that "too many open files" error impossible.
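For anyone landing here before this exists in hyper/reqwest: the per-host semaphore idea above can be sketched as a userland workaround. The following is a minimal, hypothetical `HostLimiter` using only the standard library (a real async client would use something like `tokio::sync::Semaphore` instead of `Mutex` + `Condvar`); the type and its limits are illustrative assumptions, not an existing reqwest/hyper API.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};

/// Hypothetical per-host limiter: blocks until fewer than `max_per_host`
/// in-flight requests are active for the given host. This caps concurrent
/// requests, which indirectly bounds the connections hyper will open.
struct HostLimiter {
    max_per_host: usize,
    // host -> number of currently-claimed slots
    state: Mutex<HashMap<String, usize>>,
    cond: Condvar,
}

impl HostLimiter {
    fn new(max_per_host: usize) -> Arc<Self> {
        Arc::new(Self {
            max_per_host,
            state: Mutex::new(HashMap::new()),
            cond: Condvar::new(),
        })
    }

    /// Wait until a slot for `host` is free, then claim it.
    fn acquire(&self, host: &str) {
        let mut counts = self.state.lock().unwrap();
        while *counts.get(host).unwrap_or(&0) >= self.max_per_host {
            counts = self.cond.wait(counts).unwrap();
        }
        *counts.entry(host.to_string()).or_insert(0) += 1;
    }

    /// Release a slot and wake any blocked waiters.
    fn release(&self, host: &str) {
        let mut counts = self.state.lock().unwrap();
        if let Some(n) = counts.get_mut(host) {
            *n -= 1;
        }
        self.cond.notify_all();
    }
}

fn main() {
    let limiter = HostLimiter::new(2);
    limiter.acquire("example.com");
    limiter.acquire("example.com");
    // A third acquire would block here until a slot is released.
    limiter.release("example.com");
    limiter.acquire("example.com"); // succeeds: back under the cap of 2
    let counts = limiter.state.lock().unwrap();
    println!("active for example.com: {}", counts["example.com"]);
}
```

One caveat: this bounds in-flight requests rather than pooled connections themselves, so idle pooled connections can still accumulate; reqwest's `pool_max_idle_per_host` builder option caps idle connections but, as discussed above, nothing currently caps the total per host.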