Open berezovskyi opened 3 years ago
The simplest idea I have is to switch to a new client such as https://square.github.io/okhttp/, which has all the async/retry logic done for us.
@DavidJHoney, it would be good to hear your thoughts on this; I am making a record here because you mentioned resource pool exhaustion.
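To illustrate the kind of retry logic such a client would take off our hands, here is a minimal stand-alone sketch of retry with exponential backoff. This is a hypothetical helper, not OkHttp's actual API (OkHttp handles this through its own settings and interceptors); the names are mine.

```java
import java.util.concurrent.Callable;

/** Sketch: retry with exponential backoff (hypothetical helper, not OkHttp API). */
public class TransientRetry {
    /**
     * Runs the call, retrying up to maxRetries times when it throws,
     * doubling the delay between attempts. Real code would retry only
     * on genuinely transient conditions rather than on any exception.
     */
    public static <T> T withRetry(Callable<T> call, int maxRetries,
                                  long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;
        for (int attempt = 0; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxRetries) throw e; // out of retries: rethrow
                Thread.sleep(delay);
                delay *= 2;
            }
        }
    }
}
```

A dedicated client library would layer connection reuse and request cancellation on top of this, which is exactly why reusing one looks more attractive than growing our own.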
Hi Andrew,
Some services, such as creating baselines, might be long-running operations and return 202 Accepted with a Location header. See the OSLC Configuration Management spec. Callers would want to know the URIs of the oslc_config:Activity resources for each POST so they can poll them for progress and completion status.
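A minimal sketch of that polling pattern, with the HTTP GET abstracted away behind a status supplier (the helper and its names are illustrative, not from the OSLC spec or Lyo):

```java
import java.util.function.IntSupplier;

/** Sketch: poll a long-running operation until it stops returning 202 Accepted. */
public class ActivityPoller {
    /**
     * Polls until the status check no longer returns 202, sleeping between
     * attempts. statusCheck stands in for a GET on the activity URI taken
     * from the Location header of the original POST. Returns the final status.
     */
    public static int pollUntilDone(IntSupplier statusCheck,
                                    long intervalMillis,
                                    int maxAttempts) throws InterruptedException {
        int status = statusCheck.getAsInt();
        for (int attempt = 1; status == 202 && attempt < maxAttempts; attempt++) {
            Thread.sleep(intervalMillis);
            status = statusCheck.getAsInt();
        }
        return status;
    }
}
```

A real implementation would also inspect the oslc_config:Activity resource body for progress and completion state rather than relying on the status code alone.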
One technique I have seen uses multi-part requests, where the response is a multi-part body giving the response for each part of the request.
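As a rough sketch of what assembling such a batch body could look like: the boundary layout below is illustrative only (loosely following multipart conventions), not a format defined by any OSLC spec.

```java
import java.util.List;

/** Sketch: assemble a multipart batch request body (illustrative format). */
public class BatchBody {
    /** Wraps each part in a boundary-delimited section and closes the body. */
    public static String build(String boundary, List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append("--").append(boundary).append("\r\n")
              .append("Content-Type: application/http\r\n\r\n")
              .append(part).append("\r\n");
        }
        sb.append("--").append(boundary).append("--\r\n"); // closing delimiter
        return sb.toString();
    }
}
```

The server would answer with a matching multipart response, one section per submitted part, so the client can correlate statuses and Location headers per resource.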
Best regards, David.
One design consideration is to limit the number of concurrent HTTP requests. Many web servers have a finite-sized connection pool. Having a single request spawn multiple concurrent HTTP requests to such a server could reduce the server’s ability to process other HTTP requests and, in the extreme, exhaust the connection pool as if it were a DoS attack.
For OSLC discovery, I recently made a change to some Rational ELM code so that a discovery request could perform concurrent GETs of service providers, but used a fixed-size thread pool to limit the number of concurrent requests to 5. This improves performance over a sequential implementation while balancing the risk of consuming too many connections from the connection pool.
I’d be cautious about suggesting or supporting unbounded concurrent HTTP requests spawned from a single request.
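The fixed-size-pool approach described above could be sketched as follows. The `fetchAll` helper and the `fetch` function are placeholders of my own (the latter standing in for the HTTP GET of one service provider), not the actual ELM code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

/** Sketch: bounded concurrent fetches via a fixed-size pool of 5 threads. */
public class BoundedDiscovery {
    static final int MAX_CONCURRENT = 5;

    /** Submits one task per URI; at most MAX_CONCURRENT run at any time. */
    public static List<String> fetchAll(List<String> uris,
                                        Function<String, String> fetch)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(MAX_CONCURRENT);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String uri : uris) {
                futures.add(pool.submit(() -> fetch.apply(uri)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // preserves submission order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The pool size caps how many server connections one discovery request can hold at once, which is the point of the design: throughput improves over sequential GETs without risking the server's connection pool.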
Thanks David! Will keep 202 Accepted in mind when I read the Config primer and the spec. W.r.t. concurrency, it's not our problem but that of the server implementers (e.g., use nginx to give the backend a concurrency of 10 requests and queue the rest) and of the clients (adjust the concurrency). My code uses https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/-dispatchers/-i-o.html, which makes up to max(64, vcpus) requests at a time by default unless told otherwise.
Just a note for the future: such a client would need to deal gracefully with transient faults like:
During my work on the OSLC 2020 Reference Implementation (by the way, it's ready for all to test and play around with), I wrote a Kotlin coroutine wrapper around Lyo Client to POST many resources concurrently (see the code); it achieves around 250 successful (HTTP 200/201) POSTs per second if your server can handle it.
I think we should have something supported natively:
I want to have something included but also don't want to start anything too grand. Any ideas? I would also like not to require a Java version upgrade, so anything that requires JDK 9+ should either be avoided or developed as a separate Maven package.