Closed: kmjung closed this pull request 7 years ago
I am strongly against this change. Folks that block Xenon threads will sooner or later cause issues no matter which executor you use. By keeping the behavior as is, we at least create natural push back against remote clients, preventing further HTTP requests from being processed. So the existing behavior is by design: it throttles, and it should be left as is.
You can add stats to detect blocking calls, but do not make this change.
@georgechrysanthakopoulos I have not been able to find a way to collect queue depth stats here. Netty exposes the queue associated with the executor service, but the executor tasks have no relation to the set of I/Os pending service -- the threads appear to run a single task-loop work item for as long as there are I/Os to process, so the number of pending I/Os is not available.
When you say "add stats to detect blocking calls", what are you imagining? Some kind of histogram for completion processing duration?
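One way to realize the histogram idea above is to wrap each completion handler and time how long it runs on the event loop thread, counting invocations that exceed a threshold. This is a minimal sketch, not Xenon API: `CompletionTimer`, `SLOW_COMPLETION_THRESHOLD_MICROS`, and the counter are all hypothetical names chosen for illustration.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical helper: wraps a completion handler so that time spent
// executing it (e.g. on the Netty event loop thread) is measured, and
// slow (possibly blocking) completions are counted.
public final class CompletionTimer {
    // Threshold above which a completion is considered suspiciously slow.
    private static final long SLOW_COMPLETION_THRESHOLD_MICROS = 1_000;

    private static final AtomicLong slowCompletionCount = new AtomicLong();

    public static <T> Consumer<T> timed(Consumer<T> completion) {
        return (T result) -> {
            long start = System.nanoTime();
            try {
                completion.accept(result);
            } finally {
                long micros = TimeUnit.NANOSECONDS.toMicros(System.nanoTime() - start);
                if (micros > SLOW_COMPLETION_THRESHOLD_MICROS) {
                    // A real implementation might feed a histogram or a
                    // per-handler stat here instead of a single counter.
                    slowCompletionCount.incrementAndGet();
                }
            }
        };
    }

    public static long getSlowCompletionCount() {
        return slowCompletionCount.get();
    }
}
```

A counter (or a full latency histogram) like this would surface blocking completions without changing where they run, which is the kind of "stats to detect blocking calls" suggested above.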
This change modifies the HTTP service client to perform HTTP response completions on the executor service associated with the ServiceHost (or the client's own executor service), rather than calling the completion directly from the Netty event loop thread. A JVM flag allows Xenon consumers to restore the original behavior.
Calling completions directly from the Netty thread is more performant -- throughput is higher and latency is lower -- but this design also allows callers to clog up the event loop by making blocking calls from within completion handlers, leading to hard-to-diagnose error conditions and performance degradation.
Change-Id: I5f6b2897cad4efd29ad031b39e78f33524a91650
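The dispatch decision the change describes can be sketched as a small helper that either runs a completion inline (the original behavior) or hands it to an executor (the new default). This is an illustrative sketch only: the class name and the `xenon.inlineCompletions` system property are hypothetical; the actual flag name is defined by the change itself.

```java
import java.util.concurrent.ExecutorService;

// Hypothetical sketch of the completion-dispatch behavior described
// in the commit message. Not actual Xenon code.
public final class CompletionDispatcher {
    // Hypothetical JVM flag restoring the original inline behavior,
    // e.g. -Dxenon.inlineCompletions=true.
    private static final boolean INLINE_COMPLETIONS =
            Boolean.getBoolean("xenon.inlineCompletions");

    private final ExecutorService executor;

    public CompletionDispatcher(ExecutorService executor) {
        this.executor = executor;
    }

    public void dispatch(Runnable completion) {
        if (INLINE_COMPLETIONS) {
            // Original behavior: run directly on the caller's thread
            // (the Netty event loop in the scenario above).
            completion.run();
        } else {
            // New default: hand off to the host's executor so a blocking
            // completion cannot stall the event loop.
            executor.execute(completion);
        }
    }
}
```

The trade-off is exactly the one debated above: the hand-off protects the event loop from blocking handlers, at the cost of an extra thread hop and the loss of the natural backpressure that inline execution provides.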