helidon-io / helidon

Java libraries for writing microservices
https://helidon.io
Apache License 2.0

Provide adaptive concurrency limits #8897

Closed · arouel closed this 3 weeks ago

arouel commented 4 months ago

Why

There may be several reasons why adaptive concurrency limiting is preferred over using a fixed limit:

  1. Dynamic System Conditions: In a distributed system, conditions such as load, resource availability, and topology can change frequently due to factors like auto-scaling, partial outages, code deployments, or fluctuations in traffic patterns. A fixed concurrency limit cannot adapt to these dynamic conditions, leading to either under-utilization of resources or overwhelmed services.

  2. Latency Sensitivity: Different services or use cases may have varying sensitivity to latency. A fixed concurrency limit cannot account for these differences, potentially leading to either excessive queuing and high latency or under-utilization of resources. An adaptive approach can adjust the limit based on observed latencies, maintaining desired performance characteristics.

  3. Simplicity and Autonomy: Manually determining and configuring fixed concurrency limits for every service or instance can be a complex and error-prone process, especially in large-scale distributed systems. An adaptive approach can autonomously and continuously adjust the limit without manual intervention, simplifying operations and reducing the risk of misconfiguration.

  4. Resilience and Self-Healing: By automatically adjusting the concurrency limit based on observed conditions, an adaptive approach promotes resilience and self-healing capabilities. It allows services to shed excessive load during periods of high demand or resource constraints, preventing cascading failures and promoting graceful degradation.

While a fixed concurrency limit may be easier to reason about and configure initially, it lacks the flexibility and adaptability required in modern, dynamic distributed systems. An adaptive approach provides the ability to continuously optimize performance, resource utilization, and resilience in the face of changing conditions, ultimately leading to a more robust and efficient system.

Suggestion

Ideally, a user would be able to describe a limiting algorithm in the [ListenerConfig](https://helidon.io/docs/v4/apidocs/io.helidon.webserver/io/helidon/webserver/ListenerConfig.html#maxConcurrentRequests()) that fits their needs, instead of a fixed number for maxConcurrentRequests. The Limit and Limiter interfaces from Netflix's concurrency-limits library are a good starting point, and a first iteration should ship with a standard set of implementations.
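For reference, the two abstractions are small. Below is a trimmed-down sketch of them, reproduced from Netflix's library (com.netflix.concurrency.limits) for illustration; the comments are mine:

```java
import java.util.Optional;
import java.util.function.Consumer;

// A Limit is the algorithm: it consumes samples and computes the current limit.
interface Limit {
    int getLimit(); // current estimated concurrency limit

    void notifyOnChange(Consumer<Integer> consumer); // observe limit changes

    // One sample per request: measured RTT, concurrent requests in flight,
    // and whether the request was dropped (timed out / load-shed).
    void onSample(long startTime, long rtt, int inflight, boolean didDrop);
}

// A Limiter enforces a Limit: acquire() hands out a token (Listener) while
// capacity is available, or an empty Optional when the request must be rejected.
interface Limiter<ContextT> {
    Optional<Listener> acquire(ContextT context);

    interface Listener {
        void onSuccess(); // completed normally; feeds the limit estimate
        void onIgnore();  // outcome should not influence the limit
        void onDropped(); // dropped or timed out; signals congestion
    }
}
```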

Instead of the ServerListener passing a Semaphore for requests to the ConnectionHandler, we would pass a Limiter implementation that holds the configured Limit algorithm. The Limiter would be used instead of the Semaphore to acquire a token per request; if no token can be acquired, the limit is exceeded and the request can be rejected.
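A minimal sketch of what that swap could look like inside a connection handler, assuming the Netflix Limiter API described above; Request, process, and reject are hypothetical placeholders for the actual webserver internals:

```java
import java.util.Optional;

import com.netflix.concurrency.limits.Limiter;
import com.netflix.concurrency.limits.limit.VegasLimit;
import com.netflix.concurrency.limits.limiter.SimpleLimiter;

class ConnectionHandlerSketch {
    // Limiter wrapping an adaptive Limit, replacing the fixed Semaphore.
    private final Limiter<Void> limiter = SimpleLimiter.newBuilder()
            .limit(VegasLimit.newDefault())
            .build();

    void handle(Request request) {
        Optional<Limiter.Listener> token = limiter.acquire(null);
        if (token.isEmpty()) {
            reject(request); // limit exceeded, e.g. respond with 503
            return;
        }
        try {
            process(request);
            token.get().onSuccess(); // the sample feeds the adaptive algorithm
        } catch (Exception e) {
            token.get().onIgnore();  // don't let application errors skew the limit
            throw e;
        }
    }

    // Placeholders standing in for the real webserver internals.
    record Request() {}
    void process(Request request) {}
    void reject(Request request) {}
}
```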

While implementing a Proof of Concept (PoC), I asked myself where we want to place the limiting API. I think we need a new submodule, concurrency-limits, which holds the Limit and Limiter interfaces and a standard set of implementations. The webserver module would then depend on concurrency-limits.

Another question is how we want to make the various limiting algorithms configurable. Today we have just the single property maxConcurrentRequests, but in the future we want to choose from a set of different implementations, e.g. no limit, fixed limit, AIMD limit, Vegas limit, etc.
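For illustration, this is roughly how those candidates are constructed in Netflix's library today; a Helidon config binding would ultimately map a config key to one of these (the builder values below are arbitrary examples):

```java
import com.netflix.concurrency.limits.Limit;
import com.netflix.concurrency.limits.limit.AIMDLimit;
import com.netflix.concurrency.limits.limit.FixedLimit;
import com.netflix.concurrency.limits.limit.VegasLimit;

class LimitChoices {
    // Fixed limit: equivalent to today's maxConcurrentRequests behavior.
    static Limit fixed() {
        return FixedLimit.of(100);
    }

    // AIMD: additive increase, multiplicative decrease on drops/timeouts.
    static Limit aimd() {
        return AIMDLimit.newBuilder()
                .initialLimit(20)
                .backoffRatio(0.9)
                .build();
    }

    // Vegas: latency-based, adapts around a measured RTT baseline.
    static Limit vegas() {
        return VegasLimit.newDefault();
    }
}
```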

When testing the PoC, I noticed that when the access log feature is activated, rejected requests are not logged in the access log file. Is this behavior intentional or is this a bug?

Additionally, extending the metrics (looking at KeyPerformanceIndicatorMetricsImpls) would be helpful to be able to observe how a service is doing. I'm thinking here about the following request-limiting metrics:

romain-grecourt commented 4 months ago

@spericas @tomas-langer @danielkec FYI.

tomas-langer commented 4 months ago

Hello, this sounds like a great idea. I will provide a few answers to the questions you posted:

Some other thoughts:

LateshDulani commented 2 months ago

+1 for this change. After the Helidon 4 upgrade we are facing a similar issue under load. If we keep max concurrent requests unlimited, then under high sustained load the server fails to process requests: it tries to take on more requests than it can handle, requests cannot get DB connections, and eventually too many requests fail. To mitigate this we limited the concurrent request limit to X-5, because we have at most X DB connections in the pool. This resolved the issue with sustained load, but it introduced another problem: if there is a sudden burst of requests, we can process X requests and the rest fail with 503. Ideally we would want to limit concurrent request processing based on our resource limitations (DB connections, memory, etc.) but not have additional requests fail (at least not right away) in case of a burst.

vasanth-bhat commented 2 months ago

For the fixed limit scenario (max-concurrent-requests), can there be an option to also enable queueing, with a configurable queue size? The default behavior can stay the same; on a need basis, services can enable queueing with limits on queue size whenever a fixed limit is used. That would avoid requests failing with 503 when there is an occasional burst, and it keeps the larger behavior compatible with H3 and H2, where requests got queued while waiting for threads to become available in the server thread pool.

While services can do this with the Bulkhead API, it would be good to have some support in Helidon itself, which may work for many services.

Created https://github.com/helidon-io/helidon/issues/9229 for providing an option to enable queueing when "max-concurrent-requests" is configured. This can be a near-term solution to avoid requests failing during a surge.

lettrung99 commented 2 months ago

Plus 1. Can we get some support for a short-term solution on #9229?

barchetta commented 2 months ago

Just dropping this here for reference:

The Fault Tolerance Bulkhead feature (SE, MP) provides a mechanism for (non-adaptive) rate-limiting access to specific tasks. You control both parallelism and wait-queue length.

See the Helidon SE Rate Limiting example for examples of using a Bulkhead as well as a Java Semaphore for doing rate limiting.
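For completeness, a minimal sketch of the Bulkhead approach, assuming the Helidon 4 SE io.helidon.faulttolerance.Bulkhead builder (limit controls parallelism, queueLength the wait queue); the values and the callDatabase helper are made up:

```java
import io.helidon.faulttolerance.Bulkhead;

public class BulkheadExample {
    public static void main(String[] args) {
        // Allow 5 concurrent executions and queue up to 20 more
        // instead of failing them immediately.
        Bulkhead bulkhead = Bulkhead.builder()
                .limit(5)
                .queueLength(20)
                .build();

        // Callers beyond limit + queueLength get a BulkheadException,
        // which a handler can map to a 503 response.
        String result = bulkhead.invoke(() -> callDatabase());
        System.out.println(result);
    }

    static String callDatabase() {
        return "row"; // placeholder for real work, e.g. a JDBC call
    }
}
```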

lettrung99 commented 1 month ago

Hello,

I am also interested in the status of this ticket. Could you please provide an ETA for the implementation of this fix?

Thank you for your assistance.

arouel commented 1 month ago

Just FYI, I made a proof of concept of adaptive concurrency limits for Helidon 4 (see https://github.com/arouel/helidon-contrib). Maybe this is helpful to you. Any feedback is welcome.

tomas-langer commented 1 month ago

I have created a PR based on @arouel's proposal, refactored a bit to use an approach aligned with Helidon Fault Tolerance. See #9295 for details on both how it would be configured and how it is implemented. Please provide feedback!