akka / akka

A platform to build and run apps that are elastic, agile, and resilient. SDK, libraries, and hosted environments.
https://doc.akka.io

Feature request: Adaptive rate limiter #26010

Open He-Pin opened 6 years ago

He-Pin commented 6 years ago

Maybe this should be part of Alpakka? What I have in mind is something like Netflix's concurrency-limits (https://medium.com/@NetflixTechBlog/performance-under-load-3e6fa9a60581), integrated with Akka Streams, roughly along the lines of the sketch below.
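
As a rough illustration, a stream stage could acquire a permit from an adaptive limiter before each call and feed the outcome back into it. This sketch assumes the Netflix concurrency-limits `Limiter`/`Listener` API; the `AdaptiveLimitFlow` adapter and its error handling are hypothetical, not an existing Akka API:

```scala
import scala.concurrent.{ ExecutionContext, Future }
import scala.util.{ Failure, Success }

import akka.NotUsed
import akka.stream.scaladsl.Flow
import com.netflix.concurrency.limits.Limiter

// Hypothetical adapter: guards an asynchronous call with an adaptive
// concurrency limiter and reports each outcome back to it, so the limit
// can grow or shrink based on observed latency and failures.
object AdaptiveLimitFlow {

  final class LimitExceededException extends RuntimeException("shed by adaptive limiter")

  def apply[In, Out](limiter: Limiter[Unit], parallelism: Int)(call: In => Future[Out])(
      implicit ec: ExecutionContext): Flow[In, Out, NotUsed] =
    Flow[In].mapAsyncUnordered(parallelism) { in =>
      val permit = limiter.acquire(()) // Optional[Limiter.Listener]
      if (!permit.isPresent)
        Future.failed(new LimitExceededException) // needs supervision/retry downstream
      else {
        val listener = permit.get()
        val result = call(in)
        result.onComplete {
          case Success(_) => listener.onSuccess()
          case Failure(_) => listener.onDropped() // simplification: treat any failure as overload
        }
        result
      }
    }
}
```

Backed by something like the library's Gradient2Limit, such a stage should, in principle, back-pressure the stream toward whatever concurrency the downstream can actually sustain.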

chbatey commented 6 years ago

Definitely a great idea, we've discussed the need for it multiple times, most recently when doing akka-grpc. Congestion-control-like mechanisms for external calls that back-pressure into Akka Streams would be :+1:

It would be a significant piece of work, which is why it hasn't made it to the top of the priority list yet.

He-Pin commented 6 years ago

@chbatey Yes, it should be adaptive: both latency and error rate are essential to meeting the business objective. Think of a consumer that continually consumes messages from a message queue and should do its best not to drop any messages or fail processing. A hard limit (like the fixed `throttle` sketched below) is not suitable here, because we also want the system to achieve a good TPS.
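
To make the contrast concrete, this is roughly what the non-adaptive, hard-limited variant looks like with existing Akka Streams operators (`processMessage` is a placeholder for the real work):

```scala
import scala.concurrent.Future
import scala.concurrent.duration._

import akka.actor.ActorSystem
import akka.stream.scaladsl.{ Sink, Source }

object HardLimitExample extends App {
  implicit val system: ActorSystem = ActorSystem("hard-limit")
  import system.dispatcher

  // Placeholder for the real message processing.
  def processMessage(i: Int): Future[Int] = Future(i)

  // A hard limit: at most 100 messages per second, no matter how the
  // downstream is actually doing. If it could sustain more, throughput is
  // wasted; if it degrades, 100/s may still be too much and processing fails.
  Source(1 to 10000)
    .throttle(100, 1.second)                    // fixed rate, not adaptive
    .mapAsync(parallelism = 16)(processMessage)
    .runWith(Sink.ignore)
}
```

An adaptive limiter would instead move that rate or parallelism up and down based on observed latency and failures.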

He-Pin commented 5 years ago

refs: RateLimiter patterns: https://github.com/akka/akka/issues/24879

andreas-schroeder commented 4 years ago

Hey @hepin1989, I've built an adapter for Netflix's adaptive concurrency limits for Akka HTTP here: https://github.com/andreas-schroeder/akka-http-concurrency-limits. Maybe that would be something you could use, or draw inspiration from? (Sorry for hijacking this old issue, in case I did...)

jrudolph commented 4 years ago

Thanks for sharing, @andreas-schroeder. I wonder how well that works in practice? In particular, it seems that in a latency signal derived from request handling time it is much harder to separate queuing latency from essential latency than it is for an RTT on a network path. Between two IP endpoints there is "only packet transport infrastructure", while between request and response there might be all kinds of processing behavior that can depend on all kinds of variables non-linearly.

How can you make accurate estimates about the cost of handling requests? It seems much too easy to get the cost of a request wrong by orders of magnitude. Will those algorithms still work reliably enough?

andreas-schroeder commented 4 years ago

Hi @jrudolph, good question. I can't answer this with full confidence, as I myself have only begun to experiment with the Netflix-provided adaptive rate limiting algorithms. From my (limited) experience, I found them to err on the conservative (i.e. permissive) side rather than throttling aggressively. Still, it might just be that my tests were off.

In terms of non-linearity, I had my concerns as well: if requests are very heterogeneous in terms of processing cost, would they throw off the algorithms? That is why I introduced the possibility of assigning weights to requests, so that the user can provide the cost estimate, e.g. based on the number of elements requested in a batch request (see the sketch below). I plan to do a couple more tests on this and will keep you updated once I have more data on whether this feature is actually necessary.
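
For what it's worth, the weighting I have in mind is nothing more sophisticated than a user-supplied cost function. The types below are made up for illustration and are not the actual akka-http-concurrency-limits API:

```scala
// Hypothetical request type and cost estimate: a batch asking for many
// elements is weighted like several small requests, so it consumes a
// correspondingly larger share of the adaptive concurrency budget.
final case class BatchRequest(ids: Seq[String])

def requestWeight(req: BatchRequest): Int =
  math.max(1, req.ids.size)
```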