jmcx opened 1 year ago
This is exactly what we are looking for right now with our SQS setup! I was bummed that there isn't an equivalent to https://knative.dev/docs/serving/autoscaling/concurrency/ in the Knative Eventing framework.
I found this rate limiter option for CloudEvents sources: https://docs.triggermesh.io/1.25/sources/cloudevents/#configuring-rate-limiter-optional - but wasn't sure whether it could also apply to the SQS source https://docs.triggermesh.io/1.25/sources/awssqs/
I also found some discussion where people were pairing a Kafka broker with an SQS source so that they could use Kafka parallelism configs (e.g. partitioning) to mimic concurrency settings - but that seems much more complex.
Is there any timeframe for when we could expect to see this feature implemented and available?
Use case
There are cases in which you need to control the maximum number of concurrent in-flight events being sent to a target.
For example, 1000 messages land on an SQS queue in one go. The SQS source consumes these messages as fast as it can. The goal is to deliver them to a Service, but the Service can process at most 10 messages in parallel.
Solution
One idea is to implement a concurrency control on Triggers, such that I can specify the maximum number of unacknowledged in-flight requests at any given time. Once the limit is reached, the Trigger waits for an in-flight event to either be acknowledged by the target or dropped before starting to deliver a new one. This would apply to both MemoryBroker and RedisBroker, with different fault tolerance guarantees for each.
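The wait-for-ack behavior described above can be sketched as a counting semaphore: each delivery acquires a slot before it starts, and the slot is released only when the target responds (ack) or the event is dropped. This is a hypothetical illustration, not TriggerMesh code; the function and variable names are my own.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// deliverAll delivers events to the target, never allowing more than
// limit unacknowledged in-flight deliveries at once. It returns the
// peak concurrency observed, so the cap can be verified.
// (Hypothetical sketch of the proposed Trigger behavior.)
func deliverAll(events []string, limit int, deliver func(string)) int64 {
	sem := make(chan struct{}, limit) // counting semaphore: one slot per in-flight event
	var wg sync.WaitGroup
	var inFlight, peak int64

	for _, ev := range events {
		sem <- struct{}{} // block here until an earlier event is acked or dropped
		wg.Add(1)
		go func(ev string) {
			defer wg.Done()
			cur := atomic.AddInt64(&inFlight, 1)
			// record the highest concurrency seen
			for {
				p := atomic.LoadInt64(&peak)
				if cur <= p || atomic.CompareAndSwapInt64(&peak, p, cur) {
					break
				}
			}
			deliver(ev) // returning stands in for the target's ack
			atomic.AddInt64(&inFlight, -1)
			<-sem // release the slot so the next event can be delivered
		}(ev)
	}
	wg.Wait()
	return atomic.LoadInt64(&peak)
}

func main() {
	events := make([]string, 1000)
	for i := range events {
		events[i] = fmt.Sprintf("event-%d", i)
	}
	peak := deliverAll(events, 10, func(string) {})
	fmt.Println("peak in-flight deliveries:", peak)
}
```

The key property is that the semaphore is acquired before dispatch and released only on ack/drop, so the source can keep consuming from SQS while the broker throttles fan-out to the target.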
The benefit of this solution is that, by living in the broker, it works regardless of which source connector you're using. However, it necessarily requires using a TriggerMesh broker.
An example of the Trigger configuration could be something like this, which includes the new maxConcurrency parameter that I've tentatively added to the delivery section, alongside retries and dead-letter sink:
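A sketch of what such a Trigger might look like follows. The field names and versions are illustrative, and maxConcurrency in particular is a tentative name for a parameter that does not exist yet:

```yaml
apiVersion: eventing.triggermesh.io/v1alpha1   # illustrative version
kind: Trigger
metadata:
  name: sqs-to-service
spec:
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: my-broker
  target:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
  delivery:
    maxConcurrency: 10        # tentative: max unacknowledged in-flight events
    retry: 3
    deadLetterSink:
      ref:
        apiVersion: eventing.knative.dev/v1
        kind: Broker
        name: dead-letter-broker
```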
An alternative solution would be to implement this on the source connectors themselves (or in both places), but that is more costly, as it requires updating every source component to reach a consistent feature set.