Open pbuz opened 4 years ago
Thanks. We'll take a look.
Hi @manuelcueto, do you have any updates on this?
Some thoughts on it:
```scala
private def receiveWithQueue(queue: ScheduleQueue): Receive = {
  case Trigger(scheduleId, schedule) =>
    queue.offer((scheduleId, messageFrom(schedule))) onComplete {
      case Success(QueueOfferResult.Enqueued) =>
        log.debug(ScheduleQueueOfferResult(scheduleId, QueueOfferResult.Enqueued).show)
      case Success(res) =>
        log.warning(ScheduleQueueOfferResult(scheduleId, res).show)
      case Failure(t) =>
        log.error(t, s"Failed to enqueue $scheduleId")
        self ! DownstreamFailure(t)
    }
}
```
This is the conflicting piece of code: we offer to the queue and get a `Future` back, which is how Akka Streams handles backpressure. The future will not complete until the buffer can hold another element. Since we're not waiting for the future to complete here, if we call `offer` again while the buffer is full, the queue will fail, and we're currently not handling that gracefully.
A solution would be to `context.become` to another receive that waits for the offer to complete while stashing incoming requests, and, once it completes, unstashes them and goes back to the 'available' state.
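A minimal sketch of that idea, assuming a plain Akka actor mixing in the `Stash` trait. The names (`QueueingActor`, `OfferCompleted`, the queue element type) are hypothetical and not from the KMS codebase; the real result handling would keep the logging shown above:

```scala
import akka.actor.{Actor, ActorLogging, Stash}
import akka.stream.scaladsl.SourceQueueWithComplete
import scala.util.{Failure, Success}

// Hypothetical message types for illustration only.
final case class Trigger(scheduleId: String, payload: String)
private case object OfferCompleted

class QueueingActor(queue: SourceQueueWithComplete[(String, String)])
    extends Actor with Stash with ActorLogging {
  import context.dispatcher

  def receive: Receive = available

  // 'Available' state: offer one element, then become busy until the offer resolves.
  def available: Receive = {
    case Trigger(id, payload) =>
      queue.offer((id, payload)) onComplete {
        case Success(_) =>
          self ! OfferCompleted
        case Failure(t) =>
          log.error(t, s"Failed to enqueue $id")
          self ! OfferCompleted // or escalate, as with DownstreamFailure above
      }
      context become busy
  }

  // 'Busy' state: stash new triggers; the in-flight offer guarantees we never
  // call offer again while the buffer is full.
  def busy: Receive = {
    case OfferCompleted =>
      unstashAll()
      context become available
    case _: Trigger =>
      stash()
  }
}
```

The single in-flight offer means the actor, not the stream buffer, absorbs bursts, so the stash can grow under sustained load; that's the trade-off for never hitting the queue-full failure.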
@manuelcueto we played a bit with the value of `scheduler.publisher.queue-buffer-size` to see whether we can get this passing, but it is still failing at 500 messages/second.
Have you tried setting the buffer size to `Int.MaxValue`? I believe that's how it is configured in MAP @manuelcueto
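For reference, a sketch of what that override might look like in the application config, assuming the `scheduler.publisher.queue-buffer-size` path mentioned above maps directly to HOCON (the surrounding structure is illustrative; check the project's `reference.conf` for the actual layout):

```hocon
scheduler {
  publisher {
    # Capacity of the publisher's SourceQueue buffer.
    # 2147483647 is Int.MaxValue, i.e. effectively unbounded buffering.
    queue-buffer-size = 2147483647
  }
}
```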
@pbuz could you post your deployment configuration here where you are seeing the issues?
We are trying to use KMS to schedule messages from a topic on which we produce 500 messages/second. Unfortunately our test fails while using KMS version 0.22.0 of the Docker image: the pod starts restarting and we can see this error pouring into the logs:
We would like to mention that we were able to run our test successfully for 250 messages/second.