jmalloc / ax

A message-driven application toolkit for Go. [EXPERIMENTAL]
MIT License

Implement some kind of delay between retries. #31

Closed jmalloc closed 6 years ago

jmalloc commented 6 years ago

This probably needs to be solved at the transport layer.

For RMQ this could perhaps be implemented using 2 DLX setups with a "retry" queue? There is already a DLX configuration on the pending queue, but it only serves to route rejected messages to the error queue.
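For reference, the classic DLX retry pattern works by publishing rejected messages to a "retry" queue whose per-queue TTL and dead-letter exchange route them back for redelivery after the delay. A minimal sketch of the queue arguments involved (the exchange name `ax.pending` and the helper function are placeholders, not ax's actual topology):

```go
package main

import "fmt"

// retryQueueArgs builds the declare-time arguments for a hypothetical
// "retry" queue: messages sit in it for ttlMillis, then dead-letter
// back to the main exchange for redelivery. These two argument keys
// are standard RabbitMQ queue arguments.
func retryQueueArgs(ttlMillis int32) map[string]interface{} {
	return map[string]interface{}{
		"x-message-ttl":          ttlMillis,    // delay, in milliseconds
		"x-dead-letter-exchange": "ax.pending", // placeholder exchange name
	}
}

func main() {
	args := retryQueueArgs(10000)
	fmt.Println(args["x-message-ttl"], args["x-dead-letter-exchange"])
}
```

Note this gives one fixed delay per retry queue; per-message delays need multiple queues or a different mechanism, which is part of why this gets complicated.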

jmalloc commented 6 years ago

I should have mentioned that I would prefer that this wasn't managed by the transport. The ideal would be to expand bus.Acknowledger.Retry() to accept a time.Duration that indicates how long before it should be retried, but I'm not sure how possible this will be with each transport implementation. Perhaps it could be a hint that the transport should follow if possible.

That way, we can allow the endpoint.RetryPolicy to provide a duration.

danilvpetrov commented 6 years ago

If it is still decided to implement this feature at the transport layer, another option might be to use the Delayed Message Plugin in RMQ, though using something outside the regular RMQ package sounds inconsistent.

But going back to your last comment, can this feature be represented as an interface that a transport may or may not implement?

And the last question that I have: will messages be stored in memory while they are being retried, or persisted in the way that, say, the outbox does it?

jmalloc commented 6 years ago

> If it is still decided to implement this feature at the transport layer, another option might be to use the Delayed Message Plugin in RMQ ...

I have looked at this in the past. It does have some limitations that make me think we might be better off not using this, such as the lack of support for mandatory routing, which is used when sending a command.

I am also not super keen to use non-default plugins, as you say.

> But going back to your last comment, can this feature be represented as an interface that a transport may or may not implement?

Yes, we could definitely implement it that way: allow the transport to implement an additional optional interface. If the transport supports retry delays, use that directly; otherwise fall back to some kind of DB-based approach.
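The optional-interface pattern is idiomatic Go: a type assertion against the capability interface, with a fallback when the transport doesn't implement it. A minimal sketch, with hypothetical names throughout:

```go
package main

import (
	"fmt"
	"time"
)

// Transport is the base abstraction every transport implements.
type Transport interface {
	Deliver(msg string) error
}

// DelayedRetrier is the optional capability: transports that can
// natively schedule a delayed redelivery implement it as well.
type DelayedRetrier interface {
	RetryAfter(msg string, delay time.Duration) error
}

type basicTransport struct{}

func (basicTransport) Deliver(msg string) error { return nil }

type delayCapableTransport struct{ basicTransport }

func (delayCapableTransport) RetryAfter(msg string, d time.Duration) error {
	fmt.Printf("transport schedules %q for redelivery after %v\n", msg, d)
	return nil
}

// retry uses the native delay when available; otherwise it falls back
// to some other mechanism (here just a placeholder print).
func retry(t Transport, msg string, d time.Duration) {
	if dr, ok := t.(DelayedRetrier); ok {
		dr.RetryAfter(msg, d)
		return
	}
	fmt.Printf("fallback: persist %q and replay after %v\n", msg, d)
}

func main() {
	retry(delayCapableTransport{}, "cmd-1", time.Second)
	retry(basicTransport{}, "cmd-2", time.Second)
}
```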

> And the last question that I have: will messages be stored in memory while they are being retried, or persisted in the way that, say, the outbox does it?

It could be done either way, depending on the transport and how we build the abstraction. I kind of see 3 options:

  1. Use the DLX in "some way that I don't understand yet". NServiceBus uses some rather ingenious but complex system that I'm not 100% convinced I'd like to copy - but it does prove that it can be done with DLX.

  2. Persist messages in an "inbox" of sorts and make it each endpoint's responsibility to replay their own messages in the future.

  3. This is probably a terrible idea, but we could delay calling the AMQP msg.Reject() until the message is actually due to be retried, holding the unacknowledged message in memory. During this delay period we would bump up the QOS pre-fetch count by one, allowing additional messages to be processed. I'm not sure how well this would cope in the face of some runtime error (such as an unreachable database) when nearly every message would fail, but maybe this is as simple as setting some (fairly large) upper limit to the number of messages that can be held in this delayed state at one time - essentially a max pre-fetch. There is some overhead on both the client and the RMQ server in this case.

jmalloc commented 6 years ago

I'm going to try option 3 above in the 31-terrible-idea branch.

jmalloc commented 6 years ago

So, this turned out okay in and of itself, but without some kind of DLX component we can't get the message redelivery count, which the retry policy needs in order to come up with meaningful delays.

jmalloc commented 6 years ago

I'm going to call this "done" as of #82, we can address any potential issues with the "terrible idea" separately.