tyro / rabbit-amazon-bridge

A service that routes JSON messages back and forth between AWS and RabbitMQ
Apache License 2.0

Improve dead-letter handling #33

Closed maccac closed 5 years ago

maccac commented 5 years ago

It would make sense to add alternatives to simply dead-lettering every message when Amazon is unavailable.

For this I would propose:

maccac commented 5 years ago

A particular annoyance with how Spring has implemented the retry policies is that it does not allow control over the exception types that trigger retries.

If we wish to provide finer-grained control, we would need to add the following bean:

    // Assumes this method sits inside a @Configuration class with Spring Boot's
    // auto-configured RabbitProperties injected as `properties`.
    @Bean(name = arrayOf("rabbitListenerContainerFactory"))
    fun simpleRabbitListenerContainerFactory(
            configurer: SimpleRabbitListenerContainerFactoryConfigurer,
            connectionFactory: ConnectionFactory): SimpleRabbitListenerContainerFactory {
        val factory = SimpleRabbitListenerContainerFactory()
        configurer.configure(factory, connectionFactory)

        val retryConfig = properties.listener.simple.retry
        if (retryConfig.isEnabled) {
            // Retry only on SdkBaseException; never retry a message that has
            // already been rejected with AmqpRejectAndDontRequeueException.
            val builder = RetryInterceptorBuilder.stateless()
            val simpleRetryPolicy = SimpleRetryPolicy(retryConfig.maxAttempts, mapOf<Class<out Throwable>, Boolean>(
                    AmqpRejectAndDontRequeueException::class.java to false,
                    SdkBaseException::class.java to true
            ), true)
            builder.retryPolicy(simpleRetryPolicy)
                   .backOffOptions(retryConfig.initialInterval.toMillis(), retryConfig.multiplier, retryConfig.maxInterval.toMillis())
            // Once retries are exhausted, reject without requeueing so the
            // message is dead-lettered rather than replayed forever.
            val recoverer = RejectAndDontRequeueRecoverer()
            builder.recoverer(recoverer)
            factory.setAdviceChain(builder.build())
        }

        return factory
    }

Another approach would be to not bother with these retries and instead throw any SdkBaseException in a way that causes Spring to nack the message back to RabbitMQ. That would just cause the messages to keep being redelivered. The only annoying thing is that this would create a lot of logging spam, but maybe we can tackle that by adjusting the logging settings.
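
A minimal sketch of that approach, assuming Spring AMQP's default requeue behaviour (defaultRequeueRejected = true); the queue name, SqsForwarder and ForwardingListener below are illustrative only, not names from the bridge:

    import org.springframework.amqp.rabbit.annotation.RabbitListener
    import org.springframework.stereotype.Component

    // Hypothetical forwarder that pushes a message to SQS via the AWS SDK.
    interface SqsForwarder {
        fun forward(message: String)
    }

    @Component
    class ForwardingListener(private val sqsForwarder: SqsForwarder) {

        @RabbitListener(queues = arrayOf("from-rabbit-to-sqs"))
        fun onMessage(message: String) {
            // If forward() throws an SdkBaseException and we do not wrap it in an
            // AmqpRejectAndDontRequeueException, Spring AMQP nacks the delivery with
            // requeue=true (the default), so RabbitMQ keeps redelivering the message.
            sqsForwarder.forward(message)
        }
    }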

rajeshgpai commented 5 years ago

@maccac I'd prefer not to customise this in the app. The applications which use it can always extend it as per their needs. I'll create a pull request with the work we did on Friday to include retries, if that is fine with you.

rajeshgpai commented 5 years ago

@maccac Just noticed that we had completed a local commit on the machine we were pairing on last Friday. I'll push that once you confirm.

mc-tyro commented 5 years ago

From the code we had, does the following hold true?

Whether we support customisable delays / exponential back-offs while these errors are happening can be treated as a separate issue. I would like to ask the Spring team why the retry policy isn't configurable beyond the max attempts.

The only thing to be cautious about would be the volume of errors being logged, which could cause disk or I/O issues.
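
If customisable delays do become their own issue, a rough sketch of one option, assuming the same advice-chain approach as the bean above (the function name and interval values here are illustrative, not part of the bridge):

    import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder
    import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer
    import org.springframework.retry.backoff.ExponentialBackOffPolicy
    import org.springframework.retry.interceptor.RetryOperationsInterceptor

    // Sketch only: an explicit back-off policy instead of backOffOptions(), which
    // would make the delay behaviour easy to expose as application configuration.
    fun retryAdviceWithExponentialBackOff(): RetryOperationsInterceptor =
            RetryInterceptorBuilder.stateless()
                    .backOffPolicy(ExponentialBackOffPolicy().apply {
                        initialInterval = 1_000L   // first retry after one second
                        multiplier = 2.0           // double the delay on each attempt
                        maxInterval = 30_000L      // never wait longer than 30 seconds
                    })
                    .recoverer(RejectAndDontRequeueRecoverer())
                    .build()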

mc-tyro commented 5 years ago

We noticed that certain SQS errors are wrapped in a Spring messaging exception. We are re-opening this issue so we can handle that case properly.
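
For reference, a sketch of how the wrapped case could still be treated as retryable: the third SimpleRetryPolicy constructor argument (traverseCauses = true, already present in the bean above) makes the policy walk the cause chain, so an SdkBaseException nested inside a Spring messaging exception is still matched. The function name below is illustrative, not the bridge's actual fix:

    import com.amazonaws.SdkBaseException
    import org.springframework.amqp.AmqpRejectAndDontRequeueException
    import org.springframework.retry.policy.SimpleRetryPolicy

    // Sketch: with traverseCauses = true the policy inspects nested causes, so an
    // SdkBaseException wrapped by another exception still counts as retryable.
    fun retryPolicyForWrappedSqsErrors(maxAttempts: Int) = SimpleRetryPolicy(
            maxAttempts,
            mapOf<Class<out Throwable>, Boolean>(
                    AmqpRejectAndDontRequeueException::class.java to false,
                    SdkBaseException::class.java to true
            ),
            true   // traverseCauses: examine the whole cause chain, not just the top-level exception
    )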