dwbutler / logstash-logger

Ruby logger that writes logstash events

Option to limit number of retries for Kafka #73

Closed scauglog closed 8 years ago

scauglog commented 8 years ago

Otherwise the logger will try endlessly to submit the log, and this blocks the app (Grape with the grape_logging gem).
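
For context, the change proposed here amounts to something like the following sketch. The option name `max_retry` and the helpers `producer`, `topic`, and `connect` are illustrative, not the actual diff; `send_messages` is Poseidon's producer API.

```ruby
# Illustrative sketch only, not the actual patch: cap reconnect attempts
# so a dead Kafka broker cannot block the app forever.
def write_one(message, max_retry)
  attempts = 0
  begin
    producer.send_messages([Poseidon::MessageToSend.new(topic, message)])
  rescue Poseidon::Errors::UnableToFetchMetadata
    attempts += 1
    connect  # hypothetical helper: rebuild the producer and try again
    retry if attempts < max_retry
    # Retry budget exhausted: drop the message instead of blocking.
  end
end
```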

dwbutler commented 8 years ago

Hi,

If I understand correctly, the problem you are trying to solve here is that when Kafka goes down, log messages pile up in the buffer and eventually block the application. This is a known problem affecting all devices. (See #67 and #68).

Your proposed solution here is to limit the number of retries, and once this limit is reached, throw away the log messages. I can see why this would fix your specific problem, because as a side effect the messages that were in the buffer are discarded.

The approach you're taking here has several problems. First, it only works for Kafka. Second, a max_retry option isn't really specific enough. It doesn't specify what happens when the maximum retry count is reached.

The direction I want to take LogStashLogger is to have configurable options for retention of log messages in the buffer. When the buffer is full, the configured behavior will be triggered - for example, discard the messages, or write them to an alternative fallback logger.
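
To make that direction concrete, configuration might eventually look something like this. This is a sketch under assumed option names; `buffer_max_items` and `drop_messages_on_full_buffer` are illustrative here, not a committed API.

```ruby
require 'logstash-logger'

# Sketch of configurable buffer retention; option names are assumptions,
# not the final API.
logger = LogStashLogger.new(
  type: :kafka,
  hosts: ['localhost:9092'],
  buffer_max_items: 1_000,            # bound the in-memory buffer
  drop_messages_on_full_buffer: true  # discard instead of blocking the app
)
```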

Please keep an eye on #68. Once that is resolved, it should address your problem.

scauglog commented 8 years ago

I'm not sure this is linked to #67. What I think happens with Kafka is that when Poseidon::Errors::UnableToFetchMetadata is raised, we try to reconnect, and then the error is raised again, because Poseidon raises it whenever Kafka is down. We stay in this loop forever, and that causes the lock. All the methods used in the process seem to be defined inside LogStashLogger::Device::Kafka, but maybe I'm missing something.
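
In other words, the failure mode is roughly this loop (a sketch of the behavior described above, not the gem's actual code; `producer`, `buffered_messages`, and `connect` are stand-ins):

```ruby
# With Kafka down, Poseidon raises UnableToFetchMetadata on every
# attempt, so this flush never returns and the app blocks.
begin
  producer.send_messages(buffered_messages)
rescue Poseidon::Errors::UnableToFetchMetadata
  connect  # reconnecting appears to succeed locally...
  retry    # ...but the next send raises the same error, forever
end
```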

dwbutler commented 8 years ago

This behavior should be fixed in 0.16.0.