Closed: vascoosx closed this issue 1 year ago.
Is this a fluentd core bug? Are the logs lost inside fluentd itself, or in a 3rd-party plugin?
We can't set up GCP or other cloud services. Could you reproduce the issue in a simpler environment, e.g. a single Linux server?
Thank you. I'll try reproducing it in a simpler environment. Meanwhile, could you tell me whether there are any specs on the maximum throughput of the http source? Wherever the issue stems from, it seems to be load-related.
Meanwhile, could you tell me whether there are any specs on the maximum throughput of the http source?
I'm not sure, because it depends on machine specs, the format, and more... The official article mentions one example: https://docs.fluentd.org/input/http#handle-large-data-with-batch-mode
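For reference, the batch mode that page describes amounts to sending many events in one request instead of one event per request, which is usually the biggest lever for in_http throughput. A minimal sketch of such a batched request, assuming a default in_http source listening on port 9880; the tag app.log and the payload are placeholders:

curl -X POST -H 'Content-Type: application/json' \
  -d '[{"foo":"bar"},{"abc":"def"}]' \
  http://localhost:9880/app.log

Each element of the JSON array becomes a separate event, tagged with the tag taken from the request path.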
We have a similar problem when using the splunk_hec plugin to forward messages to an external Splunk installation via the Splunk heavy forwarder.
We have noticed that when the problem manifests, we see this error in the fluentd log:
2019-08-20 14:05:44 +0000 [info]: Worker 0 finished unexpectedly with signal SIGKILL
If the worker is killed, I suspect all of the messages that were in the queue are lost. Is this a correct assumption? We're not currently configured to handle overflow conditions (by backing them to a file, for example). We lost three days' worth of messages that had yet to be funneled over to Splunk when this happened.
Looking for clarification to help determine whether it's fluentd or the plugin that is problematic.
Sorry for the delay.
@vguaglione
If the worker is killed, I suspect all of the messages that were in the queue are lost. Is this a correct assumption?
We can use the file buffer. Log loss due to a forcibly killed process cannot be completely prevented, but it can be minimized.
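For illustration, a minimal sketch of a file-buffered match section; the splunk_hec output type matches the plugin mentioned above, while the tag pattern, path, and tuning values are placeholder assumptions to adapt:

<match app.**>
  @type splunk_hec
  # ... splunk_hec connection settings (host, token, etc.) go here ...
  <buffer>
    @type file                     # persist chunks on disk instead of in memory
    path /var/log/fluentd/buffer   # placeholder; the directory must be writable by fluentd
    flush_interval 10s             # flush often to keep the on-disk backlog small
    retry_forever true             # keep retrying delivery rather than dropping chunks
  </buffer>
</match>

With a file buffer, chunks that were queued before a SIGKILL remain on disk and are picked up when the worker restarts; only data not yet written to a chunk is at risk.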
@vascoosx I will close this issue, as there has been no update for a while.
If you are still experiencing this problem and know anything about how to reproduce it, please reopen it.
Describe the bug
In my current setup, logs go to Papertrail via syslog+TLS and to a GCP instance via HTTPS, which then forwards them to Stackdriver. Some logs that are present in Papertrail cannot be found in Stackdriver.
To Reproduce
Logs are initially sent through Heroku's logdrain. The logs first go to an nginx server acting as a proxy, then to fluentd, which sends them to Stackdriver.
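For context, a minimal sketch of that proxy hop, assuming fluentd's in_http source on its default port 9880; the hostname and all values are placeholders, not the reporter's actual configuration:

server {
    listen 443 ssl;
    server_name logs.example.com;        # placeholder hostname for the Heroku logdrain target

    location / {
        # forward the drain's HTTPS POSTs to the local fluentd http input
        proxy_pass http://127.0.0.1:9880;
        proxy_set_header Host $host;
    }
}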
Expected behavior
Every log that appears in Papertrail should also appear in Stackdriver.
Your Configuration
client setting:
Your Error Log
(no errors were found in nginx)
Additional context
agent version: google-fluentd 1.4.2
OS: Ubuntu 18.04
The text below is a portion of the logs. Asterisks denote the logs that were missing.