Closed: MulesoftDevelop closed this issue 2 years ago
Hi Team,
we have tried to set different timeout parameters. This has improved the situation, but the issue is not solved. The screenshot shows the behavior as well as the settings used:
During the test we processed about 2k messages, which produce about 4k log entries per minute.
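For illustration, here is a minimal sketch of where such timeout and batch parameters typically live in a Mule application's log4j2.xml, assuming the SplunkHttp appender from splunk-library-javalogging. The attribute names and values below are assumptions for illustration only, not the actual settings from the screenshot; please verify them against the appender documentation for your library version.

```xml
<!-- Hypothetical log4j2.xml excerpt; endpoint, token, and values are illustrative only -->
<Appenders>
    <SplunkHttp name="splunk"
                url="https://splunk.example.com:8088"
                token="${sys:splunk.hec.token}"
                index="mule_app"
                source="mule"
                sourcetype="mule"
                batch_size_count="1000"
                batch_interval="500"
                connect_timeout="5000"
                termination_timeout="2000">
        <PatternLayout pattern="%m%n"/>
    </SplunkHttp>
</Appenders>
```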
BR, Sebastian
Hi @MulesoftDevelop,
Thank you for raising this issue.
For Appender settings, please check this link for reference.
The recommended production value for the batch_size_count property is mentioned in the link above.
Also, in terms of the timeout settings, we would recommend updating the logging library to version 1.11.5, which includes improvements related to timeouts and error callbacks.
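For reference, upgrading usually means bumping the splunk-library-javalogging dependency in the Mule application's pom.xml. The Maven coordinates below are assumed and should be verified against the library's own documentation.

```xml
<!-- Assumed Maven coordinates for the Splunk logging library; verify against its README -->
<dependency>
    <groupId>com.splunk.logging</groupId>
    <artifactId>splunk-library-javalogging</artifactId>
    <version>1.11.5</version>
</dependency>
```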
Please test with this updated version and let us know if you still face any issues.
Thank you.
Hi @MulesoftDevelop, based on your confirmation to the internal team, the issue has been resolved.
Hence, closing this issue.
Hi Team,
we are using the Splunk appender in MuleSoft with these settings:
Unfortunately we had some Splunk downtimes. During these downtimes the memory consumption increased until the applications crashed. We have also tried different settings for batch_size_count and batch_interval, but this didn't help.
Is there any way to configure the appender to drop the logs if it is not possible to deliver them to Splunk? The goal is to keep the application stable even if logging is not working.
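For context, one general log4j2 mechanism that drops events instead of letting them pile up is the Async appender with blocking disabled. This is not specific to the Splunk appender (which keeps its own internal retry buffering), so the snippet below is only a sketch of the idea and would need to be tested against a real Splunk outage.

```xml
<!-- Sketch: wrap the Splunk appender in a bounded, non-blocking Async appender.
     With blocking="false", events are discarded (or routed to errorRef) once the
     buffer is full, instead of accumulating while Splunk is unreachable. -->
<Async name="asyncSplunk" bufferSize="1024" blocking="false">
    <AppenderRef ref="splunk"/> <!-- name of your existing Splunk appender -->
</Async>
```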
Best Regards, Sebastian Liepe