Open smparekh opened 5 months ago
The SHA-256 digest we are having issues with: 59886dc179d52a43dfdf061c764e9856dafc67c41dd78e9d868872000d9e660a
Reverting to this SHA: f0c0d41aba562c5f4ce13f2b00ae50c381925063cfcc7ec7a9f2a4f622ee9535
doesn't throw the invalid pointer error.
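One way to stop the cluster from picking up the broken build is to pull by the known-good digest instead of the mutable tag; a minimal sketch, assuming Docker is the runtime used to fetch images (the digest is the working one from above):

```shell
# A tag can be re-pushed, but a digest is immutable, so pinning to the
# known-good digest keeps the broken rebuild off the node.
GOOD="fluent/fluentd-kubernetes-daemonset@sha256:f0c0d41aba562c5f4ce13f2b00ae50c381925063cfcc7ec7a9f2a4f622ee9535"
docker pull "$GOOD"
```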
I have the same issue with fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch. Reverting to this SHA: b7185b3483d2ca5c3e923e33641dd3814865321b34da05c46eda96576da905a0 makes the error go away as well. v1-debian-cloudwatch.log
Also seeing this in the fluent/fluentd-kubernetes-daemonset:v1.16.5-debian-forward-1.0 image.
Logging fails:
2024-04-03 20:27:34 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/node-problem-detector-kwwk8_kube-system_node-problem-detector-4e2796e4c3ca14953fda355aca52c0200a0f53b7b0596d7e94ec89169c782f8a.log
2024-04-03 20:27:34 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/unbound-exporter-llm48_unbound_unbound-exporter-bd636614623be73dc03069f9a0fefffb779c47d2c034e796d3364fb49fb2e6fe.log
2024-04-03 20:27:34 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/unbound-exporter-llm48_unbound_unbound-exporter-init-1b88c92fa871c07c66d558a84a656879a1b13dfa12c6b533b37ec9ae74fc555f.log
2024-04-03 20:27:34 +0000 [info]: #0 fluentd worker is now running worker=0
free(): invalid pointer
2024-04-03 20:27:37 +0000 [error]: Worker 0 exited unexpectedly with signal SIGABRT
2024-04-03 20:27:37 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
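To check whether the crashing pods resolved to one of the bad digests listed above, the resolved `imageID` of each pod can be printed; a sketch, where the `kube-system` namespace and the `k8s-app=fluentd-logging` label selector are assumptions and should be adjusted for your cluster:

```shell
# Print each fluentd pod name next to the image digest it actually
# resolved, so it can be matched against the digests in this report.
kubectl get pods -n kube-system -l k8s-app=fluentd-logging \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].imageID}{"\n"}{end}'
```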
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.
Describe the bug
Using the latest
v1-debian-forward-arm64
image results in the container throwing free(): invalid pointer
and constantly restarting, leading to node eviction.
To Reproduce
I have provided a redacted config to reproduce
Expected behavior
Worker should come up and stay up.
Your Environment
Your Configuration
Additional context
We have a daemonset in a cluster, running an image pulled about 22 days ago, where we are not seeing the invalid pointer issue.
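As a stopgap until the image is fixed, the daemonset can be pinned to the known-good digest from this thread; a sketch, where the DaemonSet name `fluentd`, its container name `fluentd`, and the `kube-system` namespace are assumptions:

```shell
# Point the DaemonSet container at the digest that does not crash,
# triggering a rolling restart of the daemonset pods.
kubectl set image daemonset/fluentd -n kube-system \
  fluentd=fluent/fluentd-kubernetes-daemonset@sha256:f0c0d41aba562c5f4ce13f2b00ae50c381925063cfcc7ec7a9f2a4f622ee9535
```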