Open kncesarini opened 1 year ago
I'm facing this issue in my cluster as well. I'm using Kubernetes v1.21. Changing the `log_group_name` works, but as soon as I add `log_group_template` or `log_stream_template` to my config, I stop getting logs.
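For anyone comparing configs: the options in question come from the Fluent Bit `cloudwatch_logs` output docs. A minimal sketch of how they are meant to be combined (region, group/stream names, and record accessor paths below are placeholder assumptions, not taken from this issue):

```ini
[OUTPUT]
    # C plugin (cloudwatch_logs), not the Go "cloudwatch" plugin
    Name                cloudwatch_logs
    Match               kube.*
    region              us-east-1
    # Fallbacks, used when a template cannot be resolved for a record
    log_group_name      fallback-group
    log_stream_name     fallback-stream
    # Templates resolved per record via record accessor syntax
    log_group_template  /eks/$kubernetes['namespace_name']
    log_stream_template $kubernetes['pod_name'].$kubernetes['container_name']
```

If the Fluent Bit version running the config predates support for these keys, behavior like the one described above (logs silently stopping or falling back) is plausible.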
I need this feature too because the default generated log stream name is far too long. If a pod has several containers, it is very hard to know which log stream belongs to which container without opening the stream details.
I want the C plugin (`cloudwatch_logs`) instead of the Go one (`cloudwatch`) to achieve high throughput. However, it seems to me that EKS platform eks.2 with Kubernetes 1.24 still uses an old version of the C plugin (version 1.5) instead of the current version, which supports `log_stream_template`:
https://github.com/fluent/fluent-bit/blob/df73b2200969006a36e116279312e1142612801d/plugins/out_cloudwatch_logs/cloudwatch_logs.c#L130
Is there any way for us to specify the plugin version via the `aws-logging` ConfigMap?
Any update on this issue?
Using `log_group_template` doesn't seem to have any effect for me; logs just default to the `log_group_name`. I'm using Kubernetes 1.24. It is very hard to track all logs in a single group when running lots of pods.
Any updates here? It seems like the EKS on Fargate built-in log router runs a Fluent Bit version below 1.4.0, because it doesn't support templating in `log_stream_name`, e.g. `fargate-$(tag)`. It's a shame, really.
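(For context, `$(tag)` in `log_stream_name` is the Fluent Bit 1.4.0+ templating syntax the comment refers to; a sketch, with region and group name assumed:)

```ini
[OUTPUT]
    Name             cloudwatch_logs
    Match            *
    region           eu-west-1
    log_group_name   fluent-bit-logs
    # $(tag) is expanded from the record's Fluent Bit tag (1.4.0+)
    log_stream_name  fargate-$(tag)
```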
Tell us about your request This is my current (and working) EKS fargate logging CM:
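(The ConfigMap itself did not survive in this copy of the issue; below is a minimal sketch of the expected shape for EKS Fargate logging, i.e. an `aws-logging` ConfigMap in the `aws-observability` namespace, with region and log group/stream names assumed:)

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region eu-west-1
        # Placeholder names; not the author's actual values
        log_group_name my-eks-fargate-logs
        log_stream_prefix from-fluent-bit-
```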
Following the docs at https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch, I tried adding the following configuration to the output in order to separate the logs by namespace:
However, when I do, no logs appear in CloudWatch anymore (adding either of these configs on its own causes the issue). The "default" log group no longer gets any logs, and no new log group is created. Additionally, I don't get any logs from the flb agent. The describe pod command, however, says logging is enabled after restarting a deployment:
Which service(s) is this request for? EKS Fargate Logging
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? See above
Are you currently working around this issue? Currently forced to send all logs in the cluster to the same log group, and with less readable log stream names.
Additional context Kubernetes v1.23
Attachments n/a