vaibhavgupta3007 opened this issue 2 years ago
[FILTER]
    Name                kubernetes
    Match               kube.*
    Merge_Log           On
    Buffer_Size         0
    Kube_Meta_Cache_TTL 300s
    K8S-Logging.Parser  On

flb_log_cw: "true"

output.conf: |
  [OUTPUT]
      Name              cloudwatch_logs
      Match             *
      region            us-west-2
      log_group_name    eks/fluent-bit-cloudwatch
      log_stream_prefix from-fluent-bit-
      auto_create_group true
      log_key           log
Hmm. Actually, the config you show can't be what was actually applied, @vaibhavgupta3007, because of the log output:
{ "log": "time="2022-02-15T23:52:53Z" level=info msg="[cloudwatch 0] plugin parameter auto_create_group = 'true'"" } { "log": "[2022/02/15 23:53:18] [ info] [output:cloudwatch_logs:cloudwatch_logs.0] Created log stream from-fluent-bit-kube.var.log.containers.service-quoting-apis-7969d86747-r5hcs_dev6_service-quoting-apis-4ad92cd417d9f123057ecf3d9977c2cf0ba6c5eb7e365eaf29655f622db724bc.log" }
Both the cloudwatch and cloudwatch_logs plugins were enabled. So I think this is not actually the config applied to this pod.
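For illustration, a configuration that would produce startup lines from both plugins, as seen in the log output above, would look roughly like the sketch below; the log group name is a placeholder, not a value taken from this issue:

# Older Golang plugin; logs as "[cloudwatch 0] ..."
[OUTPUT]
    Name              cloudwatch
    Match             *
    region            us-west-2
    # placeholder group name
    log_group_name    example-group
    log_stream_prefix from-fluent-bit-
    auto_create_group true

# Newer C plugin; logs as "[output:cloudwatch_logs:cloudwatch_logs.0] ..."
[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-west-2
    # placeholder group name
    log_group_name    example-group
    log_stream_prefix from-fluent-bit-
    auto_create_group true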
@PettitWesley: can you please elaborate on what you mean by the config not being applied to the pod? After updating the ConfigMap, I restarted my pod, so it should pick up the latest ConfigMap configuration and use the kubernetes filter.
@vaibhavgupta3007 TBH I'm not sure how this happened, but given that I see both cloudwatch plugins enabled and configured... the ConfigMap you shared must not be the one that actually ran for this run. I'm not sure I can help more on that; you may need to contact AWS support.
As far as the lack of k8s metadata goes, AFAIK this is not a bug in Fluent Bit but a bug in EKS Fargate that the team is aware of. Please track that container roadmap issue. I apologize that I can't be more helpful on this at this time.
Is this still a confirmed bug, and if so, is there no workaround for it?
I am using the following config, taken basically line-by-line from the docs (https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html) and am not seeing any k8s metadata being added.
(additionally the log_retention_days config seems to be ignored, but that's a sidetrack)
kind: ConfigMap
apiVersion: v1
data:
  filters.conf: |
    [FILTER]
        Name                parser
        Match               *
        Key_name            log
        Parser              crio
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s
  flb_log_cw: "true"
  output.conf: |
    [OUTPUT]
        Name               cloudwatch_logs
        Match              *
        region             eu-north-1
        log_group_name     my-logs
        log_stream_prefix  from-fluent-bit-
        log_retention_days 60
        auto_create_group  true
        log_key            log
  parsers.conf: |
    [PARSER]
        Name        crio
        Format      Regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
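As a side note on the crio parser above, here is roughly how it splits a CRI-O style line held in the log key; the sample values are made up for illustration:

Incoming record (hypothetical):
    { "log": "2022-11-01T10:00:00.123456789Z stdout F hello from my app" }
After the parser filter (Key_name log, Parser crio), roughly:
    { "stream": "stdout", "logtag": "F", "log": "hello from my app" }
with the record timestamp taken from the first field via Time_Key and Time_Format.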
@kncesarini The bug has been fixed. To enrich logs with k8s metadata, log_key must be removed.
@kncesarini The log_key explanation is here: https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/cloudwatchlogs
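To illustrate why removing log_key matters (the sample values below are hypothetical, not from this issue): with log_key log, only the raw string stored under the log key is sent to CloudWatch, so the event body is just:

    2022/10/31 05:22:51 [notice] 1#1: example message

Without log_key, the whole enriched record is sent as JSON, including the kubernetes map added by the filter:

    {
      "log": "2022/10/31 05:22:51 [notice] 1#1: example message",
      "stream": "stderr",
      "kubernetes": { "namespace_name": "example-ns", "pod_name": "example-pod-abc123" }
    }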
@rsumukha @PettitWesley Awesome, thanks for the quick support; it works great now. I'm sure it would be super useful to others working with this to have a complete example in the docs I linked above. For reference, this is my current config, which works great (log_key removed from the output, and Keep_Log Off added to the filter):
apiVersion: v1
data:
  filters.conf: |
    [FILTER]
        Name                parser
        Match               *
        Key_name            log
        Parser              crio
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Keep_Log            Off
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s
  flb_log_cw: 'true'
  output.conf: |
    [OUTPUT]
        Name               cloudwatch_logs
        Match              *
        region             eu-north-1
        log_group_name     my-logs
        log_stream_prefix  from-fluent-bit-
        log_retention_days 60
        auto_create_group  true
  parsers.conf: |
    [PARSER]
        Name        crio
        Format      Regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
kind: ConfigMap
We have 100% single-line JSON logs.
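One note for anyone reproducing this on EKS Fargate: per the logging docs linked above, the ConfigMap is expected to carry the metadata below (not shown in the configs pasted in this thread), and changes only take effect for pods started after the update:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-logging
      namespace: aws-observability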
@kncesarini This ConfigMap works for me, but the Kubernetes info is missing; my log looks like this:
{
"log": "2022-10-31T05:22:51.368004337Z stderr F 2022/10/31 05:22:51 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down"
}
but I need the log to look like this:
"log": "I1031 07:56:32.872420 1 static_autoscaler.go:502] Scale down status: unneededOnly=false lastScaleUpTime=2022-10-10 17:42:14.50727249 +0000 UTC m=+2.131787240 lastScaleDownDeleteTime=2022-10-10 17:42:14.50727255 +0000 UTC m=+2.131787310 lastScaleDownFailTime=2022-10-10 17:42:14.50727263 +0000 UTC m=+2.131787380 scaleDownForbidden=false isDeleteInProgress=false scaleDownInCooldown=false\n",
"stream": "stderr",
"kubernetes": {
"pod_name": "blueprints-addon-cluster-autoscaler-aws-cluster-autoscaler8t8xn",
"namespace_name": "kube-system",
"pod_id": "8a1909ae-9cc5-409e-99b7-9c0995ef3b98",
"host": "ip-10-2-10-107.us-west-2.compute.internal",
"container_name": "aws-cluster-autoscaler",
"docker_id": "a4f0283fa8f7c7ba670cba498c4b5b8cf3a9e497cba6171a111444aa25b08749",
"container_hash": "k8s.gcr.io/autoscaling/cluster-autoscaler@sha256:9494f34a5dcf7202bc08a33a617062cd29b4b57a6914a89bdde2c6a219b0b942",
"container_image": "k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.1"
}
}
Is there any example of getting the k8s namespace into the Fluent Bit log? Or does EKS Fargate Fluent Bit support this metadata?
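For reference, in the working config shared earlier in this thread, the piece that adds the kubernetes map (including namespace_name) is the kubernetes filter, combined with not setting log_key on the output:

    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Keep_Log            Off
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s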
### Fluent Bit Version Info
Please refer to the log for the version.

### Cluster Details
EKS Fargate

### Application Details

### Steps to reproduce issue

### Related Issues