elastic / beats

:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash
https://www.elastic.co/products/beats

[filebeat] add_kubernetes_metadata processor stopped working since v7.16 (under a specific condition) #31171

Open · gpothier opened 2 years ago

gpothier commented 2 years ago

Since v7.16, the kubernetes metadata fields are no longer added if a field named kubernetes.cluster.name (or, I suppose, any field that starts with kubernetes) is statically added to all events via fields and fields_under_root in the config. This is presumably caused by this PR: https://github.com/elastic/beats/pull/27689.

I understand the rationale of the PR, but I think there is room for improvement.

Here is the filebeat.yml config, just in case:

      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        protocol: https

      fields_under_root: true
      fields:
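        # Statically adding a kubernetes.* field here (combined with fields_under_root: true)
        # is what triggers the issue: since v7.16, add_kubernetes_metadata skips events that
        # already carry a kubernetes field (presumably via https://github.com/elastic/beats/pull/27689).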
        kubernetes.cluster.name: "${KUBERNETES_CLUSTER_NAME}"
        cloud.provider: "o3"
        cloud.availability_zone: "o3"

      processors:
        - add_host_metadata:

      cloud.id: "${ELASTIC_CLOUD_ID}"
      cloud.auth: "${ELASTIC_CLOUD_AUTH}"
ryan-dyer-sp commented 2 years ago

Please fix this. We updated from 7.9.1 to 8.2.2 only to find that all of our kubernetes metadata had stopped working. After enabling debug logging and not seeing anything wrong, and going through the breaking changes in the release notes without finding anything, I finally decided to check the issues, and here we are.

This is a breaking change that should have been mentioned as such in the release notes, not just as a bug fix. What bug is this fixing? It's not mentioned in the PR. This behavior does not appear to be documented anywhere on this page: https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html

ryan-dyer-sp commented 2 years ago

For those who also stumble across this issue, here is a workaround: remove the kubernetes.* fields from the fields object and add an add_fields processor to your processors.

      - add_fields:
          # We use the add_fields processor instead of the fields object as add_kubernetes_metadata does not work if it finds any existing kubernetes.* fields on the event.
          # https://github.com/elastic/beats/issues/31171
          target: kubernetes
          fields:
            cluster: <cluster> 

IDK if you can set sub-fields (cluster.name) this way or not.
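A nested map under fields might do it, though; something like the following (an untested sketch, with <cluster> as a placeholder for your cluster name):

      - add_fields:
          # Untested sketch: nesting a map under `fields` should end up as
          # kubernetes.cluster.name on the event rather than a flat kubernetes.cluster field.
          target: kubernetes
          fields:
            cluster:
              name: <cluster>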

botelastic[bot] commented 1 year ago

Hi! We just realized that we haven't looked into this issue in a while. We're sorry!

We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:. Thank you for your contribution!

m-standfuss commented 1 year ago

We just spent hours looking for logs to troubleshoot an issue, only to realize that our search parameters included a k8s field that was no longer being populated after our upgrade to 8.8. This is a bad one for us.

100% agree with @ryan-dyer-sp's sentiment:

This is a breaking change that should have been mentioned as such in the release notes, not just as a bug fix. What bug is this fixing? It's not mentioned in the PR. This behavior does not appear to be documented anywhere on this page: elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html

qaiserali commented 2 months ago

I'm experiencing the same issue with filebeat version 8.14.1. Any idea how to resolve it and add k8s metadata using 'add_kubernetes_metadata'? According to the documentation available at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html#k8s-beat-role-based-access-control-for-beats, it should work.