SumoLogic / sumologic-kubernetes-collection

Sumo Logic collection solution for Kubernetes
Apache License 2.0

Unable to enrich containers logs with kubernetes metadata #1905

Closed: binc75 closed this issue 8 months ago

binc75 commented 2 years ago

Environment:

Description: I'm trying to enrich container logs sent to Sumo Logic with Kubernetes metadata. I can see the container logs, but no Kubernetes metadata is attached.

Here is an example of what I get:

{
  "timestamp": 1637844599729,
  "log": {
    "log": "2021-11-25 13:49:59.729 INFO  (qtp1924227192-24) [   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :clusterstatus with params action=CLUSTERSTATUS&wt=json and sendToOCPQueue=true\n",
    "stream": "stdout",
    "time": "2021-11-25T12:49:59.729328394Z"
  }
}
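
What I would expect, based on the kubernetes_metadata filter's output, is a kubernetes block roughly like this (the values below are just made-up placeholders; with log_format "fields" the metadata should ultimately show up as Sumo Logic fields rather than inside the JSON body):

{
  "timestamp": 1637844599729,
  "log": "2021-11-25 13:49:59.729 INFO  (qtp1924227192-24) [   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :clusterstatus ...",
  "stream": "stdout",
  "kubernetes": {
    "namespace_name": "example-namespace",
    "pod_name": "example-pod-0",
    "container_name": "example-container",
    "host": "example-node",
    "labels": {
      "app": "example"
    }
  }
}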

My values.yaml is pretty simple; I only care about logs, not metrics and so on:

sumologic:
  ## If enabled, a pre-install hook will create Collector and Sources in Sumo Logic
  setupEnabled: false

  ## If you set it to false, it would set EXCLUDE_NAMESPACE=<release-namespace>
  ## and not add the fluentD/fluent-bit logs and Prometheus remotestorage metrics.
  collectionMonitoring: false

  ### Metrics configuration
  ## Set the enabled flag to false for disabling metrics ingestion altogether.
  metrics:
    enabled: false

  ### Traces configuration
  ## This is experimental feature and may be unavailable for your account
  traces:
    enabled: false

fluentd:
  persistence:
    enabled: true #used to be false

  logs:
    statefulset:
      resources:
        limits:
          memory: 1Gi
          cpu: !!null
        requests:
          memory: 1Gi
          cpu: 500m
    autoscaling:
      targetCPUUtilizationPercentage: 80
    containers:
      multiline:
        enabled: false
      excludeContainerRegex: "(istio-proxy|setconntrack|istio-init)"
      excludeNamespaceRegex: "(kube-system|tracing|istio-system|logging)"
      excludePodRegex: "(platform-kafka-replicator.*|search-solr-monitor-.*|di-airflow-scheduler-.*|open-policy-agent-opa.*|jaeger-collector.*|prometheus-((postgres|node)-exporter|adapter).*|.*-kb.*|grafana.*|ric-webhooks.*|search-tool-query-comparison.*|.*loadtest.*)"
    kubelet:
      enabled: false
    systemd:
      excludeUnitRegex: "(docker.service|kubelet.*|node-problem-detector.service|k8s_kubelet.*|containerd.service)"
    default:
      excludeUnitRegex: "(docker.service|kubelet.*|node-problem-detector.service|k8s_kubelet.*containerd.service)"
  metrics:
    enabled: false
  events:
    enabled: false

fluent-bit:
  config:
    inputs: |
      [INPUT]
          Name                tail
          Path                /var/log/containers/*.log
          Parser              containerd
          Tag                 containers.*
          Refresh_Interval    1
          Rotate_Wait         60
          Mem_Buf_Limit       5MB
          Skip_Long_Lines     On
          DB                  /tail-db/tail-containers-state-sumo.db
          DB.Sync             Normal
      [INPUT]
          Name            systemd
          Tag             host.*
          DB              /tail-db/systemd-state-sumo.db
          Systemd_Filter  _SYSTEMD_UNIT=addon-config.service
          Systemd_Filter  _SYSTEMD_UNIT=addon-run.service
          Systemd_Filter  _SYSTEMD_UNIT=cfn-etcd-environment.service
          Systemd_Filter  _SYSTEMD_UNIT=cfn-signal.service
          Systemd_Filter  _SYSTEMD_UNIT=clean-ca-certificates.service
          Systemd_Filter  _SYSTEMD_UNIT=containerd.service
          Systemd_Filter  _SYSTEMD_UNIT=coreos-metadata.service
          Systemd_Filter  _SYSTEMD_UNIT=coreos-setup-environment.service
          Systemd_Filter  _SYSTEMD_UNIT=coreos-tmpfiles.service
          Systemd_Filter  _SYSTEMD_UNIT=dbus.service
          Systemd_Filter  _SYSTEMD_UNIT=docker.service
          Systemd_Filter  _SYSTEMD_UNIT=efs.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd-member.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd2.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd3.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-check.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-reconfigure.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-save.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-update-status.service
          Systemd_Filter  _SYSTEMD_UNIT=flanneld.service
          Systemd_Filter  _SYSTEMD_UNIT=format-etcd2-volume.service
          Systemd_Filter  _SYSTEMD_UNIT=kube-node-taint-and-uncordon.service
          Systemd_Filter  _SYSTEMD_UNIT=kubelet.service
          Systemd_Filter  _SYSTEMD_UNIT=ldconfig.service
          Systemd_Filter  _SYSTEMD_UNIT=locksmithd.service
          Systemd_Filter  _SYSTEMD_UNIT=logrotate.service
          Systemd_Filter  _SYSTEMD_UNIT=lvm2-monitor.service
          Systemd_Filter  _SYSTEMD_UNIT=mdmon.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-idmapd.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-mountd.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-server.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-utils.service
          Systemd_Filter  _SYSTEMD_UNIT=node-problem-detector.service
          Systemd_Filter  _SYSTEMD_UNIT=ntp.service
          Systemd_Filter  _SYSTEMD_UNIT=oem-cloudinit.service
          Systemd_Filter  _SYSTEMD_UNIT=rkt-gc.service
          Systemd_Filter  _SYSTEMD_UNIT=rkt-metadata.service
          Systemd_Filter  _SYSTEMD_UNIT=rpc-idmapd.service
          Systemd_Filter  _SYSTEMD_UNIT=rpc-mountd.service
          Systemd_Filter  _SYSTEMD_UNIT=rpc-statd.service
          Systemd_Filter  _SYSTEMD_UNIT=rpcbind.service
          Systemd_Filter  _SYSTEMD_UNIT=set-aws-environment.service
          Systemd_Filter  _SYSTEMD_UNIT=system-cloudinit.service
          Systemd_Filter  _SYSTEMD_UNIT=systemd-timesyncd.service
          Systemd_Filter  _SYSTEMD_UNIT=update-ca-certificates.service
          Systemd_Filter  _SYSTEMD_UNIT=user-cloudinit.service
          Systemd_Filter  _SYSTEMD_UNIT=var-lib-etcd2.service
          Max_Entries     1000
          Read_From_Tail  true

  tolerations:
    - key: node-data
      operator: Exists
    - key: node-solr-pri
      operator: Exists
    - key: node-solr-sec
      operator: Exists
    - key: node-solr-admin
      operator: Exists
    - key: CriticalAddonsOnly
      operator: Exists
    - key: node-istio
      operator: Exists
    - key: node-platform
      operator: Exists
    - key: node-solr-myricardo-pri
      operator: Exists
    - key: node-solr-myricardo-sec
      operator: Exists
    - key: node-backup
      operator: Exists
    - key: node-prometheus-ha
      operator: Exists

kube-prometheus-stack:
  enabled: false

The sumologic-sumologic-fluentd-logs configmap looks like this

apiVersion: v1
data:
  buffer.output.conf: |
    compress "gzip"
    flush_interval "5s"
    flush_thread_count "8"
    chunk_limit_size "1m"
    total_limit_size "128m"
    queued_chunks_limit_size "128"
    overflow_action drop_oldest_chunk
    retry_max_interval "10m"
    retry_forever "true"
  common.conf: |-
    # Prevent fluentd from handling records containing its own logs and health checks.
    <match fluentd.pod.healthcheck>
      @type relabel
      @label @FLUENT_LOG
    </match>
    <label @FLUENT_LOG>
      <match **>
        @type null
      </match>
    </label>
    # expose the Fluentd metrics to Prometheus
    <source>
      @type prometheus
      metrics_path /metrics
      port 24231
    </source>
    <source>
      @type prometheus_output_monitor
    </source>
    <source>
      @type http
      port 9880
      bind 0.0.0.0
    </source>
    <system>
      log_level info
    </system>
  fluent.conf: |-
    @include common.conf
    @include logs.conf
  logs.conf: "<source>\n  @type forward\n  port 24321\n  bind 0.0.0.0\n  \n</source>\n@include
    logs.source.containers.conf\n@include logs.source.systemd.conf\n@include logs.source.default.conf\n"
  logs.enhance.k8s.metadata.filter.conf: |-
    cache_size  "10000"
    cache_ttl  "7200"
    cache_refresh "3600"
    cache_refresh_variation "900"
    in_namespace_path '$.kubernetes.namespace_name'
    in_pod_path '$.kubernetes.pod_name'
    core_api_versions v1
    api_groups apps/v1,extensions/v1beta1
    data_type logs
  logs.kubernetes.metadata.filter.conf: |-
    annotation_match ["sumologic\.com.*"]
    de_dot false
    watch "true"
    ca_file ""
    verify_ssl "true"
    client_cert ""
    client_key ""
    bearer_token_file ""
    cache_size "10000"
    cache_ttl "7200"
    tag_to_kubernetes_name_regexp '.+?\.containers\.(?<pod_name>[^_]+)_(?<namespace>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$'
  logs.kubernetes.sumologic.filter.conf: "source_name \"%{namespace}.%{pod}.%{container}\"\nsource_host
    \nlog_format \"fields\"\nsource_category \"%{namespace}/%{pod_name}\"\nsource_category_prefix
    \"kubernetes/\"\nsource_category_replace_dash \"/\"\nexclude_pod_regex \"(platform-kafka-replicator.*|search-solr-monitor-.*|di-airflow-scheduler-.*|open-policy-agent-opa.*|jaeger-collector.*|prometheus-((postgres|node)-exporter|adapter).*|.*-kb.*|grafana.*|ric-webhooks.*|search-tool-query-comparison.*|.*loadtest.*)\"\nexclude_container_regex
    \"(istio-proxy|setconntrack|istio-init)\"\nexclude_host_regex \"\"\nper_container_annotations_enabled
    false\nper_container_annotation_prefixes \n"
  logs.output.conf: |
    data_type logs
    log_key log
    endpoint "#{ENV['SUMO_ENDPOINT_DEFAULT_LOGS_SOURCE']}"
    verify_ssl "true"
    log_format "fields"
    add_timestamp "true"
    timestamp_key "timestamp"
    proxy_uri ""
    compress "true"
    compress_encoding "gzip"
  logs.source.containers.conf: "\n\n# match all  container logs and label them @NORMAL\n<match
    containers.**>\n  @type relabel\n  @label @NORMAL\n</match>\n<label @NORMAL>\n\n
    \ # only match fluentd logs based on fluentd container log file name.\n  # by
    default, this is <filter **collection-sumologic-fluentd**>\n  <filter **sumologic-sumologic-fluentd**>\n
    \   # only ingest fluentd logs of levels: {error, fatal} and warning messages
    if buffer is full\n    @type grep\n    <regexp>\n      key log\n      pattern
    /\\[error\\]|\\[fatal\\]|drop_oldest_chunk|retry succeeded/\n    </regexp>\n  </filter>\n\n\n
    <filter **sumologic-sumologic-otelcol**>\n   @type grep\n   <regexp>\n     key
    log\n     # Select only known error/warning/fatal/panic levels or logs coming
    from one of the source known to provide useful data\n     pattern /\\\"level\\\":\\\"(error|warning|fatal|panic|dpanic)\\\"|\\\"caller\\\":\\\"(builder|service|kube|static)/\n
    \  </regexp>\n </filter>\n\n  # third-party Kubernetes metadata  filter plugin\n
    \ <filter containers.**>\n    @type kubernetes_metadata\n    @log_level error\n
    \   @include logs.kubernetes.metadata.filter.conf\n  </filter>\n  # Sumo Logic
    Kubernetes metadata enrichment filter plugin\n  <filter containers.**>\n    @type
    enhance_k8s_metadata\n    @log_level error\n    @include logs.enhance.k8s.metadata.filter.conf\n
    \ </filter>\n  \n  # Kubernetes Sumo Logic filter plugin\n  <filter containers.**>\n
    \   @type kubernetes_sumologic\n    @include logs.kubernetes.sumologic.filter.conf\n
    \   \n    exclude_namespace_regex \"logging|(kube-system|tracing|istio-system|logging)\"\n
    \ </filter>\n  \n  \n  <match containers.**>\n    @type copy\n    <store>\n      @type
    sumologic\n      @id sumologic.endpoint.logs\n      sumo_client \"k8s_2.1.7\"\n
    \     @log_level error\n      @include logs.output.conf\n      <buffer>\n        @type
    file\n        path /fluentd/buffer/logs.containers\n        @include buffer.output.conf\n
    \     </buffer>\n    </store>\n  </match>\n</label>\n"
  logs.source.default.conf: "\n<filter **>\n  @type grep\n  <exclude>\n    key message\n
    \   pattern /disable filter chain optimization/\n  </exclude>\n</filter>\n  \n<filter
    **>\n  @type kubernetes_sumologic\n  source_name \"k8s_default\"\n  source_category
    \"default\"\n  source_category_prefix \"kubernetes/\"\n  source_category_replace_dash
    \"/\"\n  exclude_facility_regex \"\"\n  exclude_host_regex \"\"\n  exclude_priority_regex
    \"\"\n  exclude_unit_regex \"(docker.service|kubelet.*|node-problem-detector.service|k8s_kubelet.*containerd.service)\"\n</filter>\n
    \ \n<match **>\n  @type copy\n  <store>\n    @type sumologic\n    @id sumologic.endpoint.logs.default\n
    \   sumo_client \"k8s_2.1.7\"\n    @include logs.output.conf\n    <buffer>\n      @type
    file\n      path /fluentd/buffer/logs.default\n      @include buffer.output.conf\n
    \   </buffer>\n  </store>\n</match>\n"
  logs.source.systemd.conf: "\n<match host.kubelet.**>\n  @type null\n</match>\n\n<match
    host.**>\n  @type relabel\n  @label @SYSTEMD\n</match>\n<label @SYSTEMD>\n  \n
    \ <filter host.**>\n    @type kubernetes_sumologic\n    source_name \"k8s_systemd\"\n
    \   source_category \"system\"\n    source_category_prefix \"kubernetes/\"\n    source_category_replace_dash
    \"/\"\n    exclude_facility_regex \"\"\n    exclude_host_regex \"\"\n    exclude_priority_regex
    \"\"\n    exclude_unit_regex \"(docker.service|kubelet.*|node-problem-detector.service|k8s_kubelet.*|containerd.service)\"\n
    \ </filter>\n  <filter host.**>\n    @type record_modifier\n    <record>\n      _sumo_metadata
    ${record[\"_sumo_metadata\"][:source] = tag_parts[1]; record[\"_sumo_metadata\"]}\n
    \   </record>\n  </filter>\n  \n  \n  <match **>\n    @type copy\n    <store>\n
    \     @type sumologic\n      @id sumologic.endpoint.logs.systemd\n      sumo_client
    \"k8s_2.1.7\"\n      @include logs.output.conf\n      <buffer>\n        @type
    file\n        path /fluentd/buffer/logs.systemd\n        @include buffer.output.conf\n
    \     </buffer>\n    </store>\n  </match>\n</label>\n"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: sumologic
    meta.helm.sh/release-namespace: logging
  creationTimestamp: "2021-11-26T10:22:59Z"
  labels:
    app: sumologic-sumologic-fluentd-logs
    app.kubernetes.io/managed-by: Helm
    chart: sumologic-2.1.7
    heritage: Helm
    release: sumologic
  name: sumologic-sumologic-fluentd-logs
  namespace: logging
  resourceVersion: "2991187415"
  uid: 467d2cc6-c6ac-4b52-8482-75d96ed14087

I'm a little bit lost because I thought this would work out of the box, but I'm clearly missing something.

Note: I was able to get the k8s metadata using fluent-bit by changing the tail INPUT Tag to kube.* to match the fluent-bit filter, but I suppose this is not the right way, since it broke the filtering in fluentd.

Thank you!

perk-sumo commented 2 years ago

Hey @binc75,

As you correctly noted, changing the INPUT tag from containers.* to kube.* results in broken collection, since the Fluentd pipeline routes container logs based on the containers.** tag (see the <match containers.**> blocks in your ConfigMap). More info can be found in the explanation for issue #1563.

Is there anything interesting in the Fluentd logs?

I don't see anything suspicious in your config; the metadata enrichment should work. Let us take a look and get back to you.

binc75 commented 2 years ago

Hi @perk-sumo, I didn't notice anything unusual in the Fluentd logs; on that side everything looks good. Thank you for having a look.

Cheers Nicola.

andrzej-stencel commented 2 years ago

@binc75 from a cursory look at your logs, you're using the wrong parser. It seems your logs are in Docker format, but you're using the containerd parser in your Fluent Bit config.
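
For reference, the two on-disk formats differ roughly like this (message shortened): the Docker json-file driver writes JSON lines, while containerd/CRI writes a plain-text header before the message.

# Docker (json-file) log line
{"log":"2021-11-25 13:49:59.729 INFO ...\n","stream":"stdout","time":"2021-11-25T12:49:59.729328394Z"}

# containerd / CRI log line
2021-11-25T12:49:59.729328394Z stdout F 2021-11-25 13:49:59.729 INFO ...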

binc75 commented 2 years ago

@astencel-sumo actually the logs are parsed and sent to Sumo, just without the k8s enrichment. If I remember correctly, with the Docker parser no logs were sent at all. I will give the Docker parser another try anyway and let you know.

binc75 commented 2 years ago

@astencel-sumo I tried the Docker parser, but at the end of the day the situation remains pretty much the same: no k8s enrichment.

Screenshot from 2021-12-02 07-24-50

perk-sumo commented 2 years ago

Hey @binc75, could you check one thing: are the fields created in your account? Could you check some of them, like statefulset, daemonset or namespace?

You can do it like that: image

binc75 commented 2 years ago

Hi @perk-sumo, I have namespace but no statefulset or daemonset. Screenshot from 2021-12-16 13-32-45

perk-sumo commented 2 years ago

Ok, so now it all makes sense. The metadata is being added and sent to Sumo, but because there are no fields it cannot be used for search.

That's because in the values.yaml file I can see the following configuration key:

sumologic:
  ## If enabled, a pre-install hook will create Collector and Sources in Sumo Logic
  setupEnabled: false

When the setup job is enabled, it is responsible for all the configuration on the Sumo Logic collection side. Among other things, it adds the default k8s fields so that data can be searched by its metadata (like statefulset or daemonset) and the dashboards are populated correctly.

Can the setup be enabled? It's idempotent and can be run multiple times with the same effect, so there is no need to keep it disabled unless custom changes on the Sumo Logic collection side are needed.
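
Roughly, enabling it means flipping the flag and giving the setup job credentials, something like this (placeholder values):

sumologic:
  accessId: <SUMO_ACCESS_ID>
  accessKey: <SUMO_ACCESS_KEY>
  clusterName: <CLUSTER_NAME>
  ## re-enables the pre-install/pre-upgrade hook that creates the Collector,
  ## Sources and the default k8s fields in Sumo Logic
  setupEnabled: true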

Alternatively, all the fields can be added by hand in the UI; they should be available in the Dropped Fields list: image
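
For reference, the default fields the setup job creates should roughly be: cluster, container, daemonset, deployment, host, namespace, node, pod, service and statefulset (the exact list can differ between chart versions).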

ankitm123 commented 2 years ago

I am also seeing similar issues with our EKS setup (no enrichment). EKS: 1.21, sumologic: 2.6.0

I have setupEnabled set to true. However, I don't see statefulset and daemonset in the fields. I also added all the fields in the dropped fields list by hand.

I followed this to change the config to scrape from /var/log/pods: https://github.com/SumoLogic/sumologic-kubernetes-collection/blob/main/deploy/docs/Best_Practices.md#collecting-logs-from-varlogpods.

This is my config (everything else is default):

fluentd:
  logs:
    containers:
      excludePodRegex: "(oauth2-proxy.*|nexus.*|lighthouse.*|jx-preview-gc.*|)"
      excludeNamespaceRegex: "(sumologic|kuberhealthy|kube-public|default|jx-git-operator|kube-system|nginx|secret-infra|jx-vault)"
      k8sMetadataFilter:
        ## uses docker_id as alias for uid as it's being used in plugin's code directly
        tagToMetadataRegexp: .+?\.pods\.(?<namespace>[^_]+)_(?<pod_name>[^_]+)_(?<docker_id>(?<uid>[a-f0-9\-]{36}))\.(?<container_name>[^\._]+)\.(?<run_id>\d+)\.log$
  metrics:
    extraFilterPluginConf: |-
      <filter **>
        @type grep
          <exclude>
            key namespace
            pattern /(^sumologic$|^jx-git-operator$|^tekton-pipelines$|^kuberhealthy$|^kube-system$|^nginx$)/
          </exclude>
          <exclude>
            key pod
            pattern /(^oauth2-proxy|^nexus|^lighthouse-gc|^jx-preview-gc)/
          </exclude>
      </filter>
fluent-bit:
  config:
    inputs: |
      [INPUT]
          Name                tail
          Path                /var/log/pods/*/*/*.log
          Docker_Mode         On
          Docker_Mode_Parser  multi_line
          Tag                 containers.*
          Refresh_Interval    1
          Rotate_Wait         60
          Mem_Buf_Limit       5MB
          Skip_Long_Lines     On
          DB                  /tail-db/tail-containers-state-sumo.db
          DB.Sync             Normal
      [INPUT]
          Name            systemd
          Tag             host.*
          DB              /tail-db/systemd-state-sumo.db
          Systemd_Filter  _SYSTEMD_UNIT=addon-config.service
          Systemd_Filter  _SYSTEMD_UNIT=addon-run.service
          Systemd_Filter  _SYSTEMD_UNIT=cfn-etcd-environment.service
          Systemd_Filter  _SYSTEMD_UNIT=cfn-signal.service
          Systemd_Filter  _SYSTEMD_UNIT=clean-ca-certificates.service
          Systemd_Filter  _SYSTEMD_UNIT=containerd.service
          Systemd_Filter  _SYSTEMD_UNIT=coreos-metadata.service
          Systemd_Filter  _SYSTEMD_UNIT=coreos-setup-environment.service
          Systemd_Filter  _SYSTEMD_UNIT=coreos-tmpfiles.service
          Systemd_Filter  _SYSTEMD_UNIT=dbus.service
          Systemd_Filter  _SYSTEMD_UNIT=docker.service
          Systemd_Filter  _SYSTEMD_UNIT=efs.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd-member.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd2.service
          Systemd_Filter  _SYSTEMD_UNIT=etcd3.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-check.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-reconfigure.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-save.service
          Systemd_Filter  _SYSTEMD_UNIT=etcdadm-update-status.service
          Systemd_Filter  _SYSTEMD_UNIT=flanneld.service
          Systemd_Filter  _SYSTEMD_UNIT=format-etcd2-volume.service
          Systemd_Filter  _SYSTEMD_UNIT=kube-node-taint-and-uncordon.service
          Systemd_Filter  _SYSTEMD_UNIT=kubelet.service
          Systemd_Filter  _SYSTEMD_UNIT=ldconfig.service
          Systemd_Filter  _SYSTEMD_UNIT=locksmithd.service
          Systemd_Filter  _SYSTEMD_UNIT=logrotate.service
          Systemd_Filter  _SYSTEMD_UNIT=lvm2-monitor.service
          Systemd_Filter  _SYSTEMD_UNIT=mdmon.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-idmapd.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-mountd.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-server.service
          Systemd_Filter  _SYSTEMD_UNIT=nfs-utils.service
          Systemd_Filter  _SYSTEMD_UNIT=node-problem-detector.service
          Systemd_Filter  _SYSTEMD_UNIT=ntp.service
          Systemd_Filter  _SYSTEMD_UNIT=oem-cloudinit.service
          Systemd_Filter  _SYSTEMD_UNIT=rkt-gc.service
          Systemd_Filter  _SYSTEMD_UNIT=rkt-metadata.service
          Systemd_Filter  _SYSTEMD_UNIT=rpc-idmapd.service
          Systemd_Filter  _SYSTEMD_UNIT=rpc-mountd.service
          Systemd_Filter  _SYSTEMD_UNIT=rpc-statd.service
          Systemd_Filter  _SYSTEMD_UNIT=rpcbind.service
          Systemd_Filter  _SYSTEMD_UNIT=set-aws-environment.service
          Systemd_Filter  _SYSTEMD_UNIT=system-cloudinit.service
          Systemd_Filter  _SYSTEMD_UNIT=systemd-timesyncd.service
          Systemd_Filter  _SYSTEMD_UNIT=update-ca-certificates.service
          Systemd_Filter  _SYSTEMD_UNIT=user-cloudinit.service
          Systemd_Filter  _SYSTEMD_UNIT=var-lib-etcd2.service
          Max_Entries     1000
          Read_From_Tail  true
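
As a sanity check (this is just an illustrative script, not part of the chart), the tagToMetadataRegexp above can be tested against a tag of the shape fluent-bit produces for /var/log/pods paths. Note that Python needs (?P<name>...) where Fluentd/Ruby uses (?<name>...):

import re

# Same regexp as tagToMetadataRegexp above, with named groups converted to Python syntax
pattern = re.compile(
    r'.+?\.pods\.(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_'
    r'(?P<docker_id>(?P<uid>[a-f0-9\-]{36}))\.(?P<container_name>[^\._]+)\.(?P<run_id>\d+)\.log$'
)

# Illustrative tag: the fluent-bit tail input expands containers.* to the file path
# with "/" replaced by ".", e.g. /var/log/pods/<ns>_<pod>_<uid>/<container>/0.log
# (the pod name and UID below are made up)
tag = ("containers.var.log.pods."
       "default_my-app-6d4b75cb6d-abcde_0f1e2d3c-4b5a-6978-8a9b-0c1d2e3f4a5b"
       ".my-app.0.log")

m = pattern.match(tag)
print(m.groupdict() if m else "no match")
# -> namespace=default, pod_name=my-app-6d4b75cb6d-abcde,
#    docker_id/uid=0f1e2d3c-..., container_name=my-app, run_id=0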

The logs are in Docker format, so I am using Docker_Mode and Docker_Mode_Parser.

Is there something I am missing?