splunk / splunk-connect-for-kubernetes

Helm charts associated with kubernetes plug-ins

Setting volume ownership for /var/lib/kubelet/pods/157fccbb-05dd-4838-aa1b-11072dada410/volumes/kubernetes.io~downward-api/istio-podinfo and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow #849

Closed · kavita1205 closed 1 year ago

kavita1205 commented 1 year ago

Hi Team,

I am getting this message in Splunk instead of pod logs. Can someone please tell me how to fix this issue?

W0206 08:55:05.997292    4708 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/157fccbb-05dd-4838-aa1b-11072dada410/volumes/kubernetes.io~downward-api/istio-podinfo and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
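For context, this warning is emitted by the kubelet (not by SCK) while it recursively chowns a volume to match the pod's fsGroup; see the kubernetes/kubernetes issue linked in the message. On Kubernetes 1.20+ a common mitigation is setting fsGroupChangePolicy in the workload's securityContext so the recursive chown is skipped when ownership already matches. A minimal generic sketch (not part of this chart, and note the field is not available on Kubernetes 1.17):

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    fsGroup: 2000
    # Skip the recursive chown when the volume root already has the
    # expected ownership and permissions (beta in 1.20, GA in 1.23).
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9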

Values.yaml

COMPUTED VALUES:
global:
  logLevel: info
  splunk:
    hec:
      gzip_compression: false
      host: splunk-hec.oi.****.com
      insecureSSL: true
      port: 8088
      protocol: https
      token: 779EE032-1473-40F8-AA19-*******
splunk-kubernetes-logging:
  affinity: {}
  buffer:
    '@type': memory
    chunk_limit_records: 100000
    chunk_limit_size: 20m
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 5
    retry_type: periodic
    retry_wait: 30
    total_limit_size: 600m
  bufferChunkKeys:
  - index
  charEncodingUtf8: false
  containers:
    enableStatWatcher: true
    localTime: false
    logFormat: '%Y-%m-%dT%H:%M:%S.%N%:z'
    logFormatType: cri
    path: /var/log
    pathDest: /var/lib/docker/containers
    refreshInterval: null
    removeBlankEvents: true
  customFilters: {}
  customMetadata: null
  customMetadataAnnotations: null
  enabled: true
  environmentVar: null
  extraLabels: null
  extraVolumeMounts: []
  extraVolumes: []
  fluentd:
    path: /var/log/containers/*.log
  fullnameOverride: lv-splunk-logging
  global:
    kubernetes:
      clusterName: ****-ml-lv
    logLevel: info
    metrics:
      service:
        enabled: true
        headless: true
    monitoring_agent_enabled: true
    prometheus_enabled: true
    serviceMonitor:
      additionalLabels: {}
      enabled: false
      interval: ""
      metricsPort: 24231
      scrapeTimeout: 10s
    splunk:
      hec:
        gzip_compression: false
        host: splunk-hec.oi.***.com
        insecureSSL: true
        port: 8088
        protocol: https
        token: 779EE032-1473-40F8-AA19-*****
  image:
    name: splunk/fluentd-hec
    pullPolicy: IfNotPresent
    registry: docker.io
    tag: 1.3.1
    usePullSecret: false
  indexFields: []
  journalLogPath: /var/log/journal
  k8sMetadata:
    cache_ttl: 3600
    podLabels:
    - app
    - k8s-app
    - release
    propagate_namespace_labels: false
    watch: true
  kubernetes:
    clusterName: ****-ml-lv
    securityContext: false
  logLevel: null
  logs:
    dns-controller:
      from:
        pod: dns-controller
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:dns-controller
    dns-sidecar:
      from:
        container: sidecar
        pod: kube-dns
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubedns-sidecar
    dnsmasq:
      from:
        pod: kube-dns
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:dnsmasq
    docker:
      from:
        journald:
          unit: docker.service
      sourcetype: kube:docker
    etcd:
      from:
        container: etcd-container
        pod: etcd-server
    etcd-events:
      from:
        container: etcd-container
        pod: etcd-server-events
    etcd-minikube:
      from:
        container: etcd
        pod: etcd-minikube
    kube-apiserver:
      from:
        pod: kube-apiserver
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-apiserver
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver-audit.log
      sourcetype: kube:apiserver-audit
      timestampExtraction:
        format: '%Y-%m-%dT%H:%M:%SZ'
    kube-controller-manager:
      from:
        pod: kube-controller-manager
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-controller-manager
    kube-dns-autoscaler:
      from:
        container: autoscaler
        pod: kube-dns-autoscaler
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-dns-autoscaler
    kube-proxy:
      from:
        pod: kube-proxy
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-proxy
    kube-scheduler:
      from:
        pod: kube-scheduler
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-scheduler
    kubedns:
      from:
        pod: kube-dns
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubedns
    kubelet:
      from:
        journald:
          unit: kubelet.service
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubelet
  namespace: null
  nodeSelector:
    beta.kubernetes.io/os: linux
  podAnnotations: null
  podSecurityPolicy:
    apparmor_security: true
    create: false
  priorityClassName: null
  rbac:
    create: true
    openshiftPrivilegedSccBinding: false
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  rollingUpdate: null
  secret:
    create: true
  sendAllMetadata: false
  serviceAccount:
    create: true
  sourcetypePrefix: kube
  splunk:
    hec:
      caFile: null
      clientCert: null
      clientKey: null
      consume_chunk_on_4xx_errors: null
      fullUrl: null
      gzip_compression: null
      host: splunk-hec.oi.****.com
      indexName: ml_logs
      indexRouting: false
      indexRoutingDefaultIndex: default
      insecureSSL: true
      port: 8088
      protocol: https
      token: 779EE032-1473-40F8-AA19-*****
    ingest_api: {}
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
splunk-kubernetes-metrics:
  affinity: {}
  aggregatorBuffer:
    '@type': memory
    chunk_limit_records: 10000
    chunk_limit_size: 100m
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 10
    retry_type: periodic
    retry_wait: 30
    total_limit_size: 400m
  aggregatorNodeSelector:
    beta.kubernetes.io/os: linux
  aggregatorTolerations: {}
  buffer:
    '@type': memory
    chunk_limit_records: 10000
    chunk_limit_size: 100m
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 10
    retry_type: periodic
    retry_wait: 30
    total_limit_size: 400m
  customFilters: {}
  customFiltersAggr: {}
  enabled: true
  environmentVar: null
  environmentVarAgg: null
  extraLabels: null
  extraLabelsAgg: null
  fullnameOverride: lv-splunk-metrics
  global:
    kubernetes:
      clusterName: ****-m***-lv
    logLevel: info
    monitoring_agent_enabled: false
    monitoring_agent_index_name: false
    prometheus_enabled: true
    splunk:
      hec:
        gzip_compression: false
        host: splunk-hec.oi.****.com
        insecureSSL: true
        port: 8088
        protocol: https
        token: 779EE032-1473-40F8-AA19-*****
  image:
    name: splunk/k8s-metrics
    pullPolicy: IfNotPresent
    registry: docker.io
    tag: 1.2.1
    usePullSecret: false
  imageAgg:
    name: splunk/k8s-metrics-aggr
    pullPolicy: IfNotPresent
    registry: docker.io
    tag: 1.2.1
    usePullSecret: false
  kubernetes:
    bearerTokenFile: null
    caFile: null
    clusterName: x***-m**-lv
    insecureSSL: true
    kubeletAddress: '"#{ENV[''KUBERNETES_NODE_IP'']}"'
    kubeletPort: 10250
    kubeletPortAggregator: null
    secretDir: null
    useRestClientSSL: true
  logLevel: null
  metricsInterval: 60s
  namespace: null
  nodeSelector:
    beta.kubernetes.io/os: linux
  podAnnotations: null
  podAnnotationsAgg: null
  podSecurityPolicy:
    apparmor_security: true
    create: false
  priorityClassName: null
  priorityClassNameAgg: null
  rbac:
    create: true
  resources:
    fluent:
      limits:
        cpu: 200m
        memory: 300Mi
      requests:
        cpu: 200m
        memory: 300Mi
  rollingUpdate: null
  secret:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-metrics
    usePullSecrets: false
  splunk:
    hec:
      caFile: null
      clientCert: null
      clientKey: null
      consume_chunk_on_4xx_errors: null
      fullUrl: null
      host: splunk-hec.oi.***.com
      indexName: em_metrics
      insecureSSL: true
      port: 8088
      protocol: null
      token: 779EE&****-1473-40F8-AA19-UU*****
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
splunk-kubernetes-objects:
  enabled: false
  fullnameOverride: lv-splunk-object
  kubernetes:
    clusterName: xXXXi-ml-JJ
    insecureSSL: true
  objects:
    apps:
      v1:
      - interval: 60s
        name: daemon_sets
    core:
      v1:
      - interval: 60s
        name: pods
      - interval: 60s
        name: nodes
  rbac:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-objects
  splunk:
    hec:
      indexName: em_meta

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

hvaghani221 commented 1 year ago

I am getting this message in splunk instead of pod logs.

Does that mean log collection is not working? Can you share the logs of the SCK pods?
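(For anyone following along, a quick way to grab those logs, using the DaemonSet name and namespace that appear later in this thread — lv-splunk-logging in splunk-sck; substitute your own release name and namespace:)

kubectl -n splunk-sck get pods -o wide
kubectl -n splunk-sck logs daemonset/lv-splunk-logging --tail=200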

kavita1205 commented 1 year ago

Hi @harshit-splunk,

Please find the logs below.

2023-02-07 10:29:40 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-02-07 10:29:40 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2023-02-07 10:29:40 +0000 [info]: gem 'fluentd' version '1.15.3'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-concat' version '2.4.0'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-jq' version '0.5.1'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '3.1.0'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.3.1'
2023-02-07 10:29:40 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2023-02-07 10:29:40 +0000 [INFO]: Reading bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token
2023-02-07 10:29:41 +0000 [info]: using configuration file: <ROOT>
  <system>
    log_level info
    root_dir "/tmp/fluentd"
  </system>
  <source>
    @id containers.log
    @type tail
    @label @CONCAT
    tag "tail.containers.*"
    path "/var/log/containers/*.log"
    pos_file "/var/log/splunk-fluentd-containers.log.pos"
    path_key "source"
    read_from_head true
    enable_stat_watcher true
    refresh_interval 60
    <parse>
      @type "regexp"
      expression /^(?<time>[^\s]+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
      time_format "%Y-%m-%dT%H:%M:%S.%N%:z"
      time_key "time"
      time_type string
      localtime false
      unmatched_lines
    </parse>
  </source>
  <source>
    @id tail.file.kube-audit
    @type tail
    @label @CONCAT
    tag "tail.file.kube:apiserver-audit"
    path "/var/log/kube-apiserver-audit.log"
    pos_file "/var/log/splunk-fluentd-kube-audit.pos"
    read_from_head true
    path_key "source"
    <parse>
      @type "regexp"
      expression /^(?<log>.*)$/
      time_key "time"
      time_type string
      time_format "%Y-%m-%dT%H:%M:%SZ"
      unmatched_lines
    </parse>
  </source>
  <source>
    @id journald-docker
    @type systemd
    @label @CONCAT
    tag "journald.kube:docker"
    path "/var/log/journal"
    matches [{"_SYSTEMD_UNIT":"docker.service"}]
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/splunkd-fluentd-journald-docker.pos.json"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_SYSTEMD_UNIT":"source"}
      field_map_strict true
    </entry>
  </source>
  <source>
    @id journald-kubelet
    @type systemd
    @label @CONCAT
    tag "journald.kube:kubelet"
    path "/var/log/journal"
    matches [{"_SYSTEMD_UNIT":"kubelet.service"}]
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/splunkd-fluentd-journald-kubelet.pos.json"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_SYSTEMD_UNIT":"source"}
      field_map_strict true
    </entry>
  </source>
  <source>
    @id fluentd-monitor-agent
    @type monitor_agent
    @label @PARSE
    bind "0.0.0.0"
    port 24220
    tag "monitor_agent"
  </source>
  <label @CONCAT>
    <filter tail.containers.var.log.containers.**>
      @type concat
      key "log"
      partial_key "logtag"
      partial_value "P"
      separator ""
      timeout_label "@PARSE"
    </filter>
    <filter tail.containers.var.log.containers.dns-controller*dns-controller*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*sidecar*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*dnsmasq*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-proxy*kube-proxy*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*kubedns*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter journald.kube:kubelet>
      @type concat
      key "log"
      timeout_label "@PARSE"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
    </filter>
    <match **>
      @type relabel
      @label @PARSE
    </match>
  </label>
  <label @PARSE>
    <filter tail.containers.**>
      @type grep
      <exclude>
        key "log"
        pattern \A\z
      </exclude>
    </filter>
    <filter tail.containers.**>
      @type kubernetes_metadata
      annotation_match [".*"]
      de_dot false
      watch true
      cache_ttl 3600
    </filter>
    <filter tail.containers.**>
      @type record_transformer
      enable_ruby
      <record>
        sourcetype ${record.dig("kubernetes", "annotations", "splunk.com/sourcetype") ? record.dig("kubernetes", "annotations", "splunk.com/sourcetype") : "kube:container:"+record.dig("kubernetes","container_name")}
        container_name ${record.dig("kubernetes","container_name")}
        namespace ${record.dig("kubernetes","namespace_name")}
        pod ${record.dig("kubernetes","pod_name")}
        container_id ${record.dig("docker","container_id")}
        pod_uid ${record.dig("kubernetes","pod_id")}
        container_image ${record.dig("kubernetes","container_image")}
        cluster_name ****-ml-lv
        splunk_index ${record.dig("kubernetes", "annotations", "splunk.com/index."+record.dig("kubernetes","container_name")) ? record.dig("kubernetes", "annotations", "splunk.com/index."+record.dig("kubernetes","container_name")) : record.dig("kubernetes", "annotations", "splunk.com/index") ? record.dig("kubernetes", "annotations", "splunk.com/index") : record.dig("kubernetes", "namespace_annotations", "splunk.com/index") ? (record["kubernetes"]["namespace_annotations"]["splunk.com/index"]) : ("ml_logs")}
        label_app ${record.dig("kubernetes","labels","app")}
        label_k8s-app ${record.dig("kubernetes","labels","k8s-app")}
        label_release ${record.dig("kubernetes","labels","release")}
        exclude_list ${record.dig("kubernetes", "annotations", "splunk.com/exclude") ? record.dig("kubernetes", "annotations", "splunk.com/exclude") : record.dig("kubernetes", "namespace_annotations", "splunk.com/exclude") ? (record["kubernetes"]["namespace_annotations"]["splunk.com/exclude"]) : ("false")}
      </record>
    </filter>
    <filter tail.containers.**>
      @type grep
      <exclude>
        key "exclude_list"
        pattern /^true$/
      </exclude>
    </filter>
    <filter tail.containers.var.log.pods.**>
      @type jq_transformer
      jq ".record | . + (.source | capture(\"/var/log/pods/(?<pod_uid>[^/]+)/(?<container_name>[^/]+)/(?<container_retry>[0-9]+).log\")) | .sourcetype = (\"kube:container:\" + .container_name) | .splunk_index = \"ml_logs\""
    </filter>
    <filter tail.containers.var.log.containers.dns-controller*dns-controller*.log>
      @type record_transformer
      <record>
        sourcetype kube:dns-controller
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*sidecar*.log>
      @type record_transformer
      <record>
        sourcetype kube:kubedns-sidecar
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*dnsmasq*.log>
      @type record_transformer
      <record>
        sourcetype kube:dnsmasq
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-apiserver
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-controller-manager
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-dns-autoscaler
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-proxy*kube-proxy*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-proxy
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-scheduler
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*kubedns*.log>
      @type record_transformer
      <record>
        sourcetype kube:kubedns
      </record>
    </filter>
    <filter journald.**>
      @type jq_transformer
      jq ".record.source = \"/var/log/journal/\" + .record.source | .record.sourcetype = (.tag | ltrimstr(\"journald.\")) | .record.cluster_name = \"x***-ml-lv\" | .record.splunk_index = \"ml_logs\" |.record"
    </filter>
    <filter tail.file.**>
      @type jq_transformer
      jq ".record.sourcetype = (.tag | ltrimstr(\"tail.file.\")) | .record.cluster_name = \"****-ml-lv\" | .record.splunk_index = \"ml_logs\" | .record"
    </filter>
    <filter monitor_agent>
      @type jq_transformer
      jq ".record.source = \"namespace:splunk-sck/pod:lv-splunk-logging-bfnjl\" | .record.sourcetype = \"fluentd:monitor-agent\" | .record.cluster_name = \"***-ml-lv\" | .record.splunk_index = \"ml_logs\" | .record"
    </filter>
    <match **>
      @type relabel
      @label @SPLUNK
    </match>
  </label>
  <label @SPLUNK>
    <match **>
      @type splunk_hec
      protocol https
      hec_host "splunk-hec.oi.com"
      consume_chunk_on_4xx_errors true
      hec_port 8088
      hec_token xxxxxx
      index_key "splunk_index"
      insecure_ssl true
      host "las2-mlgpu34"
      source_key "source"
      sourcetype_key "sourcetype"
      app_name "splunk-kubernetes-logging"
      app_version "1.5.2"
      <fields>
        container_retry
        pod_uid
        pod
        container_name
        namespace
        container_id
        cluster_name
        label_app
        label_k8s-app
        label_release
      </fields>
      <buffer index>
        @type "memory"
        chunk_limit_records 100000
        chunk_limit_size 20m
        flush_interval 5s
        flush_thread_count 1
        overflow_action block
        retry_max_times 5
        retry_type periodic
        retry_wait 30
        total_limit_size 600m
      </buffer>
      <format monitor_agent>
        @type "json"
      </format>
      <format>
        @type "single_value"
        message_key "log"
        add_newline false
      </format>
    </match>
  </label>
  <source>
    @type prometheus
  </source>
  <source>
    @type forward
  </source>
  <source>
    @type prometheus_monitor
    <labels>
      host ${hostname}
    </labels>
  </source>
  <source>
    @type prometheus_output_monitor
    <labels>
      host ${hostname}
    </labels>
  </source>
</ROOT>
2023-02-07 10:29:41 +0000 [info]: starting fluentd-1.15.3 pid=1 ruby="2.7.6"
2023-02-07 10:29:41 +0000 [info]: spawn command to main:  cmdline=["/usr/bin/ruby", "-r/usr/local/share/gems/gems/bundler-2.3.26/lib/bundler/setup", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "--under-supervisor"]
2023-02-07 10:29:41 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-02-07 10:29:41 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.**" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.dns-controller*dns-controller*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*sidecar*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*dnsmasq*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-proxy*kube-proxy*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*kubedns*.log" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding filter in @CONCAT pattern="journald.kube:kubelet" type="concat"
2023-02-07 10:29:42 +0000 [info]: adding match in @CONCAT pattern="**" type="relabel"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="grep"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="kubernetes_metadata"
2023-02-07 10:29:42 +0000 [INFO]: Reading bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="grep"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.pods.**" type="jq_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.dns-controller*dns-controller*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns*sidecar*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns*dnsmasq*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-proxy*kube-proxy*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns*kubedns*.log" type="record_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="journald.**" type="jq_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="tail.file.**" type="jq_transformer"
2023-02-07 10:29:42 +0000 [info]: adding filter in @PARSE pattern="monitor_agent" type="jq_transformer"
2023-02-07 10:29:42 +0000 [info]: adding match in @PARSE pattern="**" type="relabel"
2023-02-07 10:29:42 +0000 [info]: adding match in @SPLUNK pattern="**" type="splunk_hec"
2023-02-07 10:29:42 +0000 [info]: adding source type="tail"
2023-02-07 10:29:42 +0000 [info]: adding source type="tail"
2023-02-07 10:29:42 +0000 [info]: adding source type="systemd"
2023-02-07 10:29:42 +0000 [info]: adding source type="systemd"
2023-02-07 10:29:42 +0000 [info]: adding source type="monitor_agent"
2023-02-07 10:29:42 +0000 [info]: adding source type="prometheus"
2023-02-07 10:29:42 +0000 [info]: adding source type="forward"
2023-02-07 10:29:42 +0000 [info]: adding source type="prometheus_monitor"
2023-02-07 10:29:42 +0000 [info]: adding source type="prometheus_output_monitor"
2023-02-07 10:29:42 +0000 [warn]: parameter 'de_dot' in <filter tail.containers.**>
  @type kubernetes_metadata
  annotation_match [".*"]
  de_dot false
  watch true
  cache_ttl 3600
</filter> is not used.
2023-02-07 10:29:42 +0000 [info]: #0 starting fluentd worker pid=21 ppid=1 worker=0
2023-02-07 10:29:42 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:29:42 +0000 [info]: #0 fluentd worker is now running worker=0
2023-02-07 10:29:48 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:29:54 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:30:05 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:30:15 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:30:25 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:30:35 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:30:45 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:30:57 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:31:06 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:31:16 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:31:26 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:31:37 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:31:47 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:31:57 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:32:07 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:32:17 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:32:28 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:32:38 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:32:48 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:32:58 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:33:08 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:33:18 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:33:29 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:33:39 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:33:49 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:33:59 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:34:09 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:34:21 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:34:30 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:34:40 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-metrics-9gd6z_splunk-sck_splunk-fluentd-k8s-metrics-e61d84facc1f050d7c205c8a69182dc502e3c9be70ade57aba7e464e2417951d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/lv-splunk-logging-bfnjl_splunk-sck_splunk-fluentd-k8s-logs-64c43b1563c8d1f846c0ae09f46eaf247894d2ac553309508d5a813b92ef6449.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_upgrade-ipam-21feab851ae0efcf10a36dc6ed503cb1717d174d3a01b8ec0d9e4c12c65de0dc.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_install-cni-268a35ed8016ab557d9d3794f5fd5364990983242c9fa0bb52607876b728fc6a.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_flexvol-driver-e15fe074d484147033e2a9a87aad524ee6a2b3427dc5c8efa2573b8448bbd442.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/calico-node-t526r_kube-system_calico-node-8de0412514ac31d46822bb2460b6f78f984aaff7514161ab972bc73bd7c31e82.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/kube-proxy-sl69k_kube-system_kube-proxy-bd8b8e171d5024c503e295615cbf1f78060977f4f24adc6a7671bb3ecdc5ae2d.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:42 +0000 [warn]: #0 [containers.log] /var/log/containers/ml42minions-34-deployment-66bbd6db5b-wgc46_ml42-lv-prod_ml42minions-34-pod-d50fa1483bfbca77453a8973f8bccc940f18e3ad3420c77b33b9c1718bafa70e.log unreadable. It is excluded and would be examined next time.
2023-02-07 10:34:50 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:35:00 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:35:11 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-02-07 10:35:21 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
kavita1205 commented 1 year ago

Adding on to this:

Last time, when I raised a similar issue (#763), you asked me to change logFormatType to cri instead of json. At that time I was setting up SCK for Kubernetes 1.24; this time I am setting up SCK for Kubernetes 1.17 with the same logFormatType, i.e. cri. But this time I am again getting the logs below for a few pods.

2023-02-07 10:29:46 +0000 [warn]: #0 [containers.log] pattern not matched: "{\"log\":\"2023-02-07 10:29:44 +0000 [warn]: #0 [containers.log] pattern not matched: \\\"{\\\\\\\"log\\\\\\\":\\\\\\\"2023-02-07 10:29:44 +0000 [warn]: #0 [containers.log] pattern not matched: \\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"log\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2023-02-07 10:29:44 +0000 [warn]: #0 [containers.log] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"log\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2023-02-07 10:29:43 +0000 [warn]: #0 [containers.log] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
kavita1205 commented 1 year ago

@harshit-splunk, I would really appreciate it if you could respond to this today.

hvaghani221 commented 1 year ago

Here it looks like the logs are in JSON format. Can you specify which container runtime you are using?

kavita1205 commented 1 year ago

@harshit-splunk, the container runtime is docker://19.3.15.
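(For reference, the runtime each node reports can be checked with kubectl; the CONTAINER-RUNTIME column shows values like docker://19.3.15 or containerd://1.6.x:)

kubectl get nodes -o wide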

kavita1205 commented 1 year ago

Strange, I am using logFormatType cri, not json.

hvaghani221 commented 1 year ago

With Docker, you should use the json format, as the Docker runtime produces logs in JSON. You can observe this in the connector log: "{\"log\":\"2023-02-07 10:29:44 +0000
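To illustrate the mismatch (illustrative lines, not taken from this cluster): Docker's json-file driver writes each log line as a JSON object, while CRI runtimes write a plain-text prefix, which is what the regexp in the containers.log tail source above expects:

# Docker json-file driver: one JSON object per line
{"log":"2023-02-07 10:29:44 +0000 some message\n","stream":"stdout","time":"2023-02-07T10:29:44.000000000Z"}

# CRI runtimes (containerd, CRI-O): timestamp, stream, partial/full tag, then message
2023-02-07T10:29:44.000000000Z stdout F 2023-02-07 10:29:44 +0000 some message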

hvaghani221 commented 1 year ago

Switching to json will fix the issue. Feel free to reopen this issue or create a new one.
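Concretely, in the values above that is a small change in the logging sub-chart (a sketch; the json time format shown here is an assumption based on Docker's UTC timestamps, so verify it against your chart version's defaults):

splunk-kubernetes-logging:
  containers:
    logFormatType: json
    # Docker's json-file "time" field is UTC with a trailing Z,
    # unlike the CRI format string used above
    logFormat: "%Y-%m-%dT%H:%M:%S.%NZ"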

hvaghani221 commented 1 year ago

FYI, Splunk Connect for Kubernetes is going to be deprecated in January 2024 (https://github.com/splunk/splunk-connect-for-kubernetes#end-of-support).

I would recommend moving to Splunk OpenTelemetry Collector for Kubernetes. You can refer to this migration guide for more details.
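(For reference, a minimal Helm sketch of that migration; the repo URL and chart name come from the splunk-otel-collector-chart project, and the splunkPlatform values keys should be verified against its current docs before use:)

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm install sck-migration splunk-otel-collector-chart/splunk-otel-collector \
  --set clusterName=my-cluster \
  --set splunkPlatform.endpoint=https://splunk-hec.example.com:8088/services/collector \
  --set splunkPlatform.token=<HEC_TOKEN> \
  --set splunkPlatform.index=ml_logs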