kube-logging / logging-operator

Logging operator for Kubernetes
https://kube-logging.dev
Apache License 2.0

Using Elasticsearch client 8.11.0 is not compatible for your Elasticsearch server #1706

Closed busyboy77 closed 4 months ago

busyboy77 commented 5 months ago

Just opening a new bug so that you may find some time to fix it.

I have deployed the Bitnami Elasticsearch stack running docker.io/bitnami/elasticsearch:8.12.2-debian-12-r0; however, the Logging operator shows the error below when I try to push logs.

2024-03-22 15:43:13 +0000 [info]: adding filter in @0f24568f7a85f811f71650c48d07e851 pattern="**" type="dedot"
2024-03-22 15:43:13 +0000 [info]: #0 [clusterflow:logging:cluster-flow-es:1] DeDot will recurse nested hashes and arrays
2024-03-22 15:43:13 +0000 [info]: adding match in @0f24568f7a85f811f71650c48d07e851 pattern="**" type="elasticsearch"
2024-03-22 15:43:13 +0000 [error]: #0 config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Using Elasticsearch client 8.11.0 is not compatible for your Elasticsearch server. Please check your using elasticsearch gem version and Elasticsearch server."
2024-03-22 15:43:13 +0000 [error]: Worker 0 exited unexpectedly with status 2
2024-03-22 15:43:13 +0000 [info]: Received graceful stop
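
For reference, the client gem version inside the running fluentd pod can be checked with a command along these lines (the pod name is a placeholder here; use the actual fluentd statefulset pod in your cluster):

kubectl -n logging exec fluentd-bit-logging-fluentd-0 -- fluent-gem list elasticsearch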

I'm running a Helm-based deployment of Logging operator version 4.5.6 with the configs given below.

Logging and FluentbitAgent

apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: expertflow-fluentbit-agent
spec:
  bufferStorage: {}
  bufferStorageVolume:
    hostPath:
      path: ""
  bufferVolumeImage: {}
  filterKubernetes: {}
  image: {}
  inputTail:
    storage.type: filesystem
  positiondb:
    hostPath:
      path: ""
  resources: {}
  updateStrategy: {}

---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: fluentd-bit-logging
  namespace: logging
spec:
  enableRecreateWorkloadOnImmutableFieldChange: true
  fluentd:
    bufferStorageVolume:
      pvc:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 40Gi
  controlNamespace: logging

My ClusterFlow and ClusterOutput

# ClusterFlows to deploy
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: cluster-flow-es
  namespace: logging
  labels:
    someLabel: foo
spec:
  filters:
    - record_modifier: # if you e.g. have multiple clusters
        records:
          - cluster: "CLUSTER_NAME"
    # replaces dots in labels and annotations with dashes to avoid mapping issues (app=foo (text) vs. app.kubernetes.io/name=foo (object))
    # fixes error: existing mapping for [kubernetes.labels.app] must be of type object but found [text]
    - dedot:
        de_dot_separator: "-"
        de_dot_nested: true
  globalOutputRefs:
    - cluster-output-es

---

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: cluster-output-es
  namespace: logging
  labels:
    someLabel: foo
spec:
  elasticsearch:
    host: devops218.ef.com/es
    #port: 9200
    user: elastic
    index_name: efcx
    password:
      valueFrom:
        secretKeyRef:
          name: elastic-password # kubectl -n logging create secret generic elastic-password --from-literal=password=admin123
          key: password
    scheme: https
    ssl_verify: false
    logstash_format: true
    include_timestamp: true
    reconnect_on_error: true
    reload_on_failure: true
    buffer:
      flush_at_shutdown: true
      type: file
      chunk_limit_size: 4M # determines HTTP payload size
      total_limit_size: 1024MB # max total buffer size
      flush_mode: interval
      flush_interval: 10s
      flush_thread_count: 2 # parallel send of logs
      overflow_action: block
      retry_forever: true # never discard buffer chunks
      retry_type: exponential_backoff
      retry_max_interval: 60s
    # enables logging of bad request reasons within the fluentd log file (in the pod /fluentd/log/out)
    log_es_400_reason: true

/kind bug

busyboy77 commented 5 months ago

I'm now reverting my ELK stack back to ES 8.11.x.

pepov commented 5 months ago

I think we can upgrade the client lib to 8.12, but before we do, it would be nice if you could try this on your side while you have both 8.11 and 8.12 servers available:

Pull our fluentd docker image, make the following modifications, and build a new one:

fluent-gem install -N --version 8.12.0 elasticsearch
fluent-gem uninstall --version 8.11.0 elasticsearch
fluent-gem uninstall --version 8.11.0 elasticsearch-api
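
For reference, a minimal Dockerfile sketch of that change (the FROM image and tag are assumptions; substitute the exact fluentd image and tag your deployment actually runs):

# Sketch only: rebuild the operator's fluentd image with the elasticsearch 8.12.0 gems.
# The FROM image/tag below is an assumption, not necessarily what your cluster uses.
FROM ghcr.io/kube-logging/fluentd:v1.16-full

# Install the newer client first, then remove the pinned 8.11.0 gems.
RUN fluent-gem install -N --version 8.12.0 elasticsearch \
 && fluent-gem uninstall --version 8.11.0 elasticsearch \
 && fluent-gem uninstall --version 8.11.0 elasticsearch-api

Build and push the result to a registry your cluster can pull from, e.g. docker build -t <your-registry>/fluentd-es812:test . followed by docker push <your-registry>/fluentd-es812:test.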

It would help me if you could try out this image first with your 8.11 server, and then, once upgraded, with your 8.12 server as well.
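
Once pushed, the rebuilt image can be wired in via the fluentd image override on the Logging resource; a sketch with hypothetical repository and tag values:

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: fluentd-bit-logging
  namespace: logging
spec:
  controlNamespace: logging
  fluentd:
    image:
      repository: <your-registry>/fluentd-es812 # hypothetical; point at the rebuilt image
      tag: test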

pepov commented 4 months ago

Closing this due to inactivity; please reopen if the suggested solution didn't work for you.