I'm new to Fluentd. After deploying it and trying to configure log delivery to Elasticsearch, I got the following error in the Fluentd pod:
The client is unable to verify that the server is Elasticsearch. Some functionality may not be compatible if the server is running an unsupported product.
2024-04-04 22:20:40 +0000 [warn]: #0 failed to flush the buffer. retry_times=7 next_retry_time=2024-04-04 22:22:39 +0000 chunk="6154cb4ec9ce286ed45aed44ccbd48df" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :user=>\"elastic\", :password=>\"obfuscated\", :path=>\"\"}): no address for elasticsearch-master (Resolv::ResolvError)"
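The Resolv::ResolvError at the end means the host name in the config simply does not resolve to an address. Outside of Fluentd, the same check can be sketched like this (Python for illustration; the host name is taken from the error above):

```python
import socket

def resolves(host: str) -> bool:
    """Return True if DNS gives the name an address (what Ruby's Resolv does for Fluentd)."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Host taken from the error message; in my cluster there is no Service with
# this name, so the lookup fails, which Fluentd reports as Resolv::ResolvError.
print(resolves("elasticsearch-master"))
```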
I've checked the Elasticsearch user and password, and they are configured correctly in the K8s Secret.
At first it looked like my custom config was not overriding the values in Fluentd (could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\"), so I changed this value in my deployment to the ECK Service name in K8s.
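For reference, ECK exposes the cluster through a Service named <cluster-name>-es-http, so assuming my Elasticsearch cluster is named eck-elasticsearch in the observability namespace, the in-cluster DNS name would be:

```
eck-elasticsearch-es-http.observability.svc.cluster.local
```

The short form eck-elasticsearch-es-http.observability used below should resolve to the same Service.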
Fluentd values.yaml
nameOverride: ""
fullnameOverride: ""

# DaemonSet, Deployment or StatefulSet
kind: "DaemonSet"

# azureblob, cloudwatch, elasticsearch7, elasticsearch8, gcs, graylog, kafka, kafka2, kinesis, opensearch
variant: elasticsearch8

# # Only applicable for Deployment or StatefulSet
# replicaCount: 1

image:
  repository: "fluent/fluentd-kubernetes-daemonset"
  pullPolicy: "IfNotPresent"
  tag: ""

## Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []

serviceAccount:
  create: true
  annotations: {}
  name: null

rbac:
  create: true

# from Kubernetes 1.25, PSP is deprecated
# See: https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes
# We automatically disable PSP if Kubernetes version is 1.25 or higher
podSecurityPolicy:
  enabled: true
  annotations: {}

## Security Context policies for controller pods
## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
## notes on enabling and using sysctls
##
podSecurityContext: {}
securityContext: {}

# Configure the lifecycle
# Ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
lifecycle: {}

# Configure the livenessProbe
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
livenessProbe:
  httpGet:
    path: /metrics
    port: metrics
  initialDelaySeconds: 0
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

# Configure the readinessProbe
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
readinessProbe:
  httpGet:
    path: /metrics
    port: metrics
  initialDelaySeconds: 0
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

resources: {}

## only available if kind is Deployment
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
  ## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics
  customRules: []

nodeSelector: {}

## Node tolerations for server scheduling to nodes with taints
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
tolerations: []

## Affinity and anti-affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Annotations to be added to fluentd DaemonSet/Deployment
##
annotations: {}

## Labels to be added to fluentd DaemonSet/Deployment
##
labels: {}

## Annotations to be added to fluentd pods
##
podAnnotations: {}

## Labels to be added to fluentd pods
##
podLabels: {}

## How long (in seconds) a pod needs to be stable before progressing the deployment
##
minReadySeconds:

## How long (in seconds) a pod may take to exit (useful with lifecycle hooks to ensure lb deregistration is done)
##
terminationGracePeriodSeconds:

## Deployment strategy / DaemonSet updateStrategy
##
updateStrategy: {}

## Additional environment variables to set for fluentd pods
env:
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "eck-elasticsearch-es-http.observability"
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "9200"
  - name: FLUENT_ELASTICSEARCH_SCHEME
    value: "https"
  - name: FLUENTD_SYSTEMD_CONF
    value: disable
  - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
    value: "false"
  - name: FLUENT_ELASTICSEARCH_USER
    valueFrom:
      secretKeyRef:
        name: fluentd-es-oauth
        key: user
  - name: FLUENT_ELASTICSEARCH_PWD
    valueFrom:
      secretKeyRef:
        name: fluentd-es-oauth
        key: password

envFrom: []

initContainers: []

# Name of the configMap containing a custom fluentd.conf configuration file to use instead of the default.
#mainConfigMapNameOverride: "fluentd-es-config"

## Name of the configMap containing files to be placed under /etc/fluent/config.d/
## NOTE: This will replace ALL default files in the aforementioned path!
# extraFilesConfigMapNameOverride: ""

mountVarLogDirectory: true
mountDockerContainersDirectory: true

volumes:
  - name: config-volume
    configMap:
      name: fluentd-es-config

volumeMounts: []

## Only available if kind is StatefulSet
## Fluentd persistence
##
persistence:
  enabled: false
  storageClass: "gp2"
  accessMode: ReadWriteOnce
  size: 10Gi

## Fluentd service
##
service:
  enabled: true
  type: "ClusterIP"
  annotations: {}
  ports: []

## Prometheus Monitoring
##
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus-operator
    namespace: "observability"
    namespaceSelector: {}
    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    ## relabel configs to apply to samples after ingestion.
    ##
    relabelings: []
  prometheusRule:
    enabled: true
    additionalLabels: {}
    namespace: ""
    rules:
      - alert: FluentdDown
        expr: up{job="fluentd"} == 0
        for: 5m
        labels:
          context: fluentd
          severity: warning
        annotations:
          summary: "Fluentd Down"
          description: "{{ $labels.pod }} on {{ $labels.nodename }} is down"
      - alert: FluentdScrapeMissing
        expr: absent(up{job="fluentd"} == 1)
        for: 15m
        labels:
          context: fluentd
          severity: warning
        annotations:
          summary: "Fluentd Scrape Missing"
          description: "Fluentd instance has disappeared from Prometheus target discovery"

## Grafana Monitoring Dashboard
##
dashboards:
  enabled: "true"
  namespace: ""
  labels:
    grafana_dashboard: '"1"'

## Fluentd list of plugins to install
##
plugins: []

## Add fluentd config files from K8s configMaps
##
configMapConfigs: []

ingress:
  enabled: false
  annotations: {}
  hosts:
    - port: 9880
  tls: []
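The credentials come from the fluentd-es-oauth Secret referenced above. Since Kubernetes stores Secret data base64-encoded, I double-checked the decoded values with a quick sketch (the encoded string below is a stand-in for illustration; the real one comes from kubectl):

```python
import base64

def decode_secret_field(encoded: str) -> str:
    """Decode one data field of a Kubernetes Secret (fields are base64-encoded)."""
    return base64.b64decode(encoded).decode("utf-8")

# Stand-in value; the real string comes from:
#   kubectl get secret fluentd-es-oauth -n observability -o jsonpath='{.data.user}'
print(decode_secret_field("ZWxhc3RpYw=="))  # -> elastic
```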
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-es-config
  namespace: "observability"
data:
  fluentd.conf: |
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @type prometheus
      port 24231
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # Ignore fluentd's own events
    <match fluent.**>
      @type null
    </match>

    # TCP input to receive logs from the forwarders
    <source>
      @type forward
      bind 0.0.0.0
      port 24224
    </source>

    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      bind 0.0.0.0
      port 9880
    </source>

    # Send the healthcheck to standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type stdout
    </match>

    # Send the logs to Elasticsearch
    <match **>
      @type elasticsearch
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PWD']}"
      ssl_verify false
      logstash_format true
      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
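The <match **> block pulls its connection settings from the pod environment via Fluentd's embedded Ruby ("#{ENV['...']}"). The effect is an ordinary environment lookup, sketched here in Python with the variable names from my values.yaml:

```python
import os

# Same names as in the env: section of values.yaml
os.environ["FLUENT_ELASTICSEARCH_HOST"] = "eck-elasticsearch-es-http.observability"
os.environ["FLUENT_ELASTICSEARCH_PORT"] = "9200"

# Fluentd's "#{ENV['FLUENT_ELASTICSEARCH_HOST']}" behaves like this lookup;
# in Ruby, a missing variable interpolates to an empty string, not an error.
host = os.environ.get("FLUENT_ELASTICSEARCH_HOST", "")
port = int(os.environ.get("FLUENT_ELASTICSEARCH_PORT", "9200"))
print(host, port)  # -> eck-elasticsearch-es-http.observability 9200
```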
My Setup
2.12 | Elastic: 8.13.1 | 0.5.2
Any idea where my mistake is?