When I update the fluent-forwarder-cm ConfigMap and provide the Elasticsearch host as shown below,
{
"fluentd-inputs.conf": "# HTTP input for the liveness and readiness probes
<source>
@type http
port 9880
</source>
# Get the logs from the containers running in the node
<source>
@type tail
path /var/log/containers/*.log
# exclude Fluentd logs
exclude_path /var/log/containers/*fluentd*.log
pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
tag kubernetes.*
read_from_head true
<parse>
@type json
</parse>
</source>
# enrich with kubernetes metadata
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
",
"fluentd-output.conf": "# Throw the healthcheck to the standard output instead of forwarding it
<match fluentd.healthcheck>
@type stdout
</match>
# Forward all logs to the aggregators
<match **>
@type elasticsearch
include_tag_key true
host \"elasticsearch-master.logging.svc.cluster.local\"
port \"9200\"
logstash_format true
<buffer>
@type file
path /opt/bitnami/fluentd/logs/buffers/logs.buffer
flush_thread_count 2
flush_interval 5s
</buffer>
</match>
",
"fluentd.conf": "# Ignore fluentd own events
<match fluent.**>
@type null
</match>
@include fluentd-inputs.conf
@include fluentd-output.conf
",
"metrics.conf": "# Prometheus Exporter Plugin
# input plugin that exports metrics
<source>
@type prometheus
port 24231
</source>
# input plugin that collects metrics from MonitorAgent
<source>
@type prometheus_monitor
<labels>
host #{hostname}
</labels>
</source>
# input plugin that collects metrics for output plugin
<source>
@type prometheus_output_monitor
<labels>
host #{hostname}
</labels>
</source>
# input plugin that collects metrics for in_tail plugin
<source>
@type prometheus_tail_monitor
<labels>
host #{hostname}
</labels>
</source>
"
}
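Since the errors below suggest the aggregator cannot reach Elasticsearch, it may help to make the output block more explicit about scheme and reconnection behaviour. This is only a sketch using documented fluent-plugin-elasticsearch options, with the host and port taken from the config above:

```
<match **>
  @type elasticsearch
  include_tag_key true
  host elasticsearch-master.logging.svc.cluster.local
  port 9200
  scheme http               # be explicit about the protocol
  reconnect_on_error true   # reopen the connection after an error
  reload_on_failure true    # re-resolve hosts when a request fails
  request_timeout 15s       # give a slow cluster more headroom
  logstash_format true
  <buffer>
    @type file
    path /opt/bitnami/fluentd/logs/buffers/logs.buffer
    flush_thread_count 2
    flush_interval 5s
  </buffer>
</match>
```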
I get the following errors.

1st Error:
kubectl describe pod kibana-kibana-7f47d4b8c5-7r8x7 -n logging
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  24m                   default-scheduler  Successfully assigned logging/kibana-kibana-7f47d4b8c5-7r8x7 to ip-172-20-32-143.ap-south-1.compute.internal
Normal   Pulled     24m                   kubelet            Container image "docker.elastic.co/kibana/kibana:7.12.0" already present on machine
Normal   Created    24m                   kubelet            Created container kibana
Normal   Started    24m                   kubelet            Started container kibana
Warning  Unhealthy  22m                   kubelet            Readiness probe failed: Error: Got HTTP code 000 but expected a 200
Warning  Unhealthy  4m28s (x25 over 24m)  kubelet            Readiness probe failed: Error: Got HTTP code 503 but expected a 200
2nd Error:
GET https://logs.example.in/
503 Service Temporarily Unavailable
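The ingress 503 is consistent with the failing readiness probe: while Kibana is unready, its Service has no endpoints, so the ingress has nothing to route to. A possible way to confirm this (resource names assumed from the pod events above; requires kubectl access to the cluster):

```shell
# If Kibana is unready, ENDPOINTS will show <none>.
kubectl -n logging get endpoints kibana-kibana

# Kibana usually reports 503 when it cannot reach Elasticsearch;
# its logs should say which host it is failing to connect to.
kubectl -n logging logs deploy/kibana-kibana --tail=50
```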
I am trying to set up an EFK stack in an AWS cluster using Helm. These are the steps I followed:
Installed Elasticsearch in the logging namespace using a values.yml.
At this point everything works. I can go to logs.example.in to view the Kibana dashboard. I can also exec into any pod, run ..., and it gives a response.
When I update the fluent-forwarder-cm ConfigMap with the Elasticsearch host, I get the two errors shown above, plus a 3rd error: running ... from inside any pod gives a timed-out error.
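For the 3rd error, one way to narrow down whether this is a DNS or a connectivity problem is to run a one-off curl pod against the Elasticsearch Service. This is a sketch: the pod name and image tag are assumptions, not from the original setup, and it requires kubectl access to the cluster:

```shell
# In-cluster service DNS follows <service>.<namespace>.svc.cluster.local.
ES_HOST="elasticsearch-master.logging.svc.cluster.local"

# Hypothetical one-off debug pod running curl against the service.
kubectl -n logging run es-debug --rm -i --restart=Never \
  --image=curlimages/curl:8.7.1 -- \
  -sS --max-time 10 "http://${ES_HOST}:9200/_cluster/health?pretty"
```

If the name does not resolve, the problem is DNS; if it resolves but the request times out, it is more likely a security-group or NetworkPolicy issue between the nodes.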