elastic / helm-charts

You know, for Kubernetes
Apache License 2.0

Readiness probe failed: Error: Got HTTP code 503 but expected a 200 #780

Closed: melissajenner22 closed this issue 3 years ago

melissajenner22 commented 4 years ago

Chart version: 7.7.1
Kubernetes version: 1.16
Kubernetes provider: EKS
Helm Version: 2.16.10

helm get release output

e.g. helm get elasticsearch (replace elasticsearch with the name of your helm release)

Be careful to obfuscate every secret (credentials, tokens, public IPs, ...) that could be visible in the output before copy-pasting.

If you find secrets in plain text in the helm get release output, you should use Kubernetes Secrets to manage them in a secure way (see the Security Example).

Output of helm get release:

```
helm get kibana
REVISION: 1
RELEASED: Thu Aug 13 20:14:45 2020
CHART: kibana-7.7.1
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
elasticsearchHosts: http://elasticsearch-master:9200
elasticsearchURL: ""
envFrom: []
extraContainers: ""
extraEnvs:
- name: NODE_OPTIONS
  value: --max-old-space-size=1800
extraInitContainers: ""
fullnameOverride: ""
healthCheckPath: /app/kibana
httpPort: 5601
image: docker.elastic.co/kibana/kibana
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.7.1
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
kibanaConfig: {}
labels: {}
lifecycle: {}
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext:
  fsGroup: 1000
priorityClassName: ""
protocol: http
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 800m
    memory: 1Gi
  requests:
    cpu: 800m
    memory: 1Gi
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
serverHost: 0.0.0.0
service:
  annotations: {}
  labels: {}
  loadBalancerSourceRanges: []
  nodePort: ""
  port: 5601
  type: ClusterIP
serviceAccount: ""
tolerations: []
updateStrategy:
  type: Recreate
HOOKS:
MANIFEST:
---
# Source: kibana/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: "kibana"
    heritage: Tiller
spec:
  type: ClusterIP
  ports:
  - port: 5601
    protocol: TCP
    name: http
    targetPort: 5601
  selector:
    app: kibana
    release: "kibana"
---
# Source: kibana/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: "kibana"
    heritage: Tiller
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: kibana
      release: "kibana"
  template:
    metadata:
      labels:
        app: kibana
        release: "kibana"
      annotations:
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
      containers:
      - name: kibana
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/kibana/kibana:7.7.1"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-master:9200"
        - name: SERVER_HOST
          value: "0.0.0.0"
        - name: NODE_OPTIONS
          value: --max-old-space-size=1800
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              http () {
                  local path="${1}"
                  set -- -XGET -s --fail -L

                  if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
                    set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
                  fi

                  STATUS=$(curl --output /dev/null --write-out "%{http_code}" -k "$@" "http://localhost:5601${path}")
                  if [[ "${STATUS}" -eq 200 ]]; then
                    exit 0
                  fi

                  echo "Error: Got HTTP code ${STATUS} but expected a 200"
                  exit 1
              }
              http "/app/kibana"
        ports:
        - containerPort: 5601
        resources:
          limits:
            cpu: 800m
            memory: 1Gi
          requests:
            cpu: 800m
            memory: 1Gi
        volumeMounts:
```

Describe the bug:

kubectl describe pod kibana-kibana-7458222222-2222

  Warning  Unhealthy  26m (x5 over 27m)  kubelet, ip-101-10-161-126.us-west-2.compute.internal  Readiness probe failed: Error: Got HTTP code 000 but expected a 200
  Warning  Unhealthy  26m                kubelet, ip-101-10-161-126.us-west-2.compute.internal  Readiness probe failed: Error: Got HTTP code 503 but expected a 200
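The two warnings are not the same failure: curl reports 000 in `%{http_code}` when no HTTP response was received at all (e.g. Kibana not listening yet, connection refused), while 503 is a real response from a Kibana that is up but not yet ready. A minimal sketch of the 000 case, assuming the chosen port is unused:

```shell
# curl prints 000 for %{http_code} when the TCP connection itself fails;
# the port below is an assumption, picked only because nothing listens there.
code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 2 "http://localhost:59999/" || true)
echo "HTTP code: ${code}"
```

So the first few 000 probes are just Kibana still starting; the 503 afterwards means Kibana answered but its status was not yet green.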

Steps to reproduce:

1. helm install --name elasticsearch ./elasticsearch --namespace elk
2. helm install --name kibana ./kibana --namespace elk

Expected behavior:

Provide logs and/or server output (if relevant):

Any additional context:

kudrew commented 4 years ago

I have a similar issue on GKE when exposing the Kibana service (NodePort/LoadBalancer) via Ingress: the health check keeps failing on v7.8.1, presumably because the health check probes "/", which returns a 302, while "/app/kibana" returns a 200.


{"type":"log","@timestamp":"2020-08-17T19:27:00Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2020-08-17T19:27:01Z","tags":["info","http","server","Kibana"],"pid":7,"message":"http server running at http://0.0.0.0:5601"}
...
...
...
{"type":"response","@timestamp":"2020-08-17T20:27:59Z","tags":[],"pid":7,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"host":"10.150.12.81","user-agent":"GoogleHC/1.0","connection":"Keep-alive"},"remoteAddress":"10.150.12.81","userAgent":"10.150.12.81"},"res":{"statusCode":302,"responseTime":20,"contentLength":9},"message":"GET / 302 20ms - 9.0B"}
{"type":"response","@timestamp":"2020-08-17T20:27:59Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/app/kibana","method":"get","headers":{"host":"10.150.12.79","user-agent":"GoogleHC/1.0","connection":"Keep-alive"},"remoteAddress":"10.150.12.79","userAgent":"10.150.12.79"},"res":{"statusCode":200,"responseTime":56,"contentLength":9},"message":"GET /app/kibana 200 56ms - 9.0B"}
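The pass/fail branch of the chart's probe can be reproduced without a cluster. A minimal sketch, where `check_status` is a hypothetical helper mirroring the chart script's error message, showing what a redirect-blind health checker like GoogleHC sees on "/":

```shell
#!/usr/bin/env bash
# check_status mirrors the chart probe's branch: anything but 200 produces
# the exact error string seen in the issue title.
check_status() {
  if [ "$1" -eq 200 ]; then
    echo "ok"
    return 0
  fi
  echo "Error: Got HTTP code $1 but expected a 200"
  return 1
}

check_status 200           # "/app/kibana" answers 200 directly
check_status 302 || true   # "/" answers 302; GoogleHC does not follow redirects
```

Note the chart's own exec probe passes -L to curl (it follows the redirect), which is why the in-pod probe can pass on "/" while an external GCE health check on the same path fails.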
melissajenner22 commented 4 years ago

In helm-charts/elasticsearch/values.yaml, at line 236, I changed / to /app/kibana. (Note: Helm version 2.16.10, Kubernetes version 1.16 (EKS).)

231 ingress:
232   enabled: false
233   annotations: {}
234     # kubernetes.io/ingress.class: nginx
235     # kubernetes.io/tls-acme: "true"
236   path: /app/kibana
237   hosts:
238     - chart-example.local
239   tls: []
240   #  - secretName: chart-example-tls
241   #    hosts:
242   #      - chart-example.local

$ helm install --name elasticsearch ./elasticsearch --namespace elk

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
elasticsearch-master-0   1/1     Running   0          7m46s
elasticsearch-master-1   1/1     Running   0          7m46s
elasticsearch-master-2   1/1     Running   0          7m46s

$ kubectl describe pod elasticsearch-master-0

  Warning  Unhealthy               7m30s (x2 over 7m40s)  kubelet, ip-10-117-56-142.us-west-2.compute.internal  Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )

$ git diff
diff --git a/elasticsearch/values.yaml b/elasticsearch/values.yaml
index 284ea67..2e4afb7 100755
--- a/elasticsearch/values.yaml
+++ b/elasticsearch/values.yaml
@@ -233,7 +233,7 @@ ingress:
   annotations: {}
     # kubernetes.io/ingress.class: nginx
     # kubernetes.io/tls-acme: "true"
-  path: /
+  path: /app/kibana
   hosts:
     - chart-example.local
   tls: []

Which file and line did you mean when you said to change / to /app/kibana?

kubectl logs elasticsearch-master-0

{"type": "server", "timestamp": "2020-08-17T21:45:15,043Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "version[7.8.1], pid[6], build[default/docker/b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89/2020-07-21T16:40:44.668009Z], OS[Linux/4.14.186-146.268.amzn2.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/14.0.1/14.0.1+7]" }
{"type": "server", "timestamp": "2020-08-17T21:45:15,047Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
{"type": "server", "timestamp": "2020-08-17T21:45:15,047Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-1547126154895526433, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xmx1g, -Xms1g, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,449Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [aggs-matrix-stats]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,453Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [analysis-common]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,454Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [constant-keyword]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,454Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [flattened]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,454Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [frozen-indices]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,455Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [ingest-common]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,455Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [ingest-geoip]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,456Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [ingest-user-agent]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,456Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [kibana]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,457Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [lang-expression]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,458Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [lang-mustache]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,458Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [lang-painless]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,458Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [mapper-extras]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,459Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [parent-join]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,459Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [percolator]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,459Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [rank-eval]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,460Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [reindex]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,461Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [repository-url]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,462Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [search-business-rules]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,462Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [searchable-snapshots]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,463Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [spatial]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,463Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [tasks]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,464Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [transform]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,465Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [transport-netty4]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,517Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [vectors]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,517Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-analytics]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,518Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-async-search]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,518Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-autoscaling]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,518Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ccr]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,518Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-core]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,519Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-deprecation]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,519Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-enrich]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,519Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-eql]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,519Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-graph]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,520Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-identity-provider]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,520Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ilm]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,520Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-logstash]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,520Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ml]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,521Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-monitoring]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,522Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ql]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,522Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-rollup]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,522Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-security]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,523Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-sql]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,523Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-voting-only-node]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,524Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,524Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "no plugins loaded" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,725Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/nvme1n1)]], net usable_space [29.3gb], net total_space [29.4gb], types [ext4]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,725Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "heap size [1gb], compressed ordinary object pointers [true]" }
{"type": "server", "timestamp": "2020-08-17T21:45:20,922Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "node name [elasticsearch-master-0], node ID [wXizytAARfWaqfrJOIY7uw], cluster name [elasticsearch]" }
{"type": "server", "timestamp": "2020-08-17T21:45:30,673Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
{"type": "server", "timestamp": "2020-08-17T21:45:32,231Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "[controller/167] [Main.cc@115] controller (64 bit): Version 7.8.1 (Build d0d3f60f03220d) Copyright (c) 2020 Elasticsearch BV" }
{"type": "server", "timestamp": "2020-08-17T21:45:33,719Z", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
{"type": "server", "timestamp": "2020-08-17T21:45:33,929Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "using discovery type [zen] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2020-08-17T21:45:36,268Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "initialized" }
{"type": "server", "timestamp": "2020-08-17T21:45:36,317Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "starting ..." }
{"type": "server", "timestamp": "2020-08-17T21:45:36,661Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "publish_address {101.17.60.223:9300}, bound_addresses {0.0.0.0:9300}" }
{"type": "server", "timestamp": "2020-08-17T21:45:37,273Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
{"type": "server", "timestamp": "2020-08-17T21:45:37,335Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "cluster UUID [ladQZP9eTVOdHbKLtagGZw]" }
{"type": "server", "timestamp": "2020-08-17T21:45:38,167Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "master node changed {previous [], current [{elasticsearch-master-1}{SIdjHeatT7e3GfM7SNA2mg}{HD17VDMJR7mBO8S8ZHAHKQ}{101.17.97.191}{101.17.97.191:9300}{dilmrt}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]}, added {{elasticsearch-master-2}{9T1V5hqXT6ubRtvhAazbwg}{lrwWM3dRRiutxlLBaWledg}{101.17.11.44}{101.17.11.44:9300}{dilmrt}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, transform.node=true},{elasticsearch-master-1}{SIdjHeatT7e3GfM7SNA2mg}{HD17VDMJR7mBO8S8ZHAHKQ}{101.17.97.191}{101.17.97.191:9300}{dilmrt}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}, term: 5, version: 43, reason: ApplyCommitRequest{term=5, version=43, sourceNode={elasticsearch-master-1}{SIdjHeatT7e3GfM7SNA2mg}{HD17VDMJR7mBO8S8ZHAHKQ}{101.17.97.191}{101.17.97.191:9300}{dilmrt}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}" }
{"type": "server", "timestamp": "2020-08-17T21:45:38,632Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "license [c7d92589-5c9d-4440-8021-ecfea0058f0f] mode [basic] - valid" }
{"type": "server", "timestamp": "2020-08-17T21:45:38,633Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "Active license is now [BASIC]; Security is disabled" }
{"type": "server", "timestamp": "2020-08-17T21:45:38,666Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "publish_address {101.17.60.223:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "ladQZP9eTVOdHbKLtagGZw", "node.id": "wXizytAARfWaqfrJOIY7uw"  }
{"type": "server", "timestamp": "2020-08-17T21:45:38,719Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "started", "cluster.uuid": "ladQZP9eTVOdHbKLtagGZw", "node.id": "wXizytAARfWaqfrJOIY7uw"  }
{"type": "server", "timestamp": "2020-08-17T21:45:38,741Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "refresh keys", "cluster.uuid": "ladQZP9eTVOdHbKLtagGZw", "node.id": "wXizytAARfWaqfrJOIY7uw"  }
{"type": "server", "timestamp": "2020-08-17T21:45:39,163Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "refreshed keys", "cluster.uuid": "ladQZP9eTVOdHbKLtagGZw", "node.id": "wXizytAARfWaqfrJOIY7uw"  }
Ankitchandre commented 4 years ago

You can change the value of healthCheckPath from "/app/kibana" to "/api/status" in the kibana chart's values.yaml. This should fix the readiness probe error.
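For reference, this setting lives in the kibana chart's values.yaml, not in elasticsearch/values.yaml (whose ingress path is unrelated to the probe). A minimal override fragment, assuming chart defaults everywhere else:

```yaml
# kibana/values.yaml -- the path the chart's readiness script requests on
# http(s)://localhost:5601; the chart's default is /app/kibana
healthCheckPath: "/api/status"
```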

abdennour commented 3 years ago

But if I have to change healthCheckPath to a new value, why isn't that the default? Please check the best practices implemented in some Helm chart repos (e.g. Bitnami) and mimic them. Overall, good job! Thanks!

botelastic[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

botelastic[bot] commented 3 years ago

This issue has been automatically closed because it has not had recent activity since being marked as stale.

shmelkin commented 2 years ago

This apparently still does not work properly, or perhaps fails for different reasons.

kubectl version

Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:24:08Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

kibana values.yaml

elasticsearchHosts: "https://redacted"

extraEnvs:
  - name: "NODE_OPTIONS"
    value: "--max-old-space-size=1800"
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: 'KIBANA_ENCRYPTION_KEY'
    valueFrom:
      secretKeyRef:
        name: kibana
        key: encryptionkey

secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/kibana/config/certs-gen/

kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs-gen/privkey2.pem
      certificate: /usr/share/kibana/config/certs-gen/fullchain2.pem
    xpack.reporting.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.encryptedSavedObjects.encryptionKey: ${KIBANA_ENCRYPTION_KEY}

protocol: https

service:
  type: NodePort
  loadBalancerIP: ""
  port: 5601
  nodePort: 30002
  labels: {}
  annotations: {}
  loadBalancerSourceRanges: []
  httpPortName: http

healthCheckPath: /api/status # also checked /app/kibana and default
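One way to see what the rendered probe will actually send: its argument-building logic is plain shell and runs anywhere. A minimal sketch (`build_probe_args` is a hypothetical name; its body matches the chart's script), showing that basic-auth is attached only when both ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD are set:

```shell
#!/usr/bin/env bash
# Mirrors how the chart's readiness script assembles its curl arguments:
# the -u credential flag is added only when both variables are non-empty.
build_probe_args() {
  set -- -XGET -s --fail -L
  if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
    set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
  fi
  echo "$@"
}

unset ELASTICSEARCH_USERNAME ELASTICSEARCH_PASSWORD
build_probe_args    # no credentials in the env: plain GET

ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=changeme
build_probe_args    # both set: curl also gets -u elastic:changeme
```

If the probe still fails here, it can help to run the same curl by hand inside the pod against https://localhost:5601/api/status and look at the raw status code and body, since the probe script discards the body.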

kubectl get pv,pvc,nodes,pods,svc

NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS   REASON   AGE
persistentvolume/elk-data   30Gi       RWO            Retain           Bound    default/elasticsearch-master-elasticsearch-master-0                           51m

NAME                                                                STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Bound    elk-data   30Gi       RWO                           51m

NAME               STATUS   ROLES                  AGE   VERSION
node/disposable1   Ready    control-plane,master   54m   v1.23.3

NAME                                    READY   STATUS    RESTARTS   AGE
pod/elasticsearch-master-0              1/1     Running   0          34m
pod/kibana-kibana-79544d8d54-x4smn      0/1     Running   0          67s
pod/nginx-deployment-55784d5d88-mc4tt   1/1     Running   0          51m

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/elasticsearch-master            NodePort    10.97.220.72    <none>        9200:30001/TCP,9300:30786/TCP   34m
service/elasticsearch-master-headless   ClusterIP   None            <none>        9200/TCP,9300/TCP               34m
service/kibana-kibana                   NodePort    10.96.230.182   <none>        5601:30002/TCP                  67s
service/kubernetes                      ClusterIP   10.96.0.1       <none>        443/TCP                         54m
service/nginx-service                   NodePort    10.108.60.203   <none>        80:30000/TCP                    51m

kubectl describe pod/kibana-kibana-79544d8d54-x4smn

Name:         kibana-kibana-79544d8d54-x4smn
Namespace:    default
Priority:     0
Node:         disposable1/redacted
Start Time:   Thu, 17 Feb 2022 10:46:50 +0100
Labels:       app=kibana
              pod-template-hash=79544d8d54
              release=kibana
Annotations:  cni.projectcalico.org/containerID: f5011b7ee549f8b4983e09735bae0fad6584c662e14b558df5cd1bc6ce064839
              cni.projectcalico.org/podIP: 192.168.47.16/32
              cni.projectcalico.org/podIPs: 192.168.47.16/32
              configchecksum: 7ce114df53c5a41b1c4386587d8c9a3b5aebf96f5137051574760a6a72d488e
Status:       Running
IP:           192.168.47.16
IPs:
  IP:           192.168.47.16
Controlled By:  ReplicaSet/kibana-kibana-79544d8d54
Containers:
  kibana:
    Container ID:   containerd://5130c671a35f1f8fbcd1ccd06a6d8a4ae9c047c29e42ee883b246940118b1179
    Image:          docker.elastic.co/kibana/kibana:7.16.3
    Image ID:       docker.elastic.co/kibana/kibana@sha256:6c9867bd8e91737db8fa73ca6f522b2836ed1300bcc31dee96e62dc1e6413191
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 17 Feb 2022 10:46:51 +0100
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Readiness:  exec [sh -c #!/usr/bin/env bash -e

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Kibana Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
    local path="${1}"
    set -- -XGET -s --fail -L

    if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
      set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
    fi

    STATUS=$(curl --output /dev/null --write-out "%{http_code}" -k "$@" "https://localhost:5601${path}")
    if [[ "${STATUS}" -eq 200 ]]; then
      exit 0
    fi

    echo "Error: Got HTTP code ${STATUS} but expected a 200"
    exit 1
}

http "/api/status" # also checked /app/kibana and default
] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      ELASTICSEARCH_HOSTS:     https://redacted:30001
      SERVER_HOST:             0.0.0.0
      NODE_OPTIONS:            --max-old-space-size=1800
      ELASTICSEARCH_USERNAME:  <set to the key 'username' in secret 'elastic-credentials'>  Optional: false
      ELASTICSEARCH_PASSWORD:  <set to the key 'password' in secret 'elastic-credentials'>  Optional: false
      KIBANA_ENCRYPTION_KEY:   <set to the key 'encryptionkey' in secret 'kibana'>          Optional: false
    Mounts:
      /usr/share/kibana/config/certs-gen/ from elastic-certificates (rw)
      /usr/share/kibana/config/kibana.yml from kibanaconfig (rw,path="kibana.yml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6p92j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  elastic-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-certificates
    Optional:    false
  kibanaconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kibana-kibana-config
    Optional:  false
  kube-api-access-6p92j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  112s               default-scheduler  Successfully assigned default/kibana-kibana-79544d8d54-x4smn to disposable1
  Normal   Pulled     111s               kubelet            Container image "docker.elastic.co/kibana/kibana:7.16.3" already present on machine
  Normal   Created    111s               kubelet            Created container kibana
  Normal   Started    111s               kubelet            Started container kibana
  Warning  Unhealthy  2s (x11 over 92s)  kubelet            Readiness probe failed: Error: Got HTTP code 503 but expected a 200
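
The failing check above reduces to: GET the health path and require HTTP 200; anything else (Kibana answers 503 until it is fully up, e.g. while it cannot reach Elasticsearch) fails the probe. A minimal, self-contained sketch of that logic, stubbing Kibana with a local server that always answers 503 the way an unready instance does (the server and `probe` helper here are illustrative, not part of the chart):

```python
# Sketch of the readiness probe's core logic: GET the health path,
# succeed only on HTTP 200. A local stub server plays the role of an
# unready Kibana by always returning 503.
import http.server
import threading
import urllib.error
import urllib.request


class UnreadyKibanaStub(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)  # Kibana returns 503 until it is ready
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the stub quiet


server = http.server.HTTPServer(("127.0.0.1", 0), UnreadyKibanaStub)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]


def probe(path: str) -> int:
    """Mimic the chart's probe: exit 0 on HTTP 200, exit 1 otherwise."""
    try:
        status = urllib.request.urlopen(f"http://127.0.0.1:{port}{path}").status
    except urllib.error.HTTPError as err:
        status = err.code  # non-2xx raises; the status code is on the error
    if status == 200:
        return 0
    print(f"Error: Got HTTP code {status} but expected a 200")
    return 1


rc = probe("/api/status")
server.shutdown()
```

Against the stub this prints the same message the kubelet reports (`Error: Got HTTP code 503 but expected a 200`), which is why changing `healthCheckPath` alone does not help when Kibana itself is unhealthy: every path returns 503 until Kibana can talk to Elasticsearch.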

kubectl describe pod/elasticsearch-master-0

Name:         elasticsearch-master-0
Namespace:    default
Priority:     0
Node:         disposable1/redacted
Start Time:   Thu, 17 Feb 2022 10:13:08 +0100
Labels:       app=elasticsearch-master
              chart=elasticsearch
              controller-revision-hash=elasticsearch-master-75677f4c46
              release=elasticsearch
              statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations:  cni.projectcalico.org/containerID: ab8958d4440b27eb0948c90b3697fbb95f20faf8a3bc20969ce988f5b9e3408c
              cni.projectcalico.org/podIP: 192.168.47.13/32
              cni.projectcalico.org/podIPs: 192.168.47.13/32
              configchecksum: 490c089a5be33d334507cb4fe55645f1b2bbae7a8167caf4a57710ff4a85fc2
Status:       Running
IP:           192.168.47.13
IPs:
  IP:           192.168.47.13
Controlled By:  StatefulSet/elasticsearch-master
Init Containers:
  configure-sysctl:
    Container ID:  containerd://04b549844c8198b1ee87504fbfae2f33725320af56902a640652198248dcc5b8
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:0efc3a054ae97ad00cccc33b9ef79ec022970b2a9949893db4ef199edcdca2ce
    Port:          <none>
    Host Port:     <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 Feb 2022 10:13:09 +0100
      Finished:     Thu, 17 Feb 2022 10:13:09 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5qlm (ro)
Containers:
  elasticsearch:
    Container ID:   containerd://d13da1566f45f7806a0c04c14c5ed7548a8550aa491967124d03b4bc4e61d8b0
    Image:          docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    Image ID:       docker.elastic.co/elasticsearch/elasticsearch@sha256:0efc3a054ae97ad00cccc33b9ef79ec022970b2a9949893db4ef199edcdca2ce
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 17 Feb 2022 10:13:10 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Readiness:  exec [bash -c set -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=yellow&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
  local path="${1}"
  local args="${2}"
  set -- -XGET -s

  if [ "$args" != "" ]; then
    set -- "$@" $args
  fi

  if [ -n "${ELASTIC_PASSWORD}" ]; then
    set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
  fi

  curl --output /dev/null -k "$@" "https://127.0.0.1:9200${path}"
}

if [ -f "${START_FILE}" ]; then
  echo 'Elasticsearch is already running, lets check the node is healthy'
  HTTP_CODE=$(http "/" "-w %{http_code}")
  RC=$?
  if [[ ${RC} -ne 0 ]]; then
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} https://127.0.0.1:9200/ failed with RC ${RC}"
    exit ${RC}
  fi
  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
  if [[ ${HTTP_CODE} == "200" ]]; then
    exit 0
  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
    exit 0
  else
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} https://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
    exit 1
  fi

else
  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=yellow&timeout=1s" )'
  if http "/_cluster/health?wait_for_status=yellow&timeout=1s" "--fail" ; then
    touch ${START_FILE}
    exit 0
  else
    echo 'Cluster is not yet ready (request params: "wait_for_status=yellow&timeout=1s" )'
    exit 1
  fi
fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      node.name:                             elasticsearch-master-0 (v1:metadata.name)
      cluster.initial_master_nodes:          elasticsearch-master-0,
      discovery.seed_hosts:                  elasticsearch-master-headless
      cluster.name:                          elasticsearch
      network.host:                          0.0.0.0
      cluster.deprecation_indexing.enabled:  false
      node.data:                             true
      node.ingest:                           true
      node.master:                           true
      node.ml:                               true
      node.remote_cluster_client:            true
      ELASTIC_PASSWORD:                      <set to the key 'password' in secret 'elastic-credentials'>  Optional: false
      ELASTIC_USERNAME:                      <set to the key 'username' in secret 'elastic-credentials'>  Optional: false
    Mounts:
      /usr/share/elasticsearch/config/certs-gen/ from elastic-certificates (rw)
      /usr/share/elasticsearch/config/elasticsearch.yml from esconfig (rw,path="elasticsearch.yml")
      /usr/share/elasticsearch/data from elasticsearch-master (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5qlm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  elasticsearch-master:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-master-elasticsearch-master-0
    ReadOnly:   false
  elastic-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-certificates
    Optional:    false
  esconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      elasticsearch-master-config
    Optional:  false
  kube-api-access-x5qlm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  43m                default-scheduler  Successfully assigned default/elasticsearch-master-0 to disposable1
  Normal   Pulled     43m                kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.16.3" already present on machine
  Normal   Created    43m                kubelet            Created container configure-sysctl
  Normal   Started    43m                kubelet            Started container configure-sysctl
  Normal   Pulled     43m                kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.16.3" already present on machine
  Normal   Created    43m                kubelet            Created container elasticsearch
  Normal   Started    43m                kubelet            Started container elasticsearch
  Warning  Unhealthy  43m (x2 over 43m)  kubelet            Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=yellow&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=yellow&timeout=1s" )
lkamal commented 2 years ago

But if I have to change healthCheckPath to a new value, why is it not the default? Please check the best practices implemented in some Helm Git repos (e.g. Bitnami) and mimic them. Overall, good job, thanks!

Not resolved.

Even on v7.17.3 this is not fixed (I tried both suggested values, "/api/status" and "/app/kibana").
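
For reference, the override being discussed is a chart value; a sketch of the relevant `values.yaml` fragment, using the keys shown in the computed values above (adjust `protocol` to match how Kibana is actually serving):

```yaml
# values.yaml override (sketch) -- switch the probe path from the chart
# default to Kibana's status API. Keys are from this chart's values.
healthCheckPath: "/api/status"
protocol: https
```

Applied with something like `helm upgrade kibana elastic/kibana -f values.yaml`. Note that if Kibana returns 503 on every path (typically because it cannot reach Elasticsearch), changing the path will not make the probe pass.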

spoonwep commented 2 years ago

Same issue; has anyone found a solution?

grigoryevandrey commented 1 year ago

Bump, same issue

ranferimeza commented 1 year ago

Bump, same issue

grigoryevandrey commented 1 year ago

Bump, same issue

Try checking the Kibana logs to see if you get the error that I described in this issue.

ranferimeza commented 1 year ago

@grigoryevandrey, thank you for the suggestion, but this is definitely a different thing. No similar error found in my Kibana logs. Thanks again!