elastic / cloud-on-k8s

Elastic Cloud on Kubernetes

Logstash recipe fails in MicroK8s #4712

Closed RobertDiebels closed 2 years ago

RobertDiebels commented 3 years ago

What did you do?

  1. I installed ECK using Helm 3.
  2. I applied the Logstash recipe.
  3. Both the Kibana and Filebeat pods failed to run.

What did you expect to see? I expected the recipe to start without issues.

What did you see instead? Under which circumstances?

  1. Filebeat had init problems. (I resolved these by using local log files instead of those provided by the init container.)
  2. Kibana's readinessProbe failed.

Environment

See logstash recipe.

thbkrkr commented 3 years ago

Did you enable the dns and storage add-ons?

This works fine for me on Ubuntu 20.04.2 LTS:

> sudo snap install microk8s --classic
> microk8s enable dns
> microk8s enable storage
> microk8s enable helm3
> microk8s helm3 repo add elastic https://helm.elastic.co
> microk8s helm3 repo update
> microk8s helm3 install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
> microk8s kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/master/config/recipes/logstash/logstash.yaml
> microk8s kubectl get elastic,po
NAME                                                       HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch   green    3       7.13.4    Ready   3m9s

NAME                                  HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/kibana   green    1       7.13.4    3m8s

NAME                                HEALTH   AVAILABLE   EXPECTED   TYPE       VERSION   AGE
beat.beat.k8s.elastic.co/filebeat   green    1           1          filebeat   7.13.4    3m7s

NAME                                          READY   STATUS    RESTARTS   AGE
pod/logstash-6f9f86fd95-kdrxj                 1/1     Running   0          2m53s
pod/filebeat-beat-filebeat-7d4b56b99b-8zfjz   1/1     Running   0          2m53s
pod/elasticsearch-es-default-0                1/1     Running   0          2m52s
pod/elasticsearch-es-default-2                1/1     Running   0          2m52s
pod/elasticsearch-es-default-1                1/1     Running   0          2m52s
pod/kibana-kb-6594666476-d6jjw                1/1     Running   0          2m51s

Otherwise, to debug the issue, the events shown at the end of the kubectl describe output often help explain why the resources are not being deployed. Could you share the output of the following commands?

microk8s kubectl get elastic,po
microk8s kubectl describe es | grep -A99 Events:
microk8s kubectl describe beats | grep -A99 Events:
microk8s kubectl describe kb | grep -A99 Events:
microk8s kubectl describe po | grep -A99 Events:
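
An equivalent way to list the same events chronologically in a single command, assuming a reasonably recent kubectl, is:

microk8s kubectl get events --sort-by='.lastTimestamp'
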
RobertDiebels commented 3 years ago

@thbkrkr I did, as well as the helm3 add-on.

thbkrkr commented 3 years ago

I checked again and still see no issues with the operator installed via Helm 3.

Could you share the output of the commands I wrote above?

RobertDiebels commented 3 years ago

@thbkrkr I followed your commands (which install 1.7.0) on a fresh VM and encountered no issues there. However, the issue persists on the VM where I attempted the installation previously (which uses 1.6.0).

After removing MicroK8s, I executed the following commands:

snap install microk8s --classic --channel=1.21/stable
microk8s enable dns
microk8s enable storage
microk8s enable helm3
microk8s helm3 repo add elastic https://helm.elastic.co
microk8s helm3 repo update
microk8s helm3 install elastic-operator elastic/eck-operator -n elastic-system --create-namespace --version 1.6.0
microk8s kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/1.6/config/recipes/logstash/logstash.yaml

The requested logs read as follows:

robert@robert-VirtualBox:~$ microk8s kubectl get pods
I0808 17:19:33.477235   19125 request.go:668] Waited for 1.172394497s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/maps.k8s.elastic.co/v1alpha1?timeout=32s
NAME                                      READY   STATUS                  RESTARTS   AGE
logstash-66fb76f688-jn9s5                 1/1     Running                 0          7m31s
elasticsearch-es-default-0                1/1     Running                 0          6m29s
elasticsearch-es-default-2                1/1     Running                 0          6m29s
elasticsearch-es-default-1                1/1     Running                 0          6m29s
kibana-kb-765596cdbf-8kzql                0/1     Running                 1          6m28s
filebeat-beat-filebeat-7c96fc78d6-ndvm8   0/1     Init:CrashLoopBackOff   4          6m32s
robert@robert-VirtualBox:~$ 
robert@robert-VirtualBox:~$ microk8s kubectl get elastic,po
I0808 17:19:56.898202   20103 request.go:668] Waited for 1.17090837s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/autoscaling/v1?timeout=32s
I0808 17:20:06.902743   20103 request.go:668] Waited for 8.395659212s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/apm.k8s.elastic.co/v1beta1?timeout=32s
^C
robert@robert-VirtualBox:~$ microk8s kubectl describe es | grep -A99 Events:
I0808 17:20:38.295736   21923 request.go:668] Waited for 1.183644717s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/apm.k8s.elastic.co/v1?timeout=32s
Events:
  Type     Reason      Age    From                      Message
  ----     ------      ----   ----                      -------
  Warning  Unexpected  2m40s  elasticsearch-controller  Could not update cluster license: while getting current license level Get "https://elasticsearch-es-http.default.svc:9200/_license": dial tcp 10.152.183.66:9200: connect: connection refused
  Warning  Unexpected  2m39s  elasticsearch-controller  Could not update cluster license: while getting current license level 404 Not Found:
  Warning  Unhealthy   2m33s  elasticsearch-controller  Elasticsearch cluster health degraded
robert@robert-VirtualBox:~$ microk8s kubectl describe beats | grep -A99 Events:
I0808 17:20:40.305026   22005 request.go:668] Waited for 1.167330727s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
Events:            <none>
robert@robert-VirtualBox:~$ microk8s kubectl describe kb | grep -A99 Events:
I0808 17:20:41.740504   22118 request.go:668] Waited for 1.181468463s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/authentication.k8s.io/v1beta1?timeout=32s
Events:
  Type     Reason                   Age                    From                          Message
  ----     ------                   ----                   ----                          -------
  Normal   AssociationStatusChange  7m40s                  kb-es-association-controller  Association status changed from [] to [Pending]
  Warning  AssociationError         7m40s (x4 over 7m41s)  kibana-controller             Association backend for elasticsearch is not configured
  Normal   AssociationStatusChange  7m40s                  kb-es-association-controller  Association status changed from [Pending] to [Established]
robert@robert-VirtualBox:~$ microk8s kubectl describe po | grep -A99 Events:
I0808 17:20:52.740649   22586 request.go:668] Waited for 1.171952312s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/maps.k8s.elastic.co/v1alpha1?timeout=32s
Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    8m49s                  default-scheduler  Successfully assigned default/logstash-66fb76f688-jn9s5 to robert-virtualbox
  Warning  FailedMount  8m18s (x7 over 8m49s)  kubelet            MountVolume.SetUp failed for volume "ca-certs" : secret "elasticsearch-es-http-certs-public" not found
  Normal   Pulling      7m43s                  kubelet            Pulling image "docker.elastic.co/logstash/logstash:7.12.0"
  Normal   Pulled       4m54s                  kubelet            Successfully pulled image "docker.elastic.co/logstash/logstash:7.12.0" in 2m49.029505438s
  Normal   Created      4m52s                  kubelet            Created container logstash
  Normal   Started      4m51s                  kubelet            Started container logstash

Name:         elasticsearch-es-default-0
Namespace:    default
Priority:     0
Node:         robert-virtualbox/10.0.2.15
Start Time:   Sun, 08 Aug 2021 17:13:06 +0200
Labels:       common.k8s.elastic.co/type=elasticsearch
              controller-revision-hash=elasticsearch-es-default-696c47f6cd
              elasticsearch.k8s.elastic.co/cluster-name=elasticsearch
              elasticsearch.k8s.elastic.co/config-hash=3585688245
              elasticsearch.k8s.elastic.co/http-scheme=https
              elasticsearch.k8s.elastic.co/node-data=true
              elasticsearch.k8s.elastic.co/node-data_cold=true
              elasticsearch.k8s.elastic.co/node-data_content=true
              elasticsearch.k8s.elastic.co/node-data_hot=true
              elasticsearch.k8s.elastic.co/node-data_warm=true
              elasticsearch.k8s.elastic.co/node-ingest=true
              elasticsearch.k8s.elastic.co/node-master=true
              elasticsearch.k8s.elastic.co/node-ml=true
              elasticsearch.k8s.elastic.co/node-remote_cluster_client=true
              elasticsearch.k8s.elastic.co/node-transform=true
              elasticsearch.k8s.elastic.co/node-voting_only=false
              elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-es-default
              elasticsearch.k8s.elastic.co/version=7.12.0
              statefulset.kubernetes.io/pod-name=elasticsearch-es-default-0
Annotations:  cni.projectcalico.org/podIP: 10.1.100.73/32
              cni.projectcalico.org/podIPs: 10.1.100.73/32
              co.elastic.logs/module: elasticsearch
              update.k8s.elastic.co/timestamp: 2021-08-08T15:16:57.658600661Z
Status:       Running
IP:           10.1.100.73
IPs:
  IP:           10.1.100.73
Controlled By:  StatefulSet/elasticsearch-es-default
Init Containers:
  elastic-internal-init-filesystem:
    Container ID:  containerd://4d592f593ec90d1c7f7a01950cb5e4e0529a493090d7ac79973bc9d8e99ce7d0
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:4999c5f75c1d0d69754902d3975dd36875cc2eb4a06d7fdceaa8ec0e71a81dfa
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
      /mnt/elastic-internal/scripts/prepare-fs.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 08 Aug 2021 17:16:54 +0200
      Finished:     Sun, 08 Aug 2021 17:16:56 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:                  (v1:status.podIP)
      POD_NAME:               elasticsearch-es-default-0 (v1:metadata.name)
      NODE_NAME:               (v1:spec.nodeName)
      NAMESPACE:              default (v1:metadata.namespace)
      HEADLESS_SERVICE_NAME:  elasticsearch-es-default
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
      /mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
Containers:
  elasticsearch:
    Container ID:   containerd://6b69238edcc607446b6b2fd4c3fa655e7e77c7931d56ab59fc20977c23cc9f20
    Image:          docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    Image ID:       docker.elastic.co/elasticsearch/elasticsearch@sha256:4999c5f75c1d0d69754902d3975dd36875cc2eb4a06d7fdceaa8ec0e71a81dfa
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Sun, 08 Aug 2021 17:16:57 +0200
    Ready:          True
    Restart Count:  0
--
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  7m48s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         7m46s  default-scheduler  Successfully assigned default/elasticsearch-es-default-0 to robert-virtualbox
  Normal   Pulling           7m43s  kubelet            Pulling image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0"
  Normal   Pulled            4m     kubelet            Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0" in 3m42.845246621s
  Normal   Created           3m59s  kubelet            Created container elastic-internal-init-filesystem
  Normal   Started           3m58s  kubelet            Started container elastic-internal-init-filesystem
  Normal   Pulled            3m55s  kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0" already present on machine
  Normal   Created           3m55s  kubelet            Created container elasticsearch
  Normal   Started           3m55s  kubelet            Started container elasticsearch
  Warning  Unhealthy         3m41s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:11+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m36s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:16+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m31s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:21+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m26s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:26+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m21s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:31+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m16s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:36+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m11s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:41+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m6s   kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:46+00:00", "message": "readiness probe failed", "curl_rc": "7"}

Name:         elasticsearch-es-default-2
Namespace:    default
Priority:     0
Node:         robert-virtualbox/10.0.2.15
Start Time:   Sun, 08 Aug 2021 17:13:06 +0200
Labels:       common.k8s.elastic.co/type=elasticsearch
              controller-revision-hash=elasticsearch-es-default-696c47f6cd
              elasticsearch.k8s.elastic.co/cluster-name=elasticsearch
              elasticsearch.k8s.elastic.co/config-hash=3585688245
              elasticsearch.k8s.elastic.co/http-scheme=https
              elasticsearch.k8s.elastic.co/node-data=true
              elasticsearch.k8s.elastic.co/node-data_cold=true
              elasticsearch.k8s.elastic.co/node-data_content=true
              elasticsearch.k8s.elastic.co/node-data_hot=true
              elasticsearch.k8s.elastic.co/node-data_warm=true
              elasticsearch.k8s.elastic.co/node-ingest=true
              elasticsearch.k8s.elastic.co/node-master=true
              elasticsearch.k8s.elastic.co/node-ml=true
              elasticsearch.k8s.elastic.co/node-remote_cluster_client=true
              elasticsearch.k8s.elastic.co/node-transform=true
              elasticsearch.k8s.elastic.co/node-voting_only=false
              elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-es-default
              elasticsearch.k8s.elastic.co/version=7.12.0
              statefulset.kubernetes.io/pod-name=elasticsearch-es-default-2
Annotations:  cni.projectcalico.org/podIP: 10.1.100.74/32
              cni.projectcalico.org/podIPs: 10.1.100.74/32
              co.elastic.logs/module: elasticsearch
              update.k8s.elastic.co/timestamp: 2021-08-08T15:16:58.294411428Z
Status:       Running
IP:           10.1.100.74
IPs:
  IP:           10.1.100.74
Controlled By:  StatefulSet/elasticsearch-es-default
Init Containers:
  elastic-internal-init-filesystem:
    Container ID:  containerd://a646b1960aa790b8c93c1e11447444a3516bc86580d10293d7af69b342a320d3
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:4999c5f75c1d0d69754902d3975dd36875cc2eb4a06d7fdceaa8ec0e71a81dfa
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
      /mnt/elastic-internal/scripts/prepare-fs.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 08 Aug 2021 17:16:54 +0200
      Finished:     Sun, 08 Aug 2021 17:16:57 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:                  (v1:status.podIP)
      POD_NAME:               elasticsearch-es-default-2 (v1:metadata.name)
      NODE_NAME:               (v1:spec.nodeName)
      NAMESPACE:              default (v1:metadata.namespace)
      HEADLESS_SERVICE_NAME:  elasticsearch-es-default
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
      /mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
--
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  7m48s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         7m46s  default-scheduler  Successfully assigned default/elasticsearch-es-default-2 to robert-virtualbox
  Normal   Pulling           7m43s  kubelet            Pulling image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0"
  Normal   Pulled            3m58s  kubelet            Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0" in 3m44.292044423s
  Normal   Created           3m58s  kubelet            Created container elastic-internal-init-filesystem
  Normal   Started           3m58s  kubelet            Started container elastic-internal-init-filesystem
  Normal   Pulled            3m54s  kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0" already present on machine
  Normal   Created           3m54s  kubelet            Created container elasticsearch
  Normal   Started           3m54s  kubelet            Started container elasticsearch
  Warning  Unhealthy         3m40s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:12+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m36s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:16+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m31s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:21+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m26s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:26+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m21s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:31+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m16s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:36+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m11s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:41+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m6s   kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:46+00:00", "message": "readiness probe failed", "curl_rc": "7"}

Name:         elasticsearch-es-default-1
Namespace:    default
Priority:     0
Node:         robert-virtualbox/10.0.2.15
Start Time:   Sun, 08 Aug 2021 17:13:08 +0200
Labels:       common.k8s.elastic.co/type=elasticsearch
              controller-revision-hash=elasticsearch-es-default-696c47f6cd
              elasticsearch.k8s.elastic.co/cluster-name=elasticsearch
              elasticsearch.k8s.elastic.co/config-hash=3585688245
              elasticsearch.k8s.elastic.co/http-scheme=https
              elasticsearch.k8s.elastic.co/node-data=true
              elasticsearch.k8s.elastic.co/node-data_cold=true
              elasticsearch.k8s.elastic.co/node-data_content=true
              elasticsearch.k8s.elastic.co/node-data_hot=true
              elasticsearch.k8s.elastic.co/node-data_warm=true
              elasticsearch.k8s.elastic.co/node-ingest=true
              elasticsearch.k8s.elastic.co/node-master=true
              elasticsearch.k8s.elastic.co/node-ml=true
              elasticsearch.k8s.elastic.co/node-remote_cluster_client=true
              elasticsearch.k8s.elastic.co/node-transform=true
              elasticsearch.k8s.elastic.co/node-voting_only=false
              elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-es-default
              elasticsearch.k8s.elastic.co/version=7.12.0
              statefulset.kubernetes.io/pod-name=elasticsearch-es-default-1
Annotations:  cni.projectcalico.org/podIP: 10.1.100.75/32
              cni.projectcalico.org/podIPs: 10.1.100.75/32
              co.elastic.logs/module: elasticsearch
              update.k8s.elastic.co/timestamp: 2021-08-08T15:16:58.246966643Z
Status:       Running
IP:           10.1.100.75
IPs:
  IP:           10.1.100.75
Controlled By:  StatefulSet/elasticsearch-es-default
Init Containers:
  elastic-internal-init-filesystem:
    Container ID:  containerd://d40222dfb77e0df2f6fe14c4a3b20887466934f830da403514148efab9c93c4c
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:4999c5f75c1d0d69754902d3975dd36875cc2eb4a06d7fdceaa8ec0e71a81dfa
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
      /mnt/elastic-internal/scripts/prepare-fs.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 08 Aug 2021 17:16:55 +0200
      Finished:     Sun, 08 Aug 2021 17:16:58 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:                  (v1:status.podIP)
      POD_NAME:               elasticsearch-es-default-1 (v1:metadata.name)
      NODE_NAME:               (v1:spec.nodeName)
      NAMESPACE:              default (v1:metadata.namespace)
      HEADLESS_SERVICE_NAME:  elasticsearch-es-default
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
      /mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
--
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  7m48s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  7m46s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         7m44s  default-scheduler  Successfully assigned default/elasticsearch-es-default-1 to robert-virtualbox
  Normal   Pulling           7m42s  kubelet            Pulling image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0"
  Normal   Pulled            3m57s  kubelet            Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0" in 3m45.174768331s
  Normal   Created           3m57s  kubelet            Created container elastic-internal-init-filesystem
  Normal   Started           3m57s  kubelet            Started container elastic-internal-init-filesystem
  Normal   Pulled            3m53s  kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.12.0" already present on machine
  Normal   Created           3m53s  kubelet            Created container elasticsearch
  Normal   Started           3m53s  kubelet            Started container elasticsearch
  Warning  Unhealthy         3m38s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:14+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m33s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:19+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m28s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:24+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m23s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:29+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m18s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:34+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m13s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:39+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m8s   kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:44+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         3m3s   kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:49+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Warning  Unhealthy         2m58s  kubelet            Readiness probe failed: {"timestamp": "2021-08-08T15:17:54+00:00", "message": "readiness probe failed", "curl_rc": "7"}

Name:         kibana-kb-765596cdbf-8kzql
Namespace:    default
Priority:     0
Node:         robert-virtualbox/10.0.2.15
Start Time:   Sun, 08 Aug 2021 17:13:05 +0200
Labels:       common.k8s.elastic.co/type=kibana
              kibana.k8s.elastic.co/config-checksum=13369c837cc51987420092f4591c0da9170edb2035ff30a6ea7b3b84
              kibana.k8s.elastic.co/name=kibana
              kibana.k8s.elastic.co/version=7.12.0
              pod-template-hash=765596cdbf
Annotations:  cni.projectcalico.org/podIP: 10.1.100.71/32
              cni.projectcalico.org/podIPs: 10.1.100.71/32
              co.elastic.logs/module: kibana
Status:       Running
IP:           10.1.100.71
IPs:
  IP:           10.1.100.71
Controlled By:  ReplicaSet/kibana-kb-765596cdbf
Init Containers:
  elastic-internal-init-config:
    Container ID:  containerd://491822c71c8977322b26f2d945406b49efcca69a978881ad225ae85836462c0b
    Image:         docker.elastic.co/kibana/kibana:7.12.0
    Image ID:      docker.elastic.co/kibana/kibana@sha256:f002ce2456e37a45507e542a4791d87a0bedf7d7ec468d6a7aef85cd233eecc9
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/env
      bash
      -c
      #!/usr/bin/env bash
      set -eux

      init_config_initialized_flag=/mnt/elastic-internal/kibana-config-local/elastic-internal-init-config.ok

      if [[ -f "${init_config_initialized_flag}" ]]; then
          echo "Kibana configuration already initialized."
        exit 0
      fi

      echo "Setup Kibana configuration"

      ln -sf /mnt/elastic-internal/kibana-config/* /mnt/elastic-internal/kibana-config-local/

      touch "${init_config_initialized_flag}"
      echo "Kibana configuration successfully prepared."

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 08 Aug 2021 17:14:35 +0200
      Finished:     Sun, 08 Aug 2021 17:14:35 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:      (v1:status.podIP)
      POD_NAME:   kibana-kb-765596cdbf-8kzql (v1:metadata.name)
      NODE_NAME:   (v1:spec.nodeName)
      NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /mnt/elastic-internal/http-certs from elastic-internal-http-certificates (ro)
      /mnt/elastic-internal/kibana-config from elastic-internal-kibana-config (ro)
      /mnt/elastic-internal/kibana-config-local from elastic-internal-kibana-config-local (rw)
      /usr/share/kibana/config/elasticsearch-certs from elasticsearch-certs (ro)
      /usr/share/kibana/data from kibana-data (rw)
Containers:
  kibana:
    Container ID:   containerd://81f228530c7b72c5e55ce99a23cca727b117e96eb1aee22783ab07faba91fd6c
    Image:          docker.elastic.co/kibana/kibana:7.12.0
    Image ID:       docker.elastic.co/kibana/kibana@sha256:f002ce2456e37a45507e542a4791d87a0bedf7d7ec468d6a7aef85cd233eecc9
    Port:           5601/TCP
--
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  7m47s                   default-scheduler  Successfully assigned default/kibana-kb-765596cdbf-8kzql to robert-virtualbox
  Normal   Pulling    7m46s                   kubelet            Pulling image "docker.elastic.co/kibana/kibana:7.12.0"
  Normal   Pulled     6m20s                   kubelet            Successfully pulled image "docker.elastic.co/kibana/kibana:7.12.0" in 1m25.834973579s
  Normal   Created    6m18s                   kubelet            Created container elastic-internal-init-config
  Normal   Started    6m17s                   kubelet            Started container elastic-internal-init-config
  Normal   Pulled     6m16s                   kubelet            Container image "docker.elastic.co/kibana/kibana:7.12.0" already present on machine
  Normal   Created    6m16s                   kubelet            Created container kibana
  Normal   Started    6m16s                   kubelet            Started container kibana
  Warning  Unhealthy  2m37s (x21 over 5m56s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503

Name:         filebeat-beat-filebeat-7c96fc78d6-ndvm8
Namespace:    default
Priority:     0
Node:         robert-virtualbox/10.0.2.15
Start Time:   Sun, 08 Aug 2021 17:13:01 +0200
Labels:       app.kubernetes.io/name=eck-logstash
              app.kubernets.io/component=filebeat
              beat.k8s.elastic.co/config-checksum=28ad2b644522f97677dea0752fcf4fd4f731ffaf4a05948dc503ab5b
              beat.k8s.elastic.co/name=filebeat
              beat.k8s.elastic.co/version=7.12.0
              common.k8s.elastic.co/type=beat
              pod-template-hash=7c96fc78d6
Annotations:  cni.projectcalico.org/podIP: 10.1.100.70/32
              cni.projectcalico.org/podIPs: 10.1.100.70/32
Status:       Pending
IP:           10.1.100.70
IPs:
  IP:           10.1.100.70
Controlled By:  ReplicaSet/filebeat-beat-filebeat-7c96fc78d6
Init Containers:
  download-tutorial:
    Container ID:  containerd://b8acd09f101af570ff31f9257be5c52991020febfb238b810263c10dfde13121
    Image:         curlimages/curl
    Image ID:      docker.io/curlimages/curl@sha256:a37d88a5b626208146c6d58f75b74a93c35dfc00b6b4676c4e9d2c166ea41954
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      curl -L https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz | gunzip -c > /data/logstash-tutorial.log
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 08 Aug 2021 17:20:28 +0200
      Finished:     Sun, 08 Aug 2021 17:20:33 +0200
    Ready:          False
    Restart Count:  5
    Environment:
      POD_IP:      (v1:status.podIP)
      POD_NAME:   filebeat-beat-filebeat-7c96fc78d6-ndvm8 (v1:metadata.name)
      NODE_NAME:   (v1:spec.nodeName)
      NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /data from data (rw)
      /etc/beat.yml from config (ro,path="beat.yml")
      /usr/share/filebeat/data from beat-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m7d8j (ro)
Containers:
  filebeat:
    Container ID:  
    Image:         docker.elastic.co/beats/filebeat:7.12.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      -e
      -c
      /etc/beat.yml
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  200Mi
    Requests:
      cpu:        100m
      memory:     200Mi
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /etc/beat.yml from config (ro,path="beat.yml")
      /usr/share/filebeat/data from beat-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m7d8j (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  beat-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
--
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m51s                  default-scheduler  Successfully assigned default/filebeat-beat-filebeat-7c96fc78d6-ndvm8 to robert-virtualbox
  Normal   Pulled     7m45s                  kubelet            Successfully pulled image "curlimages/curl" in 5.563023393s
  Normal   Pulled     3m56s                  kubelet            Successfully pulled image "curlimages/curl" in 3m41.910743824s
  Normal   Pulled     3m33s                  kubelet            Successfully pulled image "curlimages/curl" in 1.261616519s
  Normal   Pulled     3m                     kubelet            Successfully pulled image "curlimages/curl" in 1.124860152s
  Normal   Started    2m59s (x4 over 7m44s)  kubelet            Started container download-tutorial
  Warning  BackOff    2m15s (x7 over 3m49s)  kubelet            Back-off restarting failed container
  Normal   Pulling    2m3s (x5 over 7m50s)   kubelet            Pulling image "curlimages/curl"
  Normal   Pulled     2m2s                   kubelet            Successfully pulled image "curlimages/curl" in 1.250985334s
  Normal   Created    2m2s (x5 over 7m44s)   kubelet            Created container download-tutorial
thbkrkr commented 3 years ago

So, we have 2 pods not ready:

filebeat-beat-filebeat-7c96fc78d6-ndvm8   0/1     Init:CrashLoopBackOff   4          6m32s
kibana-kb-765596cdbf-8kzql                0/1     Running                 1          6m28s

For Filebeat, the init container download-tutorial, which just executes curl ... | gunzip ..., fails. For Kibana, the readiness probe fails with an HTTP 503 error.

At this stage, we need the logs of the containers to debug properly:

> microk8s kubectl logs filebeat-beat-filebeat-7c96fc78d6-ndvm8 -c download-tutorial
> microk8s kubectl logs kibana-kb-765596cdbf-8kzql

I wonder if this is related to a network configuration issue in your VirtualBox VM.
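
A quick way to check whether DNS resolution works from inside the cluster at all — a minimal sketch, assuming the busybox image can be pulled — is to resolve an external name from a throwaway pod:

> microk8s kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup download.elastic.co

If that fails while the same lookup works on the VM itself, the problem is more likely in the cluster DNS or CNI configuration than in the recipe.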

RobertDiebels commented 3 years ago

@thbkrkr Sorry for the delay. I was otherwise occupied.

The command:

> microk8s kubectl logs filebeat-beat-filebeat-7c96fc78d6-ndvm8 -c download-tutorial

results in:

I0816 14:14:23.714317   27439 request.go:668] Waited for 1.173367341s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/maps.k8s.elastic.co/v1alpha1?timeout=32s
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0curl: (6) Could not resolve host: download.elastic.co
gunzip: invalid magic

And the command:

> microk8s kubectl logs kibana-kb-765596cdbf-8kzql

results in:

I0816 14:20:32.737388   13240 request.go:668] Waited for 1.146340896s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/maps.k8s.elastic.co/v1alpha1?timeout=32s
{"type":"log","@timestamp":"2021-08-16T12:18:40+00:00","tags":["info","plugins-service"],"pid":7,"message":"Plugin \"osquery\" is disabled."}
{"type":"log","@timestamp":"2021-08-16T12:18:40+00:00","tags":["warning","config","deprecation"],"pid":7,"message":"\"xpack.monitoring.ui.container.elasticsearch.enabled\" is deprecated and has been replaced by \"monitoring.ui.container.elasticsearch.enabled\""}
{"type":"log","@timestamp":"2021-08-16T12:18:40+00:00","tags":["warning","config","deprecation"],"pid":7,"message":"\"xpack.monitoring\" is deprecated and has been replaced by \"monitoring\". However both key are present, ignoring \"xpack.monitoring\""}
{"type":"log","@timestamp":"2021-08-16T12:18:40+00:00","tags":["warning","config","deprecation"],"pid":7,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2021-08-16T12:18:41+00:00","tags":["info","plugins-system"],"pid":7,"message":"Setting up [100] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsLegacy,kibanaLegacy,translations,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,canvas,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeMarkdown,tileMap,regionMap,visTypeXy,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,ml,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-08-16T12:18:41+00:00","tags":["info","plugins","taskManager"],"pid":7,"message":"TaskManager is identified by the Kibana UUID: 55b253ce-534e-4237-ac4b-d8836d50b53b"}
{"type":"log","@timestamp":"2021-08-16T12:18:41+00:00","tags":["warning","plugins","reporting","config"],"pid":7,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","@timestamp":"2021-08-16T12:18:41+00:00","tags":["info","plugins","monitoring","monitoring"],"pid":7,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] INIT -> OUTDATED_DOCUMENTS_SEARCH"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] INIT -> OUTDATED_DOCUMENTS_SEARCH"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH -> UPDATE_TARGET_MAPPINGS"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH -> UPDATE_TARGET_MAPPINGS"}
{"type":"log","@timestamp":"2021-08-16T12:18:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:42+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Action failed with 'no_shard_available_action_exception'. Retrying attempt 1 out of 10 in 2 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:42+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:43+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana] Action failed with 'no_shard_available_action_exception'. Retrying attempt 1 out of 10 in 2 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:43+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:45+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Action failed with 'no_shard_available_action_exception'. Retrying attempt 2 out of 10 in 4 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:45+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:45+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana] Action failed with 'no_shard_available_action_exception'. Retrying attempt 2 out of 10 in 4 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:45+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:49+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Action failed with 'no_shard_available_action_exception'. Retrying attempt 3 out of 10 in 8 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:49+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:49+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana] Action failed with 'no_shard_available_action_exception'. Retrying attempt 3 out of 10 in 8 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:49+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:57+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Action failed with 'no_shard_available_action_exception'. Retrying attempt 4 out of 10 in 16 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:57+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:19:57+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana] Action failed with 'no_shard_available_action_exception'. Retrying attempt 4 out of 10 in 16 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:19:57+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:20:13+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Action failed with 'no_shard_available_action_exception'. Retrying attempt 5 out of 10 in 32 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:20:13+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-08-16T12:20:13+00:00","tags":["error","savedobjects-service"],"pid":7,"message":"[.kibana] Action failed with 'no_shard_available_action_exception'. Retrying attempt 5 out of 10 in 32 seconds."}
{"type":"log","@timestamp":"2021-08-16T12:20:13+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}

I tried to ping download.elastic.co on both the host machine and the VM. Neither received any packets, so this might be related to my client's network. I will inquire with them.

As for the Kibana logs, I would love to hear your thoughts.

thbkrkr commented 3 years ago

As for the Kibana logs, I would love to hear your thoughts.

no_shard_available_action_exception on the .kibana and .kibana_task_manager indices. Often this happens because Elasticsearch is not ready, but yours looks OK, right? What does the output of _cat/health and _cat/indices?v from Elasticsearch show?
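
In case it helps, here is one way to run those queries against the cluster deployed by the recipe. This is a sketch that assumes the default names ECK generates for a cluster named elasticsearch in the default namespace (the elasticsearch-es-http service is the one visible in the events above, and the elastic user's password is stored in the elasticsearch-es-elastic-user secret):

> PASSWORD=$(microk8s kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
> microk8s kubectl port-forward service/elasticsearch-es-http 9200 &
> curl -sk -u "elastic:$PASSWORD" "https://localhost:9200/_cat/health"
> curl -sk -u "elastic:$PASSWORD" "https://localhost:9200/_cat/indices?v"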

RobertDiebels commented 2 years ago

As for the Kibana logs, I would love to hear your thoughts.

no_shard_available_action_exception on the .kibana and .kibana_task_manager indices. Often this happens because Elasticsearch is not ready, but yours looks OK, right? What does the output of _cat/health and _cat/indices?v from Elasticsearch show?

Again, sorry for the late reply. Since my last reply I've changed employers and no longer have access to the environment that produced this issue. As I am unable to reproduce it on my current machine, I'm going to close this issue, since I have no way to provide the logging you need.