opensearch-project / helm-charts

:wheel_of_dharma: A community repository for Helm charts of the OpenSearch Project.
https://opensearch.org/docs/latest/opensearch/install/helm/
Apache License 2.0

OpenSearch pods are going into CrashLoopBackOff and not showing any logs #527

Open ankitdahiya07 opened 6 months ago

ankitdahiya07 commented 6 months ago

Description: OpenSearch pods are going into CrashLoopBackOff and are not showing any logs.

k -n oracle-monitoring describe po opensearch-cluster-master-0

Name:             opensearch-cluster-master-0
Namespace:        oracle-monitoring
Priority:         0
Node:
Start Time:       Mon, 18 Mar 2024 07:08:45 +0000
Labels:           app.kubernetes.io/component=opensearch-cluster-master
                  app.kubernetes.io/instance=opensearch
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=opensearch-helm-chart
                  app.kubernetes.io/version=2.4.0
                  controller-revision-hash=opensearch-cluster-master-7cdfc59568
                  helm.sh/chart=opensearch-helm-chart-2.8.0
                  statefulset.kubernetes.io/pod-name=opensearch-cluster-master-0
Annotations:      configchecksum: e276d983c059baea23f2e003f99113d37fb87b49e9cd1c6c66b05e63d43495b
Status:           Running
IP:
IPs:
  IP:
Controlled By:    StatefulSet/opensearch-cluster-master
Init Containers:
  fsgroup-volume:
    Container ID:  cri-o://5b4c96c31ce1598af4705618ebd83b2bee6354908094ee7c970fc6b23eb96e6b
    Image:         tabxcnoper01.snlhrprshared1.gbucdsint02lhr.oraclevcn.com/patchset5/opensearch-busybox:2.4.0
    Image ID:      occ-harbor.oraclecorp.com/samuthul/opensearch/busybox@sha256:51de9138b0cc394c813df84f334d638499333cac22edd05d0300b2c9a2dc80dd
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
    Args:
      chown -R 1000:1000 /usr/share/opensearch/data
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 18 Mar 2024 07:08:46 +0000
      Finished:     Mon, 18 Mar 2024 07:08:46 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/opensearch/data from opensearch-cluster-master (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rxrx (ro)
  sysctl:
    Container ID:  cri-o://59b1a917cba7f4590e33c4c4b81afa38627492d55f412ee01d1f045735a5a3e0
    Image:         tabxcnoper01.snlhrprshared1.gbucdsint02lhr.oraclevcn.com/patchset5/opensearch-busybox:2.4.0
    Image ID:      occ-harbor.oraclecorp.com/samuthul/opensearch/busybox@sha256:51de9138b0cc394c813df84f334d638499333cac22edd05d0300b2c9a2dc80dd
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      set -xe
      DESIRED="262144"
      CURRENT=$(sysctl -n vm.max_map_count)
      if [ "$DESIRED" -gt "$CURRENT" ]; then
          sysctl -w vm.max_map_count=$DESIRED
      fi
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 18 Mar 2024 07:08:47 +0000
      Finished:     Mon, 18 Mar 2024 07:08:47 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rxrx (ro)
Containers:
  opensearch-helm-chart:
    Container ID:   cri-o://debea1e04684b0d630b969502f00a660e82ed21c8222997f5daf409c8ebbac0c
    Image:          tabxcnoper01.snlhrprshared1.gbucdsint02lhr.oraclevcn.com/patchset5/opensearch-busybox:2.4.0
    Image ID:       occ-harbor.oraclecorp.com/samuthul/opensearch/busybox@sha256:51de9138b0cc394c813df84f334d638499333cac22edd05d0300b2c9a2dc80dd
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 18 Mar 2024 07:35:08 +0000
      Finished:     Mon, 18 Mar 2024 07:35:08 +0000
    Ready:          False
    Restart Count:  10
    Readiness:      tcp-socket :9200 delay=0s timeout=3s period=5s #success=1 #failure=3
    Startup:        tcp-socket :9200 delay=15s timeout=3s period=10s #success=1 #failure=30
    Environment:
      node.name:                     opensearch-cluster-master-0 (v1:metadata.name)
      cluster.initial_master_nodes:  opensearch-cluster-master-0,opensearch-cluster-master-1,opensearch-cluster-master-2,
      discovery.seed_hosts:          opensearch-cluster-master-headless
      cluster.name:                  opensearch-cluster
      network.host:                  0.0.0.0
      OPENSEARCH_JAVA_OPTS:          -Xmx512M -Xms512M
      node.roles:                    master,ingest,data,remote_cluster_client,
    Mounts:
      /usr/share/opensearch/config/opensearch.yml from config (rw,path="opensearch.yml")
      /usr/share/opensearch/data from opensearch-cluster-master (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rxrx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  opensearch-cluster-master:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  opensearch-cluster-master-opensearch-cluster-master-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      opensearch-cluster-master-config
    Optional:  false
  kube-api-access-5rxrx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  Warning  FailedScheduling  31m                  default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
  Normal   Scheduled         31m                  default-scheduler  Successfully assigned oracle-monitoring/opensearch-cluster-master-0 to lhr-410
  Normal   Pulled            31m                  kubelet            Container image "tabxcnoper01.snlhrprshared1.gbucdsint02lhr.oraclevcn.com/patchset5/opensearch-busybox:2.4.0" already present on machine
  Normal   Created           31m                  kubelet            Created container fsgroup-volume
  Normal   Started           31m                  kubelet            Started container fsgroup-volume
  Normal   Pulled            31m                  kubelet            Container image "tabxcnoper01.snlhrprshared1.gbucdsint02lhr.oraclevcn.com/patchset5/opensearch-busybox:2.4.0" already present on machine
  Normal   Created           31m                  kubelet            Created container sysctl
  Normal   Started           31m                  kubelet            Started container sysctl
  Normal   Pulled            30m (x3 over 31m)    kubelet            Container image "tabxcnoper01.snlhrprshared1.gbucdsint02lhr.oraclevcn.com/patchset5/opensearch-busybox:2.4.0" already present on machine
  Normal   Created           30m (x3 over 31m)    kubelet            Created container opensearch-helm-chart
  Normal   Started           30m (x3 over 31m)    kubelet            Started container opensearch-helm-chart
  Warning  BackOff           75s (x163 over 31m)  kubelet            Back-off restarting failed container
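The FailedScheduling warning says the pod initially had unbound PersistentVolumeClaims before it was eventually scheduled, and the BackOff events show the main container restarting repeatedly. A minimal sketch of how one might confirm the claim is actually bound and pull the full event stream, assuming the namespace and claim name shown in the describe output above (plain kubectl commands, not part of the chart):

    # Confirm the data PVC is Bound and which StorageClass backs it
    kubectl -n oracle-monitoring get pvc
    kubectl -n oracle-monitoring describe pvc opensearch-cluster-master-opensearch-cluster-master-0

    # Full, time-sorted event stream for the namespace, in case the pod describe truncates anything
    kubectl -n oracle-monitoring get events --sort-by=.lastTimestamp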

Chart Name: opensearch-helm-chart 2.8.0 (appVersion: 2.4.0)

Screenshots: If applicable, add screenshots to help explain your problem.

Host/Environment (please complete the following information):

Additional context: Pods are going into a CrashLoopBackOff state, and the opensearch-helm-chart container is not showing any logs.
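Since the container exits almost immediately with exit code 0 and the current run prints nothing, a minimal sketch of how one would usually capture output from the previous, already-terminated run (assuming the pod, container, and namespace names from the describe output above):

    # Logs of the last terminated run of the main container (the current run may not have started yet)
    kubectl -n oracle-monitoring logs opensearch-cluster-master-0 -c opensearch-helm-chart --previous

    # Init container output, in case fsgroup-volume or sysctl printed anything useful
    kubectl -n oracle-monitoring logs opensearch-cluster-master-0 -c fsgroup-volume
    kubectl -n oracle-monitoring logs opensearch-cluster-master-0 -c sysctl

    # Exit code and termination reason recorded for the last run of the main container
    kubectl -n oracle-monitoring get pod opensearch-cluster-master-0 \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

If even --previous returns nothing, inspecting the container runtime on the node (cri-o here, e.g. journalctl -u crio or crictl logs) is a common next step.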

rishabh6788 commented 6 months ago

@prudhvigodithi please take a look.