ShubhamTatvamasi / magma-galaxy

https://galaxy.ansible.com/shubhamtatvamasi/magma
BSD 3-Clause "New" or "Revised" License

CrashLoopBackOff for nms-nginx-proxy #13

rickey318 closed this issue 5 months ago

rickey318 commented 1 year ago

Hello,

Followed the readme along with a video on YouTube. After installing and running the "kubectl get pods -n orc8r" command, all services are running except one, which is giving me the following message:

nms-nginx-proxy-7454448447-9kxzr 0/1 CrashLoopBackOff 14 (61s ago) 31m
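
(The underlying error can be pulled from the crashing container before trying any fix; a minimal diagnostic sketch, assuming the pod name from the output above and the orc8r namespace:

# show the output of the last crashed run, then the restart events
kubectl -n orc8r logs nms-nginx-proxy-7454448447-9kxzr --previous
kubectl -n orc8r describe pod nms-nginx-proxy-7454448447-9kxzr
)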

asantos-vk commented 1 year ago

You can try redefining the image version for the nginx proxy:

kubectl set image deploy "deploy name" "containername"=nginx:1.25.0
kubectl rollout status deploy "deploy name"
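
(For reference, the placeholder deploy and container names above can be looked up first; a minimal sketch, assuming the orc8r namespace and the nms-nginx-proxy deployment seen earlier in this thread:

# list deployments, then print the container names of the nginx proxy deployment
kubectl -n orc8r get deploy
kubectl -n orc8r get deploy nms-nginx-proxy -o jsonpath='{.spec.template.spec.containers[*].name}'
)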

MAlexVR commented 1 year ago

I'm having the same error:

magma@magma-orc8r:~/magma-galaxy$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 25m
fluentd-68ff5cf69-zt7l8 1/1 Running 0 25m
haproxy-6b4d4d4f5d-rxpsb 1/1 Running 0 25m
nms-magmalte-7dbcff786c-c756x 1/1 Running 0 25m
nms-nginx-proxy-5bbddbdff-n8zzk 0/1 CrashLoopBackOff 9 (2m18s ago) 25m
orc8r-accessd-587d9688cc-sj5lt 1/1 Running 0 25m
orc8r-alertmanager-846669c9df-9tt5x 1/1 Running 0 25m
orc8r-alertmanager-configurer-699bbff47d-sffds 1/1 Running 0 25m
orc8r-analytics-6c89dc5879-xztnt 1/1 Running 0 25m
orc8r-base-acct-647bf7d4bc-gmbr7 1/1 Running 0 25m
orc8r-bootstrapper-68dbdb4684-5nwcg 1/1 Running 0 25m
orc8r-certifier-584df87b4b-68m2b 1/1 Running 0 25m
orc8r-configurator-84477697d6-4wv5b 1/1 Running 0 25m
orc8r-ctraced-6fb69659b9-rxxnb 1/1 Running 0 25m
orc8r-cwf-54c5f65b7d-4xwdc 1/1 Running 0 25m
orc8r-device-84ddcf4f86-qzrdv 1/1 Running 0 25m
orc8r-directoryd-8dd7c6fc-6l2hg 1/1 Running 0 25m
orc8r-dispatcher-77588fbdd9-x79ss 1/1 Running 0 25m
orc8r-eventd-54f8c78b6c-x9fmr 1/1 Running 0 25m
orc8r-feg-6f767b4997-q65gs 1/1 Running 0 25m
orc8r-feg-relay-7469dd6668-xj5mn 1/1 Running 0 25m
orc8r-ha-6b89d68c4c-g55ld 1/1 Running 0 25m
orc8r-health-65f5cbbddf-7c7cl 1/1 Running 0 25m
orc8r-lte-946667767-s2djb 1/1 Running 0 25m
orc8r-metricsd-644fbf574-kxrsw 1/1 Running 0 25m
orc8r-nginx-669d7c9d95-9ch29 1/1 Running 0 25m
orc8r-nprobe-66f6f657f-xn6hd 1/1 Running 0 25m
orc8r-obsidian-654647b9fd-9hfzb 1/1 Running 0 25m
orc8r-orc8r-worker-6875db6496-wndrm 1/1 Running 0 25m
orc8r-orchestrator-5f8687f8d5-nndxn 1/1 Running 0 25m
orc8r-policydb-5d7685b8cd-r29qw 1/1 Running 0 25m
orc8r-prometheus-cache-b564698d6-z45l2 1/1 Running 0 25m
orc8r-prometheus-configurer-76f58ffc9-l8qj6 1/1 Running 0 25m
orc8r-prometheus-dd9854b66-7s4px 1/1 Running 0 25m
orc8r-service-registry-6b9969d55d-x7n7b 1/1 Running 0 25m
orc8r-smsd-56948b9588-jrng5 1/1 Running 0 25m
orc8r-state-94bd7d7df-tf2dx 1/1 Running 0 25m
orc8r-streamer-85ddddfd9b-55ppp 1/1 Running 0 25m
orc8r-subscriberdb-765b6b6c4d-kpmqg 1/1 Running 0 25m
orc8r-subscriberdb-cache-f658c5dbd-qhpwz 1/1 Running 0 25m
orc8r-tenants-55d46c55c4-9jpvj 1/1 Running 0 25m
orc8r-user-grafana-7df979c676-67ctt 1/1 Running 0 25m
postgresql-0 1/1 Running 0 26m

albukirky1 commented 1 year ago

nginx-how-to-fix-ssl-directive-is-deprecated-use-listen-ssl

sudo su - magma
kubectl -n orc8r edit configmap nginx-proxy-etc

Then delete "ssl on" and replace "listen 443" with "listen 443 ssl".

Restart the container:

kubectl -n orc8r delete pod {nms-nginx-proxy-pod-name}
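
(For context, the linked article and this fix point at newer nginx releases rejecting the standalone "ssl" directive; the configmap edit amounts to something like the following. This is an illustrative snippet only, not the exact contents of nginx-proxy-etc:

# before: fails on newer nginx releases, where the standalone "ssl" directive was removed
server {
    listen 443;
    ssl on;
    # remaining proxy configuration unchanged
}

# after: the supported form
server {
    listen 443 ssl;
    # remaining proxy configuration unchanged
}
)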

MAlexVR commented 1 year ago

I solved it with:

kubectl set image deploy "nms-nginx-proxy" "nms-nginx"=nginx:1.25.0
kubectl rollout status deploy "nms-nginx-proxy"
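
(Note: if orc8r is not the current default namespace in your kubeconfig, the same commands need the namespace flag; a sketch, assuming the orc8r namespace used elsewhere in this thread:

kubectl -n orc8r set image deploy "nms-nginx-proxy" "nms-nginx"=nginx:1.25.0
kubectl -n orc8r rollout status deploy "nms-nginx-proxy"
# confirm the pod leaves CrashLoopBackOff
kubectl -n orc8r get pods | grep nms-nginx-proxy
)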

rickey318 commented 1 year ago

Thanks @MAlexVR,

I have everything showing as running now, but I'm still trying to figure out how to access the site. Any help on that part?

AtulSmahale commented 2 months ago

Hello,

Please help me solve the following issue.

kubectl describe pods elasticsearch-master-0

Name:             elasticsearch-master-0
Namespace:        orc8r
Priority:         0
Service Account:  default
Node:             orc8r-control-plane/172.18.0.2
Start Time:       Tue, 13 Aug 2024 10:55:39 +0200
Labels:           app=elasticsearch-master
                  chart=elasticsearch
                  controller-revision-hash=elasticsearch-master-9dfcb7fd9
                  release=elasticsearch
                  statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations:      <none>
Status:           Running
IP:               10.244.0.52
IPs:
  IP:  10.244.0.52
Controlled By:  StatefulSet/elasticsearch-master
Init Containers:
  configure-sysctl:
    Container ID:  containerd://b09667574bdc3d954edc0136e5d711ee25bb06bc18a2f1ee5bac7db2450fe736
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.17.3
    Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:8734ac48c10ff836a6d0c3d600297b453cb389e85fd26bb4ccb3d5a5bde7e554
    Port:          <none>
    Host Port:     <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 13 Aug 2024 10:55:40 +0200
      Finished:     Tue, 13 Aug 2024 10:55:40 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9kpcw (ro)
Containers:
  elasticsearch:
    Container ID:   containerd://d6cd7046c0e9695695ab587b5f4efe20a370f0cadf139e167d47a8d63300a8de
    Image:          docker.elastic.co/elasticsearch/elasticsearch:7.17.3
    Image ID:       docker.elastic.co/elasticsearch/elasticsearch@sha256:8734ac48c10ff836a6d0c3d600297b453cb389e85fd26bb4ccb3d5a5bde7e554
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 13 Aug 2024 10:55:41 +0200
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  2Gi
    Readiness:  exec [bash -c set -e
      # If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
      # Once it has started only check that the node itself is responding
      START_FILE=/tmp/.es_start_file

      # Disable nss cache to avoid filling dentry cache when calling curl
      # This is required with Elasticsearch Docker using nss < 3.52
      export NSS_SDB_USE_CACHE=no

      http () {
        local path="${1}"
        local args="${2}"
        set -- -XGET -s

        if [ "$args" != "" ]; then
          set -- "$@" $args
        fi

        if [ -n "${ELASTIC_PASSWORD}" ]; then
          set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
        fi

        curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
      }

      if [ -f "${START_FILE}" ]; then
        echo 'Elasticsearch is already running, lets check the node is healthy'
        HTTP_CODE=$(http "/" "-w %{http_code}")
        RC=$?
        if [[ ${RC} -ne 0 ]]; then
          echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
          exit ${RC}
        fi
        # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
        if [[ ${HTTP_CODE} == "200" ]]; then
          exit 0
        elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
          exit 0
        else
          echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
          exit 1
        fi
      else
        echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
        if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
          touch ${START_FILE}
          exit 0
        else
          echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
          exit 1
        fi
      fi
    ] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      node.name:                             elasticsearch-master-0 (v1:metadata.name)
      cluster.initial_master_nodes:          elasticsearch-master-0,
      discovery.seed_hosts:                  elasticsearch-master-headless
      cluster.name:                          elasticsearch
      network.host:                          0.0.0.0
      cluster.deprecation_indexing.enabled:  false
      node.data:                             false
      node.ingest:                           false
      node.master:                           true
      node.ml:                               false
      node.remote_cluster_client:            false
      discovery.type:                        single-node
      cluster.initial_master_nodes:
    Mounts:
      /usr/share/elasticsearch/data from elasticsearch-master (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9kpcw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  elasticsearch-master:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-master-elasticsearch-master-0
    ReadOnly:   false
  kube-api-access-9kpcw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  50s               default-scheduler  Successfully assigned orc8r/elasticsearch-master-0 to orc8r-control-plane
  Normal   Pulled     50s               kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.17.3" already present on machine
  Normal   Created    50s               kubelet            Created container configure-sysctl
  Normal   Started    50s               kubelet            Started container configure-sysctl
  Normal   Pulled     50s               kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.17.3" already present on machine
  Normal   Created    50s               kubelet            Created container elasticsearch
  Normal   Started    49s               kubelet            Started container elasticsearch
  Warning  Unhealthy  9s (x3 over 30s)  kubelet            Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
                                                           Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )

elasticsearch-master-0 0/1 Running 0 55s
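
(One way to see why the readiness probe keeps failing is to run the same cluster-health check the probe script uses directly against the pod; a diagnostic sketch, assuming curl is available inside the elasticsearch container of the stock 7.17.3 image and the orc8r namespace:

# query the health endpoint the readiness script polls
kubectl -n orc8r exec elasticsearch-master-0 -- curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
# check the Elasticsearch logs for the reason the cluster never reaches green
kubectl -n orc8r logs elasticsearch-master-0 --tail=100
)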