Closed: rickey318 closed this issue 5 months ago
You can try redefining the image version for the nginx proxy:

    kubectl set image deploy "deploy name" "containername"=nginx:1.25.0
    kubectl rollout status deploy "deploy name"
I'm having the same error:
    magma@magma-orc8r:~/magma-galaxy$ kubectl get pods
    NAME                                            READY   STATUS             RESTARTS       AGE
    elasticsearch-master-0                          1/1     Running            0              25m
    fluentd-68ff5cf69-zt7l8                         1/1     Running            0              25m
    haproxy-6b4d4d4f5d-rxpsb                        1/1     Running            0              25m
    nms-magmalte-7dbcff786c-c756x                   1/1     Running            0              25m
    nms-nginx-proxy-5bbddbdff-n8zzk                 0/1     CrashLoopBackOff   9 (2m18s ago)  25m
    orc8r-accessd-587d9688cc-sj5lt                  1/1     Running            0              25m
    orc8r-alertmanager-846669c9df-9tt5x             1/1     Running            0              25m
    orc8r-alertmanager-configurer-699bbff47d-sffds  1/1     Running            0              25m
    orc8r-analytics-6c89dc5879-xztnt                1/1     Running            0              25m
    orc8r-base-acct-647bf7d4bc-gmbr7                1/1     Running            0              25m
    orc8r-bootstrapper-68dbdb4684-5nwcg             1/1     Running            0              25m
    orc8r-certifier-584df87b4b-68m2b                1/1     Running            0              25m
    orc8r-configurator-84477697d6-4wv5b             1/1     Running            0              25m
    orc8r-ctraced-6fb69659b9-rxxnb                  1/1     Running            0              25m
    orc8r-cwf-54c5f65b7d-4xwdc                      1/1     Running            0              25m
    orc8r-device-84ddcf4f86-qzrdv                   1/1     Running            0              25m
    orc8r-directoryd-8dd7c6fc-6l2hg                 1/1     Running            0              25m
    orc8r-dispatcher-77588fbdd9-x79ss               1/1     Running            0              25m
    orc8r-eventd-54f8c78b6c-x9fmr                   1/1     Running            0              25m
    orc8r-feg-6f767b4997-q65gs                      1/1     Running            0              25m
    orc8r-feg-relay-7469dd6668-xj5mn                1/1     Running            0              25m
    orc8r-ha-6b89d68c4c-g55ld                       1/1     Running            0              25m
    orc8r-health-65f5cbbddf-7c7cl                   1/1     Running            0              25m
    orc8r-lte-946667767-s2djb                       1/1     Running            0              25m
    orc8r-metricsd-644fbf574-kxrsw                  1/1     Running            0              25m
    orc8r-nginx-669d7c9d95-9ch29                    1/1     Running            0              25m
    orc8r-nprobe-66f6f657f-xn6hd                    1/1     Running            0              25m
    orc8r-obsidian-654647b9fd-9hfzb                 1/1     Running            0              25m
    orc8r-orc8r-worker-6875db6496-wndrm             1/1     Running            0              25m
    orc8r-orchestrator-5f8687f8d5-nndxn             1/1     Running            0              25m
    orc8r-policydb-5d7685b8cd-r29qw                 1/1     Running            0              25m
    orc8r-prometheus-cache-b564698d6-z45l2          1/1     Running            0              25m
    orc8r-prometheus-configurer-76f58ffc9-l8qj6     1/1     Running            0              25m
    orc8r-prometheus-dd9854b66-7s4px                1/1     Running            0              25m
    orc8r-service-registry-6b9969d55d-x7n7b         1/1     Running            0              25m
    orc8r-smsd-56948b9588-jrng5                     1/1     Running            0              25m
    orc8r-state-94bd7d7df-tf2dx                     1/1     Running            0              25m
    orc8r-streamer-85ddddfd9b-55ppp                 1/1     Running            0              25m
    orc8r-subscriberdb-765b6b6c4d-kpmqg             1/1     Running            0              25m
    orc8r-subscriberdb-cache-f658c5dbd-qhpwz        1/1     Running            0              25m
    orc8r-tenants-55d46c55c4-9jpvj                  1/1     Running            0              25m
    orc8r-user-grafana-7df979c676-67ctt             1/1     Running            0              25m
    postgresql-0                                    1/1     Running            0              26m
See: nginx-how-to-fix-ssl-directive-is-deprecated-use-listen-ssl
sudo su - magma
kubectl -n orc8r edit configmap nginx-proxy-etc
Then delete the `ssl on;` directive and replace `listen 443;` with `listen 443 ssl;`.
Restart the container:
kubectl -n orc8r delete pod {nms-nginx-proxy-pod-name}
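For reference, the two manual edits above can also be expressed as a pair of `sed` expressions. A minimal sketch, run here against a made-up sample fragment (the file path and server block contents are illustrative, not the actual contents of the `nginx-proxy-etc` configmap):

```shell
# Sample fragment standing in for the server block inside the
# nginx-proxy-etc configmap (contents are illustrative only).
cat > /tmp/nginx-frag.conf <<'EOF'
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cert.pem;
}
EOF

# The same change the manual edit makes: drop the deprecated
# "ssl on;" directive and move ssl onto the listen directive.
sed -i -e '/ssl on;/d' -e 's/listen 443;/listen 443 ssl;/' /tmp/nginx-frag.conf

cat /tmp/nginx-frag.conf
```

After making the equivalent change via `kubectl edit`, deleting the pod (as above) lets the restarted container pick up the corrected config.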
I solved it with:

    kubectl set image deploy "nms-nginx-proxy" "nms-nginx"=nginx:1.25.0
    kubectl rollout status deploy "nms-nginx-proxy"
Thanks @MAlexVR,
Everything is showing Running now. But now I'm trying to figure out how to access the site. Any help on that part?
Hello,
please help me solve the following issue.
kubectl describe pods elasticsearch-master-0
Name: elasticsearch-master-0
Namespace: orc8r
Priority: 0
Service Account: default
Node: orc8r-control-plane/172.18.0.2
Start Time: Tue, 13 Aug 2024 10:55:39 +0200
Labels: app=elasticsearch-master
chart=elasticsearch
controller-revision-hash=elasticsearch-master-9dfcb7fd9
release=elasticsearch
statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations:
Readiness:  exec [bash -c
  START_FILE=/tmp/.es_start_file
  export NSS_SDB_USE_CACHE=no

  http () {
    local path="${1}"
    local args="${2}"
    set -- -XGET -s
    if [ "$args" != "" ]; then
      set -- "$@" $args
    fi
    if [ -n "${ELASTIC_PASSWORD}" ]; then
      set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
    fi
    curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
  }

  if [ -f "${START_FILE}" ]; then
    echo 'Elasticsearch is already running, lets check the node is healthy'
    HTTP_CODE=$(http "/" "-w %{http_code}")
    RC=$?
    if [[ ${RC} -ne 0 ]]; then
      echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
      exit ${RC}
    fi
    if [[ ${HTTP_CODE} == "200" ]]; then
      exit 0
    elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
      exit 0
    else
      echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
      exit 1
    fi
  else
    echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
    if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
      touch ${START_FILE}
      exit 0
    else
      echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
      exit 1
    fi
  fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
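Untangled, the readiness-probe script above boils down to two cases: once the marker file exists, any HTTP 200 from the node passes; before that, the cluster itself must report green. Here is a minimal offline sketch of that flow, with `curl` stubbed out so it runs without a cluster (the real probe hits `http://127.0.0.1:9200` and reads the status via `-w %{http_code}`):

```shell
START_FILE=/tmp/.es_start_file
rm -f "$START_FILE"   # simulate the very first probe run

# Stub standing in for curl so the sketch runs without Elasticsearch;
# it pretends every request succeeds with HTTP 200.
curl() { echo 200; return 0; }

http() { local path="$1"; shift; curl -s "$@" "http://127.0.0.1:9200${path}"; }

if [ -f "$START_FILE" ]; then
    # Later probes: the node answering at all (HTTP 200) is enough.
    [ "$(http /)" = "200" ] && echo "node healthy"
else
    # First probe: require the cluster to reach green before marking started.
    if http "/_cluster/health?wait_for_status=green&timeout=1s" --fail >/dev/null; then
        touch "$START_FILE"
        echo "cluster ready"
    else
        echo "cluster not ready"
    fi
fi
```

In the failing pod described below, the probe is stuck in the first (else) branch: the cluster never reaches green within the 1s timeout, so `START_FILE` is never created and the pod stays NotReady.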
Environment:
node.name: elasticsearch-master-0 (v1:metadata.name)
cluster.initial_master_nodes: elasticsearch-master-0,
discovery.seed_hosts: elasticsearch-master-headless
cluster.name: elasticsearch
network.host: 0.0.0.0
cluster.deprecation_indexing.enabled: false
node.data: false
node.ingest: false
node.master: true
node.ml: false
node.remote_cluster_client: false
discovery.type: single-node
cluster.initial_master_nodes:
Mounts:
/usr/share/elasticsearch/data from elasticsearch-master (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9kpcw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
elasticsearch-master:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-master-elasticsearch-master-0
ReadOnly: false
kube-api-access-9kpcw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
Events:
  Type     Reason     Age               From     Message
  ----     ------     ----              ----     -------
  Warning  Unhealthy  9s (x3 over 30s)  kubelet  Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" ) Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
elasticsearch-master-0 0/1 Running 0 55s
Hello,
I followed the README along with a video on YouTube. After installing and running "kubectl get pods -n orc8r", all services are running except one, which gives the following message:
nms-nginx-proxy-7454448447-9kxzr 0/1 CrashLoopBackOff 14 (61s ago) 31m