Open itsecforu opened 3 years ago
nobody? :-(
Hi @itsecforu, I'd suggest running kubectl describe
on pod vault-0, and on the pvc data-vault-0 to see if the Events say why it's still in a pending state.
Keep in mind that uninstalling the helm chart doesn't delete pvc's, so you may want to manually delete those to start fresh:
kubectl delete pvc data-vault-0 data-vault-1 data-vault-2
I'd also encourage you to bring this type of question to our discuss forum, since it'll be seen by a wider audience there.
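As a concrete sketch (assuming the chart was installed into the `vault` namespace, as in the output below), the PVC's Events section usually states the binding failure directly:

```
kubectl describe pvc data-vault-0 -n vault
kubectl get events -n vault --sort-by=.lastTimestamp
```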
kubectl describe pod vault-0 -n vault
Name: vault-0
Namespace: vault
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-86d855d95d
helm.sh/chart=vault-0.8.0
statefulset.kubernetes.io/pod-name=vault-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/vault
Containers:
vault:
Image: vault:1.5.4
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfi g.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.h cl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageco nfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageco nfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /t mp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storag econfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfi g.hcl
Readiness: exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeou t=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault (v1:metadata.namespace)
VAULT_ADDR: http://127.0.0.1:8200
VAULT_API_ADDR: http://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
HOME: /home/vault
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-99g68 (ro)
/vault/config from config (rw)
/vault/data from data (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-99g68:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-99g68
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6d default-scheduler 0/7 nodes are available: 7 pod has unbound immediate PersistentVolumeClaims.
kubectl get pvc -n vault
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-vault-0 Pending 60d
data-vault-1 Bound vault 10Gi RWO local-storage 6d
data-vault-2 Bound consul1 20Gi RWO local-storage 6d
After deleting data-vault-0, the pod is still pending.
What do I need to set up in values?
Please help me install this cluster correctly.
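For the values question: a minimal sketch, assuming the vault-helm chart's `server.dataStorage` settings and that you want the claims to use your `local-storage` class (the class the bound PVCs above are already using):

```yaml
# values.yaml fragment (sketch) -- point the data PVCs at an existing StorageClass
server:
  dataStorage:
    enabled: true
    size: 10Gi
    storageClass: local-storage
```

Note that changing the storage class still only creates the *claims*; with a local-storage class there must also be a matching PersistentVolume for each claim to bind to.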
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6d default-scheduler 0/7 nodes are available: 7 pod has unbound immediate PersistentVolumeClaims
This tells you that for some reason the PersistentVolumeClaim cannot be bound. This doesn't have anything to do with Vault; it's an issue with your cluster's setup for PersistentVolumeClaims.
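Since the other claims are bound to PVs in a `local-storage` class, the likely cause is that no matching PersistentVolume exists for data-vault-0. Local PVs are not dynamically provisioned, so one has to be created by hand. A hypothetical example (the PV name, path, capacity, and node name are placeholders for your environment):

```yaml
# Hypothetical local PV for data-vault-0 -- adjust path, size, and node name
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vault0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vault0        # directory must already exist on the node
  nodeAffinity:                    # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node-name>
```

After applying a PV like this (and recreating the PVC so it requests the same storage class), the claim should bind and the pod should schedule.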
Hey! How do I correctly set up a PV for the vault-0 pod in HA mode?
I get: