YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: lustre-csi-pv
  labels:
    lustre-pvname: lustre-csi-pv
spec:
  capacity:
    storage: 300Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  csi:
    driver: lustreplugin.csi.h3c.com
    # set volumeHandle to the same value as the PV name
    volumeHandle: lustre-csi-pv
    volumeAttributes:
      server: 186.31.29.29@tcp
      subPath: /lustre2
      options: lustre
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lustre-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 300Gi
  selector:
    matchLabels:
      lustre-pvname: lustre-csi-pv
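Since the first event below is "pod has unbound PersistentVolumeClaims", the binding status of this claim is relevant; it can be checked with standard commands like these (the names and namespace are taken from the manifests above):

kubectl get pv lustre-csi-pv
kubectl get pvc lustre-pvc -n kube-system
kubectl describe pvc lustre-pvc -n kube-system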
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-lustre
  labels:
    app: nginx
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
      serviceAccount: lustre-node-sa
      serviceAccountName: lustre-node-sa
      containers:
      volumes:
        - name: lustre-pvc
          persistentVolumeClaim:
            claimName: lustre-pvc
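For the "context deadline exceeded" failure, one thing that may be worth checking is whether the Lustre CSI plugin pods (the lustreplugin.csi.h3c.com driver referenced in the PV, running under the lustre-node-sa service account) are healthy on node1. Assuming the plugin is deployed in kube-system, a generic check would be:

kubectl get pods -n kube-system -o wide | grep -i lustre
kubectl logs -n kube-system <lustre-node-plugin-pod-on-node1>

Here <lustre-node-plugin-pod-on-node1> is a placeholder for whatever pod name the first command shows scheduled on node1.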
error:

Warning  FailedScheduling  2m (x2 over 2m)  default-scheduler  pod has unbound PersistentVolumeClaims
Normal   Scheduled         2m               default-scheduler  Successfully assigned default/deployment-lustre-5ffb8566b7-d2wwx to node1
Normal   Pulled            2m               kubelet, node1     Container image "os-harbor-svc.default.svc.cloudos:443/helm/nginx:1.14-alpine" already present on machine
Normal   Created           2m               kubelet, node1     Created container
Warning  Failed            53s              kubelet, node1     Error: context deadline exceeded
oc get node
NAME    STATUS     ROLES                  AGE   VERSION
node1   NotReady   compute,infra,master   60d   v1.11.0+d4cacc0
node2   Ready      compute,infra,master   60d   v1.11.0+d4cacc0
node3   Ready      compute,infra,master   60d   v1.11.0+d4cacc0
The status of node1 keeps switching between Ready and NotReady, and I don't know why. What could be causing this?
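The usual places to look for the cause of a Ready/NotReady flap are the node conditions and the kubelet log on that node, for example (the journalctl unit name is an assumption; on OpenShift 3.x the kubelet typically runs as atomic-openshift-node, on plain Kubernetes as kubelet):

oc describe node node1
ssh node1 journalctl -u atomic-openshift-node --since "1 hour ago"

The first command shows the Conditions section (MemoryPressure, DiskPressure, Ready, and the reason for the last transition); the second shows what the kubelet itself reports around the time the node flips.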