kubernetes-retired / external-storage

[EOL] External storage plugins, provisioners, and helper libraries
Apache License 2.0

pvc pending state #1257

Closed · itsecforu · closed 4 years ago

itsecforu commented 4 years ago

I have a k8s cluster and an external Ceph cluster, but my PVC is still stuck in Pending.

```
kubectl get pvc myclaim
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Pending                                      fast-rbd       9m49s
```
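For a quick look at why the claim is stuck, describing it normally surfaces the provisioner events behind the Pending state; a minimal check, assuming the claim lives in kube-system as the provisioner logs below suggest:

```sh
# the Events section at the bottom should repeat the provisioner's
# 'ProvisioningFailed' reason for the Pending state
kubectl -n kube-system describe pvc myclaim
```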

fast-rbd:

{ "kind": "StorageClass", "apiVersion": "storage.k8s.io/v1", "metadata": { "name": "fast-rbd", "selfLink": "/apis/storage.k8s.io/v1/storageclasses/fast-rbd", "uid": "a576e2e8-8227-455e-a021-1763e2f320c8", "resourceVersion": "10239043", "creationTimestamp": "2019-11-15T13:00:17Z" }, "provisioner": "ceph.com/rbd", "parameters": { "adminId": "admin", "adminSecretName": "ceph-secret", "adminSecretNamespace": "kube-system", "imageFeatures": "layering", "imageFormat": "2", "monitors": "<monitor-1-ip>:6789, <monitor-2-ip>:6789, <monitor-3-ip>:6789", "pool": "kube", "userId": "kube", "userSecretName": "ceph-secret-kube", "userSecretNamespace": "kube-system" }, "reclaimPolicy": "Delete", "volumeBindingMode": "Immediate" }

rbd-provisioner:

{ "kind": "Pod", "apiVersion": "v1", "metadata": { "name": "rbd-provisioner-667754f947-h4f5d", "generateName": "rbd-provisioner-667754f947-", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/pods/rbd-provisioner-667754f947-h4f5d", "uid": "7cf026eb-910a-4e7e-8cf7-fab4d1755a2d", "resourceVersion": "9408083", "creationTimestamp": "2019-11-11T14:15:53Z", "labels": { "app": "rbd-provisioner", "pod-template-hash": "667754f947" }, "ownerReferences": [ { "apiVersion": "apps/v1", "kind": "ReplicaSet", "name": "rbd-provisioner-667754f947", "uid": "93c166df-4985-4150-83d1-2c389714c877", "controller": true, "blockOwnerDeletion": true } ] }, "spec": { "volumes": [ { "name": "rbd-provisioner-token-7llh7", "secret": { "secretName": "rbd-provisioner-token-7llh7", "defaultMode": 420 } } ], "containers": [ { "name": "rbd-provisioner", "image": "quay.io/external_storage/rbd-provisioner:latest", "env": [ { "name": "PROVISIONER_NAME", "value": "ceph.com/rbd" } ], "resources": {}, "volumeMounts": [ { "name": "rbd-provisioner-token-7llh7", "readOnly": true, "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "Always" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "serviceAccountName": "rbd-provisioner", "serviceAccount": "rbd-provisioner", "nodeName": "worker3", "securityContext": {}, "schedulerName": "default-scheduler", "tolerations": [ { "key": "node.kubernetes.io/not-ready", "operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300 }, { "key": "node.kubernetes.io/unreachable", "operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300 } ], "priority": 0, "enableServiceLinks": true }, "status": { "phase": "Running", "conditions": [ { "type": "Initialized", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-11-11T14:15:53Z" }, { "type": "Ready", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-11-11T14:15:56Z" }, { "type": "ContainersReady", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-11-11T14:15:56Z" }, { "type": "PodScheduled", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-11-11T14:15:53Z" } ], "hostIP": "10.2.67.206", "podIP": "10.233.116.58", "startTime": "2019-11-11T14:15:53Z", "containerStatuses": [ { "name": "rbd-provisioner", "state": { "running": { "startedAt": "2019-11-11T14:15:56Z" } }, "lastState": {}, "ready": true, "restartCount": 0, "image": "quay.io/external_storage/rbd-provisioner:latest", "imageID": "docker-pullable://quay.io/external_storage/rbd-provisioner@sha256:94fd36b8625141b62ff1addfa914d45f7b39619e55891bad0294263ecd2ce09a", "containerID": "docker://32d65c0533496f87bc723df6701a6eb6eada6019e1d09adc3ea53f4ec354f74b" } ], "qosClass": "BestEffort" } }

logs:

```
I1115 13:04:08.903590 1 controller.go:987] provision "kube-system/myclaim" class "fast-rbd": started
I1115 13:04:08.919257 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"myclaim", UID:"6b27c7e6-af0e-4c4b-8db6-987e72b29c9c", APIVersion:"v1", ResourceVersion:"10239058", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "kube-system/myclaim"
W1115 13:04:08.924567 1 controller.go:746] Retrying syncing claim "kube-system/myclaim" because failures 4 < threshold 15
E1115 13:04:08.924626 1 controller.go:761] error syncing claim "kube-system/myclaim": failed to provision volume with StorageClass "fast-rbd": missing Ceph monitors
I1115 13:04:08.924854 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"myclaim", UID:"6b27c7e6-af0e-4c4b-8db6-987e72b29c9c", APIVersion:"v1", ResourceVersion:"10239058", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "fast-rbd": missing Ceph monitors
```

It looks like the monitors are reported as missing, but I do have them in the YAML I used:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.2.67.202:6789, 10.2.67.211:6789, 10.2.67.212:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
```
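One thing worth double-checking (an assumption, not confirmed in the thread): the StorageClass JSON dumped above still shows the `<monitor-N-ip>` placeholders, so the object actually stored in the cluster may not match this YAML. A quick way to print only the monitors parameter the API server holds:

```sh
# show the monitors value of the live StorageClass
kubectl get storageclass fast-rbd -o jsonpath='{.parameters.monitors}{"\n"}'
```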

Please help me!

thx

kifeo commented 4 years ago

I can see 'missing Ceph monitors' in the logs. In the fast-rbd StorageClass JSON above, the monitors parameter still contains the `<monitor-1-ip>` style placeholders rather than real addresses.

Is this intended? (I wonder, since you gave the actual IPs in the description.)
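If the stored object does still carry the placeholders, simply re-applying the corrected YAML may not help, since Kubernetes rejects updates to the `parameters` of an existing StorageClass; the usual route is to delete and recreate it. A minimal sketch, assuming the corrected manifest is saved as `fast-rbd.yaml` (a hypothetical filename):

```sh
# StorageClass parameters are immutable, so recreate the object
kubectl delete storageclass fast-rbd
kubectl apply -f fast-rbd.yaml   # manifest with the real monitor IPs
```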

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-incubator/external-storage/issues/1257#issuecomment-646617764):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.