Closed jbpratt closed 2 years ago
@jbpratt try applying the scc from here https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator-openshift.yaml.
Thanks for the reply @Rakshith-R, it seems to still be happening. Let me reset things to a fresh state and try installing manually via the manifests. By chance, do you know what resource the error `resource name may not be empty` is referring to?
Hmm, it seems I'm still getting the same results on a slightly different environment (MicroShift on a Fedora 34 Server aarch64 host, not in Podman), using the files from `cluster/examples/kubernetes/ceph/`. The operator starts without issue, but things fail when trying to create a cluster. cc @rootfs, as this was working for https://github.com/redhat-et/ushift-workload, so this may very well be a small mistake on my part.
examples/kubernetes/ceph
❯ oc create -f crds.yaml -f common.yaml -f operator-openshift.yaml
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/volumereplicationclasses.replication.storage.openshift.io created
customresourcedefinition.apiextensions.k8s.io/volumereplications.replication.storage.openshift.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
namespace/rook-ceph created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-system created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
podsecuritypolicy.policy/00-rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rook-ceph-purge-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created
serviceaccount/rook-ceph-purge-osd created
securitycontextconstraints.security.openshift.io/rook-ceph created
securitycontextconstraints.security.openshift.io/rook-ceph-csi created
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
examples/kubernetes/ceph
❯ oc create -f https://raw.githubusercontent.com/redhat-et/ushift-workload/master/rook/cluster-test.yaml
configmap/rook-config-override created
cephcluster.ceph.rook.io/my-cluster created
examples/kubernetes/ceph
❯ oc logs pod/csi-cephfsplugin-wqd67 -n rook-ceph driver-registrar
I0919 11:18:04.017245 1 main.go:113] Version: v2.2.0
I0919 11:18:04.025639 1 node_register.go:52] Starting Registration Server at: /registration/rook-ceph.cephfs.csi.ceph.com-reg.sock
I0919 11:18:04.026555 1 node_register.go:61] Registration Server started at: /registration/rook-ceph.cephfs.csi.ceph.com-reg.sock
I0919 11:18:04.027406 1 node_register.go:83] Skipping healthz server because HTTP endpoint is set to: ""
I0919 11:18:05.850063 1 main.go:80] Received GetInfo call: &InfoRequest{}
I0919 11:18:06.520343 1 main.go:90] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty,}
E0919 11:18:06.520484 1 main.go:92] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty, restarting registration container.
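A hedged note on the root-cause line above: the registrar updates the Node object using a node name it is handed via the pod spec, typically through a Downward API env var; if that value resolves to an empty string, the update fails with exactly `resource name may not be empty`. A sketch of the kind of container spec involved (the env var and container names here are assumptions for illustration, not copied from the Rook manifests):

```yaml
# Hypothetical fragment of a CSI plugin DaemonSet pod spec.
# If spec.nodeName resolves to an empty string here, driver
# registration fails with "resource name may not be empty".
containers:
  - name: driver-registrar
    env:
      - name: KUBE_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
```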
@jbpratt are you still facing this issue, or is it resolved?
Hi @yati1998, I believe I am. I can try to reproduce it again this afternoon to be sure; I haven't found a fix at least.
Going ahead and closing this out since I don't have time to reproduce it. If someone else can, or I can at a later date, we can re-open this or file a new issue :smiley_cat:
Describe the bug
After installing and creating a test cluster, all pods come up except for the driver-registrar containers for rbd and fs plugins.
Environment details
Kernel version: 5.13.12-200.fc34.aarch64
Mounter used for mounting PVC (for cephfs it's fuse or kernel; for rbd it's krbd or rbd-nbd): unsure
Steps to reproduce
I'm running quite an unconventional setup, so it may not be the easiest to reproduce: MicroShift in Podman on a Raspberry Pi 4 running Fedora CoreOS 34. Once it's up and running, I install Ceph via Helm, then try to create a cluster with this manifest, but the plugin containers never register. I'm still learning Ceph, so I imagine this is just a mistake somewhere on my part; sorry this isn't the easiest to reproduce. One thing to note: I had to remove the arg for pids to work around a different cgroups issue I'm having.
Actual results
All pods start except the two plugins; they both go into a back-off state with the same error message.
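When triaging the back-off, it helps to strip the long registrar error down to its trailing `caused by:` clause; a small sketch (the sample line is copied verbatim from the registrar logs in this issue):

```shell
# Pull the root cause out of a captured node-driver-registrar error line.
# The sample text is taken from the logs above.
line='RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty,'
printf '%s\n' "$line" | grep -o 'caused by: [^,]*'
# prints: caused by: resource name may not be empty
```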
Expected behavior
All pods to start successfully :)
Logs
I was unsure of whether this should be opened here or in https://github.com/kubernetes-csi/node-driver-registrar so I'm happy to reopen it there if needed.