ceph / ceph-csi

CSI driver for Ceph
Apache License 2.0

error starting driver-registrar: resource name may not be empty #2515

Closed: jbpratt closed this 2 years ago

jbpratt commented 3 years ago

Describe the bug

After installing and creating a test cluster, all pods come up except for the driver-registrar containers of the rbd and cephfs plugins.

Environment details

Steps to reproduce

I'm running quite an unconventional setup, so it may not be the easiest to reproduce: microshift in Podman on a Raspberry Pi 4 running Fedora CoreOS 34. Once it is up and running, I install Rook via Helm and then try to create a cluster with this manifest, but the plugin containers never register. I'm still learning Ceph, so I imagine this is just a mistake somewhere on my part; sorry this isn't easy to reproduce. One thing to note: I had to remove the arg for pids to work around a separate cgroups issue I'm having.

Actual results

All pods start except the two plugin pods; both go into a back-off state with the same error message.

Expected behavior

All pods to start successfully :)

Logs

Name:         rook-ceph
Namespace:    rook-ceph
Labels:       kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  kustomize.toolkit.fluxcd.io/checksum: 778a8ac8f3a67d3e67d78032dc525ea406b8a5ae
API Version:  helm.toolkit.fluxcd.io/v2beta1
Kind:         HelmRelease
Metadata:
  Creation Timestamp:  2021-09-16T02:13:39Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  3
...
  Resource Version:  4602
  Self Link:         /apis/helm.toolkit.fluxcd.io/v2beta1/namespaces/rook-ceph/helmreleases/rook-ceph
  UID:               eeb26bed-f466-49e7-9c8e-7097f12748da
Spec:
  Chart:
    Spec:
      Chart:  rook-ceph
      Source Ref:
        Kind:       HelmRepository
        Name:       rook-ceph-charts
        Namespace:  flux-system
      Version:      v1.7.3
  Interval:         5m
  Values:
    Csi:
      Plugin Tolerations:
        Operator:  Exists
    Resources:
      Limits:
        Cpu:     1000m
        Memory:  256Mi
      Requests:
        Cpu:     100m
        Memory:  128Mi
Status:
  Conditions:
    Last Transition Time:          2021-09-16T02:21:06Z
    Message:                       Release reconciliation succeeded
    Reason:                        ReconciliationSucceeded
    Status:                        True
    Type:                          Ready
    Last Transition Time:          2021-09-16T02:21:06Z
    Message:                       Helm upgrade succeeded
    Reason:                        UpgradeSucceeded
    Status:                        True
    Type:                          Released
  Helm Chart:                      flux-system/rook-ceph-rook-ceph
  Last Applied Revision:           v1.7.3
  Last Attempted Revision:         v1.7.3
  Last Attempted Values Checksum:  cbce94c46a471c9154d3f03c00643de2a99664ba
  Last Release Revision:           2
  Observed Generation:             3
Events:                            <none>
❯ oc describe -n rook-ceph pod/rook-ceph-operator-65794c6857-hp866
Name:         rook-ceph-operator-65794c6857-hp866
Namespace:    rook-ceph
Priority:     0
Node:         kitkat/192.168.1.70
Start Time:   Wed, 15 Sep 2021 21:13:52 -0500
Labels:       app=rook-ceph-operator
              chart=rook-ceph-v1.7.3
              pod-template-hash=65794c6857
Annotations:  <none>
Status:       Running
IP:           10.42.0.12
IPs:
  IP:           10.42.0.12
Controlled By:  ReplicaSet/rook-ceph-operator-65794c6857
Containers:
  rook-ceph-operator:
    Container ID:  cri-o://bb1e3d0debf5df144e24eb965922f7dc7cf9802c47700df30744494768116319
    Image:         rook/ceph:v1.7.3
    Image ID:      docker.io/rook/ceph@sha256:15c6aecccbbacba1f04a80d0076380cf6e0207de68c30d17cc58e15161544f5c
    Port:          <none>
    Host Port:     <none>
    Args:
      ceph
      operator
    State:          Running
      Started:      Wed, 15 Sep 2021 21:16:36 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  256Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      ROOK_CURRENT_NAMESPACE_ONLY:               false
      ROOK_HOSTPATH_REQUIRES_PRIVILEGED:         false
      ROOK_LOG_LEVEL:                            INFO
      ROOK_ENABLE_SELINUX_RELABELING:            true
      ROOK_DISABLE_DEVICE_HOTPLUG:               false
      ROOK_CSI_ENABLE_RBD:                       true
      ROOK_CSI_ENABLE_CEPHFS:                    true
      CSI_ENABLE_CEPHFS_SNAPSHOTTER:             true
      CSI_ENABLE_RBD_SNAPSHOTTER:                true
      CSI_PLUGIN_PRIORITY_CLASSNAME:
      CSI_PROVISIONER_PRIORITY_CLASSNAME:
      CSI_ENABLE_OMAP_GENERATOR:                 false
      CSI_ENABLE_VOLUME_REPLICATION:             false
      CSI_RBD_FSGROUPPOLICY:                     ReadWriteOnceWithFSType
      CSI_CEPHFS_FSGROUPPOLICY:                  None
      ROOK_CSI_ENABLE_GRPC_METRICS:              false
      CSI_PLUGIN_TOLERATIONS:                    - operator: Exists
      CSI_FORCE_CEPHFS_KERNEL_CLIENT:            true
      ROOK_ENABLE_FLEX_DRIVER:                   false
      ROOK_ENABLE_DISCOVERY_DAEMON:              false
      ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS:        15
      ROOK_OBC_WATCH_OPERATOR_NAMESPACE:         true
      NODE_NAME:                                  (v1:spec.nodeName)
      POD_NAME:                                  rook-ceph-operator-65794c6857-hp866 (v1:metadata.name)
      POD_NAMESPACE:                             rook-ceph (v1:metadata.namespace)
      ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS:  5
    Mounts:
      /etc/ceph from default-config-dir (rw)
      /var/lib/rook from rook-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-system-token-fnxtl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  rook-config:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-config-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  rook-ceph-system-token-fnxtl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-system-token-fnxtl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
❯ oc describe pod/csi-cephfsplugin-pdtf8 -n rook-ceph
Name:         csi-cephfsplugin-pdtf8
Namespace:    rook-ceph
Priority:     0
Node:         kitkat/192.168.1.70
Start Time:   Thu, 16 Sep 2021 13:25:23 -0500
Labels:       app=csi-cephfsplugin
              contains=csi-cephfsplugin-metrics
              controller-revision-hash=69b9b45545
              pod-template-generation=2
Annotations:  <none>
Status:       Running
IP:           192.168.1.70
IPs:
  IP:           192.168.1.70
Controlled By:  DaemonSet/csi-cephfsplugin
Containers:
  driver-registrar:
    Container ID:  cri-o://29be84cf6356bb9a1c5247876ee0c9350e665f940cb4322157d76c9930e42975
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
    Image ID:      k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=0
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/rook-ceph.cephfs.csi.ceph.com/csi.sock
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 16 Sep 2021 17:08:07 -0500
      Finished:     Thu, 16 Sep 2021 17:08:09 -0500
    Ready:          False
    Restart Count:  48
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-csi-cephfs-plugin-sa-token-2fr5n (ro)
  csi-cephfsplugin:
    Container ID:  cri-o://35c9b4961c4e8fc4e79cbb841d8ae20b3d34df4e3604ab2183eca22f6a042379
    Image:         quay.io/cephcsi/cephcsi:v3.4.0
    Image ID:      quay.io/cephcsi/cephcsi@sha256:1a6b395ffed6e51b7b73a87694690283d91b782b56cfaafee447f263c68a55d9
    Port:          <none>
    Host Port:     <none>
    Args:
      --nodeid=$(NODE_ID)
      --type=cephfs
      --endpoint=$(CSI_ENDPOINT)
      --v=0
      --nodeserver=true
      --drivername=rook-ceph.cephfs.csi.ceph.com
      --metricsport=9091
      --forcecephkernelclient=true
      --metricspath=/metrics
      --enablegrpcmetrics=false
    State:          Running
      Started:      Thu, 16 Sep 2021 13:25:24 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      POD_IP:          (v1:status.podIP)
      NODE_ID:         (v1:spec.nodeName)
      POD_NAMESPACE:  rook-ceph (v1:metadata.namespace)
      CSI_ENDPOINT:   unix:///csi/csi.sock
    Mounts:
      /csi from plugin-dir (rw)
      /dev from host-dev (rw)
      /etc/ceph-csi-config/ from ceph-csi-config (rw)
      /lib/modules from lib-modules (ro)
      /run/mount from host-run-mount (rw)
      /sys from host-sys (rw)
      /tmp/csi/keys from keys-tmp-dir (rw)
      /var/lib/kubelet/plugins from csi-plugins-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-csi-cephfs-plugin-sa-token-2fr5n (ro)
  liveness-prometheus:
    Container ID:  cri-o://c753aae4d645a33bda498d922c7d2395dfcce691018069b24a4647574e2aae90
    Image:         quay.io/cephcsi/cephcsi:v3.4.0
    Image ID:      quay.io/cephcsi/cephcsi@sha256:1a6b395ffed6e51b7b73a87694690283d91b782b56cfaafee447f263c68a55d9
    Port:          <none>
    Host Port:     <none>
    Args:
      --type=liveness
      --endpoint=$(CSI_ENDPOINT)
      --metricsport=9081
      --metricspath=/metrics
      --polltime=60s
      --timeout=3s
    State:          Running
      Started:      Thu, 16 Sep 2021 13:25:25 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      CSI_ENDPOINT:  unix:///csi/csi.sock
      POD_IP:         (v1:status.podIP)
    Mounts:
      /csi from plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-csi-cephfs-plugin-sa-token-2fr5n (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/rook-ceph.cephfs.csi.ceph.com/
    HostPathType:  DirectoryOrCreate
  csi-plugins-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins
    HostPathType:  Directory
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  host-sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  host-dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:
  ceph-csi-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rook-ceph-csi-config
    Optional:  false
  keys-tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  host-run-mount:
    Type:          HostPath (bare host directory volume)
    Path:          /run/mount
    HostPathType:
  rook-csi-cephfs-plugin-sa-token-2fr5n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-csi-cephfs-plugin-sa-token-2fr5n
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason   Age                       From     Message
  ----     ------   ----                      ----     -------
  Warning  BackOff  2m59s (x1005 over 3h43m)  kubelet  Back-off restarting failed container
❯ oc logs -n rook-ceph pod/csi-rbdplugin-n8xxw driver-registrar
I0916 21:50:40.309156  597518 main.go:113] Version: v2.2.0
I0916 21:50:40.319548  597518 node_register.go:52] Starting Registration Server at: /registration/rook-ceph.rbd.csi.ceph.com-reg.sock
I0916 21:50:40.320633  597518 node_register.go:61] Registration Server started at: /registration/rook-ceph.rbd.csi.ceph.com-reg.sock
I0916 21:50:40.321406  597518 node_register.go:83] Skipping healthz server because HTTP endpoint is set to: ""
I0916 21:50:41.497245  597518 main.go:80] Received GetInfo call: &InfoRequest{}
I0916 21:50:42.158555  597518 main.go:90] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty,}
E0916 21:50:42.158677  597518 main.go:92] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty, restarting registration container.
❯ oc logs -n rook-ceph pod/csi-cephfsplugin-pdtf8 driver-registrar
I0916 21:47:24.299674       1 main.go:113] Version: v2.2.0
I0916 21:47:24.324566       1 node_register.go:52] Starting Registration Server at: /registration/rook-ceph.cephfs.csi.ceph.com-reg.sock
I0916 21:47:24.325553       1 node_register.go:61] Registration Server started at: /registration/rook-ceph.cephfs.csi.ceph.com-reg.sock
I0916 21:47:24.325958       1 node_register.go:83] Skipping healthz server because HTTP endpoint is set to: ""
I0916 21:47:25.453933       1 main.go:80] Received GetInfo call: &InfoRequest{}
I0916 21:47:26.105094       1 main.go:90] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty,}
E0916 21:47:26.105220       1 main.go:92] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty, restarting registration container.
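
For context on the error itself: `resource name may not be empty` is the generic apiserver/client-go rejection of a request whose object name is empty, and here it surfaces when kubelet tries to update the Node object with the CSI driver node info. One plausible culprit is an empty node name somewhere in that chain. On the registrar side, the node name is supplied through the Downward API; the sketch below is a hypothetical excerpt of how such a DaemonSet container is typically wired, not the exact Rook-generated manifest:

```yaml
# Hypothetical excerpt of a CSI plugin DaemonSet: the driver-registrar
# container receives the node name via a Downward API fieldRef. If
# spec.nodeName resolves to an empty string, the registration has no
# Node object name to work with.
containers:
  - name: driver-registrar
    image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
    args:
      - --v=0
      - --csi-address=/csi/csi.sock
      - --kubelet-registration-path=/var/lib/kubelet/plugins/rook-ceph.cephfs.csi.ceph.com/csi.sock
    env:
      - name: KUBE_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
```

The `(v1:spec.nodeName)` shown next to `KUBE_NODE_NAME` in the `oc describe` output above is how `describe` renders exactly this kind of fieldRef.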

I was unsure whether this should be opened here or in https://github.com/kubernetes-csi/node-driver-registrar, so I'm happy to reopen it there if needed.

Rakshith-R commented 3 years ago

@jbpratt try applying the SCC from https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator-openshift.yaml.

jbpratt commented 3 years ago

Thanks for the reply @Rakshith-R; it seems to still be happening. Let me reset things to a fresh state and try installing manually via the manifests. By chance, do you know which resource the error `resource name may not be empty` is referring to?
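
A few checks that might help narrow down which name is empty (assuming the `rook-ceph` namespace and the pod name from earlier in this thread; adjust to your environment). Since the failing call is kubelet updating the Node object, the node name is the obvious suspect:

```shell
# Does the cluster report a node name at all?
oc get nodes -o name

# Has any CSI driver managed to register on the node?
oc get csinode -o yaml

# Which fieldRef feeds KUBE_NODE_NAME into the registrar container?
# (Pod name taken from this thread; substitute your own.)
oc get pod -n rook-ceph csi-cephfsplugin-pdtf8 \
  -o jsonpath='{.spec.containers[0].env[?(@.name=="KUBE_NODE_NAME")].valueFrom.fieldRef.fieldPath}'
```

If `oc get nodes` shows a node but the registrar still fails, the mismatch may be between the name kubelet registered under and the name it uses when patching the Node object.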

jbpratt commented 3 years ago

Hmm, it seems I'm still getting the same results in a slightly different environment (microshift on a Fedora 34 server, aarch64, not in Podman), using the files from cluster/examples/kubernetes/ceph/. The operator starts without issue, but things then fail when trying to create a cluster. cc @rootfs, as this was working for https://github.com/redhat-et/ushift-workload, so this may well be a small mistake on my part.

examples/kubernetes/ceph
❯ oc create -f crds.yaml -f common.yaml -f operator-openshift.yaml
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/volumereplicationclasses.replication.storage.openshift.io created
customresourcedefinition.apiextensions.k8s.io/volumereplications.replication.storage.openshift.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
namespace/rook-ceph created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-system created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
podsecuritypolicy.policy/00-rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rook-ceph-purge-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created
serviceaccount/rook-ceph-purge-osd created
securitycontextconstraints.security.openshift.io/rook-ceph created
securitycontextconstraints.security.openshift.io/rook-ceph-csi created
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created

examples/kubernetes/ceph
❯ oc create -f https://raw.githubusercontent.com/redhat-et/ushift-workload/master/rook/cluster-test.yaml
configmap/rook-config-override created
cephcluster.ceph.rook.io/my-cluster created

examples/kubernetes/ceph
❯ oc logs pod/csi-cephfsplugin-wqd67 -n rook-ceph driver-registrar
I0919 11:18:04.017245       1 main.go:113] Version: v2.2.0
I0919 11:18:04.025639       1 node_register.go:52] Starting Registration Server at: /registration/rook-ceph.cephfs.csi.ceph.com-reg.sock
I0919 11:18:04.026555       1 node_register.go:61] Registration Server started at: /registration/rook-ceph.cephfs.csi.ceph.com-reg.sock
I0919 11:18:04.027406       1 node_register.go:83] Skipping healthz server because HTTP endpoint is set to: ""
I0919 11:18:05.850063       1 main.go:80] Received GetInfo call: &InfoRequest{}
I0919 11:18:06.520343       1 main.go:90] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty,}
E0919 11:18:06.520484       1 main.go:92] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: resource name may not be empty, restarting registration container.
yati1998 commented 3 years ago

@jbpratt are you still facing this issue, or is it resolved?

jbpratt commented 3 years ago

Hi @yati1998, I believe I am; I can try to reproduce it again this afternoon to be sure. I haven't found a fix, at least.

jbpratt commented 2 years ago

Going ahead and closing this out since I don't have time to reproduce it. If someone else can, or I can at a later date, we can reopen this or file a new issue :smiley_cat: