GoogleCloudPlatform / gcs-fuse-csi-driver

The Google Cloud Storage FUSE Container Storage Interface (CSI) Plugin.
Apache License 2.0

Driver Not Working - Dynamic provisioning is not supported #8

Closed: vsoch closed this issue 1 year ago

vsoch commented 1 year ago

Hi again! I'm trying to debug why my pods are stuck in Pending. Here are some logs and configs that might shed light on it.

Driver logs seem OK:

$ kubectl logs -n gcs-fuse-csi-driver gcs-fuse-csi-driver-webhook-569899b854-w9sm7 -f
I0407 21:56:37.409773       1 main.go:54] Running Google Cloud Storage FUSE CSI driver admission webhook version v0.1.2-0-gd9e3bdd, sidecar container image jiaxun/gcs-fuse-csi-driver-sidecar-mounter:v0.1.2-0-gd9e3bdd
I0407 21:56:37.409994       1 metrics.go:89] Emit component_version metric with value v999.999.999
I0407 21:56:37.410024       1 main.go:71] Setting up manager.
I0407 21:56:37.410320       1 metrics.go:68] Metric server listening at ":22032"
I0407 21:56:38.216399       1 main.go:90] Setting up webhook server.
I0407 21:56:38.216461       1 main.go:95] Registering webhooks to the webhook server.
I0407 21:56:38.216703       1 main.go:103] Starting manager.
I0407 22:25:54.297692       1 mutatingwebhook.go:109] mutating Pod: Name "", GenerateName "flux-sample-0-", Namespace "flux-operator", CPU limit "250m", memory limit "256Mi", ephemeral storage limit "5Gi"
I0407 22:25:54.404581       1 mutatingwebhook.go:109] mutating Pod: Name "", GenerateName "flux-sample-1-", Namespace "flux-operator", CPU limit "250m", memory limit "256Mi", ephemeral storage limit "5Gi"

I'm not sure if this "cannot create temp dir ... read-only file system" message is the bug:

$ kubectl logs -n gcs-fuse-csi-driver gcsfusecsi-node-bxxwm -c gcs-fuse-csi-driver 
I0407 21:56:31.496608       1 clientset.go:51] using in-cluster kubeconfig
I0407 21:56:31.499001       1 metadata.go:51] got empty identityPool, constructing the identityPool using projectID
I0407 21:56:31.499026       1 metadata.go:56] got empty identityProvider, constructing the identityProvider using the gke-metadata-server flags
I0407 21:56:31.510894       1 mount_linux.go:275] Cannot create temp dir to detect safe 'not mounted' behavior: mkdir /tmp/kubelet-detect-safe-umount3354328308: read-only file system
I0407 21:56:31.510952       1 gcs_fuse_driver.go:110] Enabling volume access mode: SINGLE_NODE_WRITER
I0407 21:56:31.510985       1 gcs_fuse_driver.go:110] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0407 21:56:31.510995       1 gcs_fuse_driver.go:110] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0407 21:56:31.511001       1 gcs_fuse_driver.go:110] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0407 21:56:31.511037       1 gcs_fuse_driver.go:110] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0407 21:56:31.511171       1 main.go:112] Running Google Cloud Storage FUSE CSI driver version v0.1.2-0-gd9e3bdd, sidecar container image jiaxun/gcs-fuse-csi-driver-sidecar-mounter:v0.1.2-0-gd9e3bdd
I0407 21:56:31.511187       1 gcs_fuse_driver.go:190] Running driver: gcsfuse.csi.storage.gke.io
I0407 21:56:31.511334       1 server.go:75] Start listening with scheme unix, addr /csi/csi.sock
I0407 21:56:31.511620       1 server.go:97] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I0407 21:56:38.483150       1 utils.go:82] /csi.v1.Identity/GetPluginInfo called with request: 
I0407 21:56:38.483173       1 utils.go:87] /csi.v1.Identity/GetPluginInfo succeeded with response: name:"gcsfuse.csi.storage.gke.io" vendor_version:"v0.1.2-0-gd9e3bdd" 
I0407 21:56:39.164273       1 utils.go:82] /csi.v1.Node/NodeGetInfo called with request: 
I0407 21:56:39.164345       1 utils.go:87] /csi.v1.Node/NodeGetInfo succeeded with response: node_id:"gke-flux-cluster-default-pool-a53eb99b-55kb" 

My pod and the sidecar container injected into it have no logs (the pod is Pending):

$ kubectl logs -n flux-operator flux-sample-0-x265q -c flux-sample -f
$ kubectl logs -n flux-operator flux-sample-0-x265q -c gke-gcsfuse-sidecar -f

They're stuck in Pending:

$ kubectl get -n flux-operator pods
NAME                         READY   STATUS      RESTARTS   AGE
flux-sample-0-x265q          0/2     Pending     0          4m47s
flux-sample-1-nwjwx          0/2     Pending     0          4m47s
flux-sample-cert-generator   0/1     Completed   0          4m47s

The PVC seems OK; it's waiting:

$ kubectl get -n flux-operator pvc
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
data   Pending                                      gcs-fuse-class   18m
$ kubectl describe -n flux-operator pvc
Name:          data
Namespace:     flux-operator
StorageClass:  gcs-fuse-class
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: gcsfuse.csi.storage.gke.io
               volume.kubernetes.io/selected-node: gke-flux-cluster-default-pool-a53eb99b-55kb
               volume.kubernetes.io/storage-provisioner: gcsfuse.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       flux-sample-0-x265q
               flux-sample-1-nwjwx
Events:
  Type    Reason                Age                    From                         Message
  ----    ------                ----                   ----                         -------
  Normal  WaitForFirstConsumer  8m32s (x42 over 18m)   persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  ExternalProvisioning  3m32s (x21 over 8m2s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "gcsfuse.csi.storage.gke.io" or manually created by system administrator

This is how I created it:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  namespace: flux-operator
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: gcs-fuse-class
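
For completeness, the gcs-fuse-class StorageClass it references looks roughly like this (reconstructed from the provisioner annotation on the PVC above, so treat the exact fields as approximate):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gcs-fuse-class
provisioner: gcsfuse.csi.storage.gke.io  # matches the storage-provisioner annotation on the PVC
volumeBindingMode: WaitForFirstConsumer  # consistent with the WaitForFirstConsumer event above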

Here is a pending pod:

$ kubectl describe -n flux-operator pods flux-sample-0-x265q 
Name:           flux-sample-0-x265q
Namespace:      flux-operator
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/name=flux-sample
                controller-uid=a4528f66-7a18-45ba-866a-5e9bfecf7a48
                job-name=flux-sample
                namespace=flux-operator
Annotations:    batch.kubernetes.io/job-completion-index: 0
                container.seccomp.security.alpha.kubernetes.io/gke-gcsfuse-sidecar: runtime/default
                gke-gcsfuse/volumes: true
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  Job/flux-sample
Containers:
  gke-gcsfuse-sidecar:
    Image:      jiaxun/gcs-fuse-csi-driver-sidecar-mounter:v0.1.2-0-gd9e3bdd
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
    Limits:
      cpu:                250m
      ephemeral-storage:  5Gi
      memory:             256Mi
    Requests:
      cpu:                250m
      ephemeral-storage:  5Gi
      memory:             256Mi
    Environment:          <none>
    Mounts:
      /gcsfuse-tmp from gke-gcsfuse-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxmjl (ro)
  flux-sample:
    Image:      ghcr.io/rse-ops/atacseq:app-latest
    Port:       5000/TCP
    Host Port:  0/TCP
    Command:
      /bin/bash
      /flux_operator/wait-0.sh

    Environment:
      JOB_COMPLETION_INDEX:   (v1:metadata.annotations['batch.kubernetes.io/job-completion-index'])
    Mounts:
      /etc/flux/config from flux-sample-flux-config (ro)
      /flux_operator/ from flux-sample-entrypoint (ro)
      /mnt/curve/ from flux-sample-curve-mount (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxmjl (ro)
      /workflow from data (rw)
Volumes:
  gke-gcsfuse-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  flux-sample-flux-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      flux-sample-flux-config
    Optional:  false
  flux-sample-entrypoint:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      flux-sample-entrypoint
    Optional:  false
  flux-sample-curve-mount:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      flux-sample-curve-mount
    Optional:  false
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data
    ReadOnly:   false
  kube-api-access-qxmjl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

The CSIDriver:

$ kubectl describe -n gcs-fuse-csi-driver CSIDriver gcsfuse.csi.storage.gke.io 
Name:         gcsfuse.csi.storage.gke.io
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2023-04-07T21:56:26Z
  Managed Fields:
    API Version:  storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:attachRequired:
        f:fsGroupPolicy:
        f:podInfoOnMount:
        f:requiresRepublish:
        f:storageCapacity:
        f:tokenRequests:
        f:volumeLifecycleModes:
          .:
          v:"Ephemeral":
          v:"Persistent":
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2023-04-07T21:56:26Z
  Resource Version:  19691
  UID:               5c8396d0-e4bc-41ff-865b-27cd71ad0c02
Spec:
  Attach Required:     false
  Fs Group Policy:     ReadWriteOnceWithFSType
  Pod Info On Mount:   true
  Requires Republish:  true
  Storage Capacity:    false
  Token Requests:
    Audience:  llnl-flux.svc.id.goog
  Volume Lifecycle Modes:
    Persistent
    Ephemeral
Events:  <none>

The Deployment:

$ kubectl describe -n gcs-fuse-csi-driver Deployment
Name:                   gcs-fuse-csi-driver-webhook
Namespace:              gcs-fuse-csi-driver
CreationTimestamp:      Fri, 07 Apr 2023 15:56:25 -0600
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=gcs-fuse-csi-driver-webhook
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       app=gcs-fuse-csi-driver-webhook
  Annotations:  seccomp.security.alpha.kubernetes.io/pod: runtime/default
  Containers:
   gcs-fuse-csi-driver-webhook:
    Image:       jiaxun/gcs-fuse-csi-driver-webhook:v0.1.2-0-gd9e3bdd
    Ports:       22030/TCP, 22031/TCP, 22032/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Args:
      --sidecar-cpu-limit=250m
      --sidecar-memory-limit=256Mi
      --sidecar-ephemeral-storage-limit=5Gi
      --sidecar-image=$(SIDECAR_IMAGE)
      --sidecar-image-pull-policy=$(SIDECAR_IMAGE_PULL_POLICY)
      --cert-dir=/etc/tls-certs
      --port=22030
      --health-probe-bind-address=:22031
      --http-endpoint=:22032
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Liveness:  http-get http://:22031/readyz delay=30s timeout=15s period=30s #success=1 #failure=3
    Environment:
      SIDECAR_IMAGE_PULL_POLICY:  IfNotPresent
      SIDECAR_IMAGE:              <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
      GKE_GCSFUSECSI_VERSION:     v999.999.999
    Mounts:
      /etc/tls-certs from gcs-fuse-csi-driver-webhook-certs (ro)
  Volumes:
   gcs-fuse-csi-driver-webhook-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gcs-fuse-csi-driver-webhook-secret
    Optional:    false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   gcs-fuse-csi-driver-webhook-569899b854 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  42m   deployment-controller  Scaled up replica set gcs-fuse-csi-driver-webhook-569899b854 to 1

It looks like there might be liveness probe failures? The DaemonSet looks OK:

$ kubectl describe -n gcs-fuse-csi-driver DaemonSet
Name:           gcsfusecsi-node
Selector:       k8s-app=gcs-fuse-csi-driver
Node-Selector:  kubernetes.io/os=linux
Labels:         <none>
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 4
Current Number of Nodes Scheduled: 4
Number of Nodes Scheduled with Up-to-date Pods: 4
Number of Nodes Scheduled with Available Pods: 4
Number of Nodes Misscheduled: 0
Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=gcs-fuse-csi-driver
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: runtime/default
  Service Account:  gcsfusecsi-node-sa
  Containers:
   gcs-fuse-csi-driver:
    Image:      jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
      --endpoint=unix:/csi/csi.sock
      --nodeid=$(KUBE_NODE_NAME)
      --node=true
      --sidecar-image=$(SIDECAR_IMAGE)
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     5m
      memory:  10Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
      SIDECAR_IMAGE:   <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from kubelet-dir (rw)
   csi-driver-registrar:
    Image:      registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    Limits:
      cpu:     50m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
  Volumes:
   registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
   kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  Directory
   socket-dir:
    Type:               HostPath (bare host directory volume)
    Path:               /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/
    HostPathType:       DirectoryOrCreate
  Priority Class Name:  csi-gcp-gcs-node
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  43m   daemonset-controller  Created pod: gcsfusecsi-node-bfwd4
  Normal  SuccessfulCreate  43m   daemonset-controller  Created pod: gcsfusecsi-node-bxxwm
  Normal  SuccessfulCreate  43m   daemonset-controller  Created pod: gcsfusecsi-node-m9d6n
  Normal  SuccessfulCreate  43m   daemonset-controller  Created pod: gcsfusecsi-node-tr6wt

and the pods seem OK:

$ kubectl describe -n gcs-fuse-csi-driver Pods
Name:         gcs-fuse-csi-driver-webhook-569899b854-w9sm7
Namespace:    gcs-fuse-csi-driver
Priority:     0
Node:         gke-flux-cluster-default-pool-a53eb99b-6ns7/10.128.0.26
Start Time:   Fri, 07 Apr 2023 15:56:26 -0600
Labels:       app=gcs-fuse-csi-driver-webhook
              pod-template-hash=569899b854
Annotations:  cni.projectcalico.org/containerID: c9a83bf8d5268b69fe0cccc92ae85980eda576f1d4b0a948afb66a720e3da1ce
              cni.projectcalico.org/podIP: 10.116.0.5/32
              cni.projectcalico.org/podIPs: 10.116.0.5/32
              seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IP:           10.116.0.5
IPs:
  IP:           10.116.0.5
Controlled By:  ReplicaSet/gcs-fuse-csi-driver-webhook-569899b854
Containers:
  gcs-fuse-csi-driver-webhook:
    Container ID:  containerd://df9919203c21747503e5c004ea9a1670115838ef5e5a6dfee03860e6aefd6e06
    Image:         jiaxun/gcs-fuse-csi-driver-webhook:v0.1.2-0-gd9e3bdd
    Image ID:      docker.io/jiaxun/gcs-fuse-csi-driver-webhook@sha256:bb1967c15ee8fcebf8c4c020121497e58f43f08c75066792acff0e1841b0ee34
    Ports:         22030/TCP, 22031/TCP, 22032/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      --sidecar-cpu-limit=250m
      --sidecar-memory-limit=256Mi
      --sidecar-ephemeral-storage-limit=5Gi
      --sidecar-image=$(SIDECAR_IMAGE)
      --sidecar-image-pull-policy=$(SIDECAR_IMAGE_PULL_POLICY)
      --cert-dir=/etc/tls-certs
      --port=22030
      --health-probe-bind-address=:22031
      --http-endpoint=:22032
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:37 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Liveness:  http-get http://:22031/readyz delay=30s timeout=15s period=30s #success=1 #failure=3
    Environment:
      SIDECAR_IMAGE_PULL_POLICY:  IfNotPresent
      SIDECAR_IMAGE:              <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
      GKE_GCSFUSECSI_VERSION:     v999.999.999
    Mounts:
      /etc/tls-certs from gcs-fuse-csi-driver-webhook-certs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnvb4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  gcs-fuse-csi-driver-webhook-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gcs-fuse-csi-driver-webhook-secret
    Optional:    false
  kube-api-access-jnvb4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    44m                default-scheduler  Successfully assigned gcs-fuse-csi-driver/gcs-fuse-csi-driver-webhook-569899b854-w9sm7 to gke-flux-cluster-default-pool-a53eb99b-6ns7
  Warning  FailedMount  44m (x3 over 44m)  kubelet            MountVolume.SetUp failed for volume "gcs-fuse-csi-driver-webhook-certs" : secret "gcs-fuse-csi-driver-webhook-secret" not found
  Normal   Pulling      44m                kubelet            Pulling image "jiaxun/gcs-fuse-csi-driver-webhook:v0.1.2-0-gd9e3bdd"
  Normal   Pulled       43m                kubelet            Successfully pulled image "jiaxun/gcs-fuse-csi-driver-webhook:v0.1.2-0-gd9e3bdd" in 2.275865045s
  Normal   Created      43m                kubelet            Created container gcs-fuse-csi-driver-webhook
  Normal   Started      43m                kubelet            Started container gcs-fuse-csi-driver-webhook

Name:                 gcsfusecsi-node-bfwd4
Namespace:            gcs-fuse-csi-driver
Priority:             900001000
Priority Class Name:  csi-gcp-gcs-node
Node:                 gke-flux-cluster-default-pool-a53eb99b-dnp1/10.128.0.27
Start Time:           Fri, 07 Apr 2023 15:56:26 -0600
Labels:               controller-revision-hash=f6d8489cc
                      k8s-app=gcs-fuse-csi-driver
                      pod-template-generation=1
Annotations:          cni.projectcalico.org/containerID: 41ef22ee042ef73882b51a7c35efec8e61e93aaa6b434382d25fe68492bc7369
                      cni.projectcalico.org/podIP: 10.116.1.13/32
                      cni.projectcalico.org/podIPs: 10.116.1.13/32
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.116.1.13
IPs:
  IP:           10.116.1.13
Controlled By:  DaemonSet/gcsfusecsi-node
Containers:
  gcs-fuse-csi-driver:
    Container ID:  containerd://8922ab1437e3be246c157d2d641b53ed1bf378b466a6664e7fa180e9c1fcb598
    Image:         jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd
    Image ID:      docker.io/jiaxun/gcs-fuse-csi-driver@sha256:1303895a8e8ab4a68e8d00ff089b86c7a43360ee3e57fc10a8f62c2e5697dac2
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --endpoint=unix:/csi/csi.sock
      --nodeid=$(KUBE_NODE_NAME)
      --node=true
      --sidecar-image=$(SIDECAR_IMAGE)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:31 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     5m
      memory:  10Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
      SIDECAR_IMAGE:   <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from kubelet-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfpk7 (ro)
  csi-driver-registrar:
    Container ID:  containerd://001e963ccd561e5dfdc97b09d6060b4786a1ad6ef8f64c6f7d0dde4e50c193a2
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4a4cae5118c4404e35d66059346b7fa0835d7e6319ff45ed73f4bba335cf5183
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:39 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfpk7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  Directory
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/
    HostPathType:  DirectoryOrCreate
  kube-api-access-sfpk7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44m   default-scheduler  Successfully assigned gcs-fuse-csi-driver/gcsfusecsi-node-bfwd4 to gke-flux-cluster-default-pool-a53eb99b-dnp1
  Normal  Pulling    44m   kubelet            Pulling image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd"
  Normal  Pulled     44m   kubelet            Successfully pulled image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd" in 3.417523579s
  Normal  Created    44m   kubelet            Created container gcs-fuse-csi-driver
  Normal  Started    44m   kubelet            Started container gcs-fuse-csi-driver
  Normal  Pulled     44m   kubelet            Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" already present on machine
  Normal  Created    44m   kubelet            Created container csi-driver-registrar
  Normal  Started    43m   kubelet            Started container csi-driver-registrar

Name:                 gcsfusecsi-node-bxxwm
Namespace:            gcs-fuse-csi-driver
Priority:             900001000
Priority Class Name:  csi-gcp-gcs-node
Node:                 gke-flux-cluster-default-pool-a53eb99b-55kb/10.128.0.29
Start Time:           Fri, 07 Apr 2023 15:56:26 -0600
Labels:               controller-revision-hash=f6d8489cc
                      k8s-app=gcs-fuse-csi-driver
                      pod-template-generation=1
Annotations:          cni.projectcalico.org/containerID: f2b1a4d72a1d5bbe01eb309c156208da4ec3746d9a37a48f993624c34ba43815
                      cni.projectcalico.org/podIP: 10.116.3.5/32
                      cni.projectcalico.org/podIPs: 10.116.3.5/32
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.116.3.5
IPs:
  IP:           10.116.3.5
Controlled By:  DaemonSet/gcsfusecsi-node
Containers:
  gcs-fuse-csi-driver:
    Container ID:  containerd://e9bbaf689c41c0db529d34ccfb5eadbf62fa1441dc7abdc801685780789beb77
    Image:         jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd
    Image ID:      docker.io/jiaxun/gcs-fuse-csi-driver@sha256:1303895a8e8ab4a68e8d00ff089b86c7a43360ee3e57fc10a8f62c2e5697dac2
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --endpoint=unix:/csi/csi.sock
      --nodeid=$(KUBE_NODE_NAME)
      --node=true
      --sidecar-image=$(SIDECAR_IMAGE)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:31 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     5m
      memory:  10Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
      SIDECAR_IMAGE:   <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from kubelet-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxfqm (ro)
  csi-driver-registrar:
    Container ID:  containerd://e10e98b586a2ed95e683faf7dafcf55a166defa62791693a25cb08636636c030
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4a4cae5118c4404e35d66059346b7fa0835d7e6319ff45ed73f4bba335cf5183
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:38 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxfqm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  Directory
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/
    HostPathType:  DirectoryOrCreate
  kube-api-access-vxfqm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44m   default-scheduler  Successfully assigned gcs-fuse-csi-driver/gcsfusecsi-node-bxxwm to gke-flux-cluster-default-pool-a53eb99b-55kb
  Normal  Pulling    44m   kubelet            Pulling image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd"
  Normal  Pulled     44m   kubelet            Successfully pulled image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd" in 3.249691796s
  Normal  Created    44m   kubelet            Created container gcs-fuse-csi-driver
  Normal  Started    44m   kubelet            Started container gcs-fuse-csi-driver
  Normal  Pulled     44m   kubelet            Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" already present on machine
  Normal  Created    44m   kubelet            Created container csi-driver-registrar
  Normal  Started    43m   kubelet            Started container csi-driver-registrar

Name:                 gcsfusecsi-node-m9d6n
Namespace:            gcs-fuse-csi-driver
Priority:             900001000
Priority Class Name:  csi-gcp-gcs-node
Node:                 gke-flux-cluster-default-pool-a53eb99b-5zx1/10.128.0.28
Start Time:           Fri, 07 Apr 2023 15:56:27 -0600
Labels:               controller-revision-hash=f6d8489cc
                      k8s-app=gcs-fuse-csi-driver
                      pod-template-generation=1
Annotations:          cni.projectcalico.org/containerID: e309c99a5cdcfd6abb0791447c2b1d7faffca4b7c09f84c2761572ca6d51ce05
                      cni.projectcalico.org/podIP: 10.116.2.6/32
                      cni.projectcalico.org/podIPs: 10.116.2.6/32
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.116.2.6
IPs:
  IP:           10.116.2.6
Controlled By:  DaemonSet/gcsfusecsi-node
Containers:
  gcs-fuse-csi-driver:
    Container ID:  containerd://4b12404c5ba83cd2ff9ca604347f9520b0f46160a82358153a1fba54e86ff149
    Image:         jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd
    Image ID:      docker.io/jiaxun/gcs-fuse-csi-driver@sha256:1303895a8e8ab4a68e8d00ff089b86c7a43360ee3e57fc10a8f62c2e5697dac2
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --endpoint=unix:/csi/csi.sock
      --nodeid=$(KUBE_NODE_NAME)
      --node=true
      --sidecar-image=$(SIDECAR_IMAGE)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:31 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     5m
      memory:  10Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
      SIDECAR_IMAGE:   <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from kubelet-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sz997 (ro)
  csi-driver-registrar:
    Container ID:  containerd://d24c8b71f132e71b90c97668ac20959942ed4132adbfce40db0894ef4b471848
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4a4cae5118c4404e35d66059346b7fa0835d7e6319ff45ed73f4bba335cf5183
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:38 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sz997 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  Directory
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/
    HostPathType:  DirectoryOrCreate
  kube-api-access-sz997:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44m   default-scheduler  Successfully assigned gcs-fuse-csi-driver/gcsfusecsi-node-m9d6n to gke-flux-cluster-default-pool-a53eb99b-5zx1
  Normal  Pulling    44m   kubelet            Pulling image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd"
  Normal  Pulled     44m   kubelet            Successfully pulled image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd" in 3.550932087s
  Normal  Created    44m   kubelet            Created container gcs-fuse-csi-driver
  Normal  Started    44m   kubelet            Started container gcs-fuse-csi-driver
  Normal  Pulled     44m   kubelet            Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" already present on machine
  Normal  Created    44m   kubelet            Created container csi-driver-registrar
  Normal  Started    43m   kubelet            Started container csi-driver-registrar

Name:                 gcsfusecsi-node-tr6wt
Namespace:            gcs-fuse-csi-driver
Priority:             900001000
Priority Class Name:  csi-gcp-gcs-node
Node:                 gke-flux-cluster-default-pool-a53eb99b-6ns7/10.128.0.26
Start Time:           Fri, 07 Apr 2023 15:56:27 -0600
Labels:               controller-revision-hash=f6d8489cc
                      k8s-app=gcs-fuse-csi-driver
                      pod-template-generation=1
Annotations:          cni.projectcalico.org/containerID: be243db3ac4de4f6368b4ea4e3f6d6714274d00a997494038874d10149f6a4ee
                      cni.projectcalico.org/podIP: 10.116.0.4/32
                      cni.projectcalico.org/podIPs: 10.116.0.4/32
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.116.0.4
IPs:
  IP:           10.116.0.4
Controlled By:  DaemonSet/gcsfusecsi-node
Containers:
  gcs-fuse-csi-driver:
    Container ID:  containerd://0207bc460269dc017d4e6e71ca65dc7c5538e40aa27659bd0f0bdb30e53feead
    Image:         jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd
    Image ID:      docker.io/jiaxun/gcs-fuse-csi-driver@sha256:1303895a8e8ab4a68e8d00ff089b86c7a43360ee3e57fc10a8f62c2e5697dac2
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --endpoint=unix:/csi/csi.sock
      --nodeid=$(KUBE_NODE_NAME)
      --node=true
      --sidecar-image=$(SIDECAR_IMAGE)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:32 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     5m
      memory:  10Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
      SIDECAR_IMAGE:   <set to the key 'sidecar-image' of config map 'gcsfusecsi-image-config'>  Optional: false
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from kubelet-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fbltq (ro)
  csi-driver-registrar:
    Container ID:  containerd://ff912ed2ba26c778a2ca0b3cb3ef9c05e196206efb466bd11753422518808ebc
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
    Image ID:      registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4a4cae5118c4404e35d66059346b7fa0835d7e6319ff45ed73f4bba335cf5183
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Fri, 07 Apr 2023 15:56:40 -0600
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fbltq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  Directory
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/gcsfuse.csi.storage.gke.io/
    HostPathType:  DirectoryOrCreate
  kube-api-access-fbltq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44m   default-scheduler  Successfully assigned gcs-fuse-csi-driver/gcsfusecsi-node-tr6wt to gke-flux-cluster-default-pool-a53eb99b-6ns7
  Normal  Pulling    44m   kubelet            Pulling image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd"
  Normal  Pulled     44m   kubelet            Successfully pulled image "jiaxun/gcs-fuse-csi-driver:v0.1.2-0-gd9e3bdd" in 4.025131123s
  Normal  Created    44m   kubelet            Created container gcs-fuse-csi-driver
  Normal  Started    44m   kubelet            Started container gcs-fuse-csi-driver
  Normal  Pulled     44m   kubelet            Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" already present on machine
  Normal  Created    44m   kubelet            Created container csi-driver-registrar
  Normal  Started    43m   kubelet            Started container csi-driver-registrar

I can't think of anything else to show - I hope you can help! Also, I heard that sidecar containers are being deprecated? Is that possibly related, and will this approach stop working in the future?

songjiaxun commented 1 year ago

Hi @vsoch, I am trying to understand the use case you are testing. Are you trying dynamic or static provisioning? It seems like you are using a PVC without binding it to a PV, which indicates that you are probably using dynamic provisioning. Please note that dynamic provisioning is not officially supported at the moment.
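
For reference, static provisioning means pre-creating a PV that points at an existing bucket and binding your PVC to it explicitly. A minimal sketch (the bucket name and storage class name here are placeholders; see the repo docs for the full example):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcs-fuse-csi-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  storageClassName: example-storage-class
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: <bucket-name>  # must be an existing GCS bucket
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: flux-operator
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: gcs-fuse-csi-pv  # bind explicitly to the pre-created PV
  storageClassName: example-storage-class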

As for the question about "deprecating sidecar containers": which sidecar containers are you referring to? Can you provide more information and clarification? Thank you!

songjiaxun commented 1 year ago

Closing this issue for now, as no further information was provided.

Feel free to reopen it if you are still experiencing the same issue.

emouawad commented 2 weeks ago

@songjiaxun Is dynamic provisioning using only a PVC still unsupported? There's no mention of it in the docs. You said it "is not officially supported" - does that mean it's still possible, just not officially?

songjiaxun commented 2 weeks ago

Hi @emouawad, currently only static provisioning (pre-created PV/PVC pairs) and CSI ephemeral inline volumes are supported. Dynamic provisioning is on our roadmap, but without a timeline.
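
For reference, a CSI ephemeral inline volume declares the bucket directly in the pod spec, with no PV or PVC objects involved. A minimal sketch (the pod name, image, and bucket name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: gcs-fuse-csi-example
  annotations:
    gke-gcsfuse/volumes: "true"  # tells the webhook to inject the sidecar
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: gcs-fuse-csi-ephemeral
          mountPath: /data
  volumes:
    - name: gcs-fuse-csi-ephemeral
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: <bucket-name>  # must be an existing GCS bucket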

As you can see, the repo does include a provisioning controller. If you like, you can build the image yourself and try out dynamic provisioning -- it's possible, but we haven't tested that feature in a long time.