openebs-archive / device-localpv

CSI Driver for using Local Block Devices
Apache License 2.0

In k8s v1.21 Pod is not able to schedule on nodes #36

Closed w3aman closed 3 years ago

w3aman commented 3 years ago

With k8s 1.21 and device-localpv, PVCs using `waitForFirstConsumer` are not getting bound, while the same setup works on k8s 1.20.
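For context, a minimal StorageClass/PVC pair of the kind involved — a sketch following the device-localpv README conventions; the `devname` value and object names are placeholders, not taken from this issue:

```yaml
# Illustrative StorageClass for device-localpv with late binding.
# "test-device" is a placeholder devname, not from this issue.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
---
# PVC requesting 4Gi, matching the size mentioned in this report
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-devicepv
spec:
  storageClassName: openebs-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```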

Events:
  Type    Reason                Age                    From                         Message
  ----    ------                ----                   ----                         -------
  Normal  WaitForFirstConsumer  3m34s (x5 over 4m23s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  WaitForPodScheduled   4s (x14 over 3m19s)    persistentvolume-controller  waiting for pod app-busybox-55569b5cc8-nwkxn to be scheduled

The pod stays in Pending with this error:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16s   default-scheduler  0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) did not have enough free storage.
  Warning  FailedScheduling  14s   default-scheduler  0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) did not have enough free storage.

On the nodes we can see a 50G disk (/dev/sde), while the PVC requests only 4G. There are also no logs in the controller.

k8s@lvm-node1:~$ lsblk | grep sd
sda      8:0    0  100G  0 disk 
├─sda1   8:1    0  512M  0 part /boot/efi
├─sda2   8:2    0    1K  0 part 
└─sda5   8:5    0 99.5G  0 part /
sdb      8:16   0   50G  0 disk 
sdc      8:32   0   50G  0 disk 
sdd      8:48   0   50G  0 disk 
sde      8:64   0   50G  0 disk 
└─sde1   8:65   0    9M  0 part 
sdf      8:80   0   50G  0 disk 
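As a side note, disks without mounted partitions can be picked out of `lsblk` output mechanically. A small self-contained sketch (the inlined sample mirrors the listing above; this whole-disk/mounted-partition heuristic is mine for illustration — the driver itself selects devices differently, via the `devname` on a meta partition per its README):

```shell
# Sketch: list disks that have no mounted partitions, from lsblk-style
# "NAME TYPE MOUNTPOINT" output. Sample data is inlined so this runs
# anywhere; it is NOT the selection logic device-localpv itself uses.
lsblk_sample='sda disk
sda1 part /boot/efi
sda5 part /
sdb disk
sdc disk
sdd disk
sde disk
sde1 part
sdf disk'

free_disks=$(echo "$lsblk_sample" | awk '
  $2 == "disk" { disks[$1] = 1 }
  $2 == "part" && $3 != "" {
    # a mounted partition disqualifies its parent disk
    parent = $1; sub(/[0-9]+$/, "", parent)
    delete disks[parent]
  }
  END { for (d in disks) print d }
' | sort)
echo "$free_disks"
```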
davidkarlsen commented 3 years ago

+1 Seeing the same on openshift: v1.21.1+9807387

w3aman commented 3 years ago

@davidkarlsen After this PR https://github.com/openebs/device-localpv/pull/37/files, it worked for me. Can you verify that all the images were pulled with the latest changes (with `imagePullPolicy: Always`)?
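For reference, the relevant knob is in the container spec of the operator YAML. The fragment below is illustrative (container name and image tag are assumptions, check the actual device-operator.yaml):

```yaml
# Illustrative fragment: with a mutable tag such as ci/develop,
# imagePullPolicy: Always forces the node to re-pull the image,
# so a freshly merged fix is actually picked up.
containers:
  - name: openebs-device-plugin       # name is illustrative
    image: openebs/device-driver:ci   # tag is illustrative
    imagePullPolicy: Always
```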

The YAML with the latest changes is here:

davidkarlsen commented 3 years ago

Tags are immutable, so that should be fine. Has a release been made which contains it? (I'm on a phone right now 😂)

pawanpraka1 commented 3 years ago

@davidkarlsen how did you install the device driver (helm or operator yaml)? Which release version are you using?

davidkarlsen commented 3 years ago

> @davidkarlsen how did you install the device driver (helm or operator yaml)? Which release version are you using?

Via the parent chart which installs the full openebs-stack.

davidkarlsen commented 3 years ago

But I use lvm-local-pv (for filesystem) — same bug there?

pawanpraka1 commented 3 years ago

We have recently fixed the helm chart for lvm-localpv in the lvm-localpv repo for k8s 1.21+. You can try the helm chart from that repo and see.

pawanpraka1 commented 3 years ago

I see that we have fixed the master helm chart as well: https://github.com/openebs/charts/pull/264. You can install the latest one and check whether it works.
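A sketch of picking up the fixed chart — the repo URL is the standard openebs charts repo, while the release name and namespace are placeholders:

```shell
# Sketch: refresh the openebs chart repo, then upgrade to the latest
# release containing the scheduling fix. Release name and namespace
# are placeholders; adjust to your installation.
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm upgrade openebs openebs/openebs --namespace openebs
```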

davidkarlsen commented 3 years ago

> I see that we have fixed the master helm chart also openebs/charts#264. You can install the latest one and see if it is working.

2.12.2 worked fine! Thanks 👍

Littlehhao commented 8 months ago

This happened to me when I installed it using the operator:

    kubectl apply -f https://raw.githubusercontent.com/openebs/device-localpv/develop/deploy/device-operator.yaml

Littlehhao commented 8 months ago

I'm using device-local-pv - same bug there?