Closed: git-yww closed this issue 3 months ago
Proposing another way here:
Before executing the lvcreate command, we first enter the host machine's mount namespace with nsenter. To reach the host mount namespace from the lvm node container, "hostPID: true" is also needed in the lvm node pod spec. After some tests, this solution solved the LV-invisibility problem well.
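A minimal sketch of that wrapping, assuming the node pod runs with hostPID: true and privileged mode so that /proc/1/ns/mnt belongs to the host's init process (the helper name run_on_host and the lvcreate arguments are illustrative, not the driver's actual code):

```shell
# Hypothetical helper: build the nsenter prefix that places a command in the
# host's mount namespace. Requires hostPID: true on the pod and a privileged
# container, so /proc/1/ns/mnt really is the host's mount namespace.
# Shown with echo so the wrapping is visible; drop echo to actually execute.
run_on_host() {
  echo nsenter --mount=/proc/1/ns/mnt -- "$@"
}

# Example: the kind of lvcreate call the node plugin would issue.
run_on_host lvcreate -L 4G -n pvc-example vg01
```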
Hi @git-yww, thanks for reporting. However, we see no difference between the lvs and vgs output in the container and on the host with lvm driver v1.3.0 on Ubuntu 20.04. If it is reproducible on 1.3 as well, let us know.
From the information given, the LVM version is 2.02, built with --enable-lvmetad,
which enables caching of metadata. Hence, unless a pvscan is done, the info isn't refreshed. In LVM 2.03 this caching-daemon behaviour was removed, and metadata is read from the disk instead of the cache. Please confirm whether that resolves the issue; if you are still on 2.02, please disable the daemon and try again.
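A hedged sketch of the two workarounds on a 2.02 host. The lvm.conf path is the usual default; here the edit is applied to a temporary copy so the snippet runs without root:

```shell
# Option 1: one-off refresh of the cached metadata (on the real host):
#   pvscan --cache
#
# Option 2: disable lvmetad so metadata is always read from disk.
# On the real host the target is /etc/lvm/lvm.conf, followed by:
#   systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
# The sed below edits a temp copy for illustration:
conf=$(mktemp)
printf 'global {\n    use_lvmetad = 1\n}\n' > "$conf"
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' "$conf"
grep use_lvmetad "$conf"
```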
Meanwhile I'm closing this issue; please feel free to add any new information, or reopen in case of further issues around this.
We created a new pod with an lvm PVC; it started successfully and everything seemed OK.
With the 'lvs' command executed in the csi container, we could find the new lv:
bash-5.0# lvs | grep vg01
pvc-dcd29be5-41c6-46b0-bd81-93e45056ee95 vg01 -wi-ao---- 4.00g
However, when we executed the 'lvs' command on the host, no lv existed:
[ivan@k8s-node16 ~]$ sudo lvs | grep vg01
[ivan@k8s-node16 ~]$
Also, the vg we used showed differences: the #LV count is 1 in the container but 0 on the host. vgs in the csi container:
bash-5.0# vgs | grep vg01
vg01 1 1 0 wz--n- <19.53g <15.53g
vgs on host:
[ivan@k8s-node16 ~]$ sudo vgs | grep vg01
vg01 1 0 0 wz--n- <19.53g <19.53g
But on the host, we could find the new lv under /dev/vg01 and /dev/mapper.
[ivan@k8s-node16 ~]$ sudo ls /dev/vg01
pvc-dcd29be5-41c6-46b0-bd81-93e45056ee95
[ivan@k8s-node16 ~]$ sudo ls /dev/mapper | grep vg01
vg01-pvc--dcd29be5--41c6--46b0--bd81--93e45056ee95
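That split (device node present, LVM metadata absent) can be checked directly; a hedged sketch, with sample strings standing in for the real command output so it runs without root:

```shell
# On the host you would compare (root required):
#   sudo dmsetup ls | grep vg01              # device-mapper's view
#   sudo lvs --noheadings -o lv_name vg01    # LVM metadata's view
# Sample values below reproduce the situation reported in this issue:
dm_view='vg01-pvc--dcd29be5--41c6--46b0--bd81--93e45056ee95'
lvs_view=''

# Device exists in device-mapper but lvs lists nothing: stale metadata view.
if [ -n "$dm_view" ] && [ -z "$lvs_view" ]; then
  diagnosis='device exists but LVM metadata view is stale'
fi
echo "$diagnosis"
```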
Is this related to the difference between the LVM versions in the container and on the host, or to something else, such as the host running CentOS 7? lvm version in the container:
bash-5.0# lvm version
LVM version:     2.02.186(2) (2019-08-27)
Library version: 1.02.164 (2019-08-27)
Driver version:  4.39.0
Configuration:   ./configure --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --prefix=/usr --sysconfdir=/etc --libdir=/lib --sbindir=/sbin --localstatedir=/var --disable-nls --disable-readline --enable-pkgconfig --enable-applib --with-thin=internal --enable-dmeventd --enable-cmdlib --with-thin-check=/sbin/thin_check --with-thin-dump=/sbin/thin_dump --with-thin-repair=/sbin/thin_repair --with-dmeventd-path=/sbin/dmeventd --enable-udev_rules CLDFLAGS=-Wl,--as-needed
lvm version on host:
LVM version:     2.02.171(2)-RHEL7 (2017-05-03)
Library version: 1.02.140-RHEL7 (2017-05-03)
Driver version:  4.39.0
Configuration:   ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-default-dm-run-dir=/run --with-default-run-dir=/run/lvm --with-default-pid-dir=/run --with-default-locking-dir=/run/lock/lvm --with-usrlibdir=/usr/lib64 --enable-lvm1_fallback --enable-fsadm --with-pool=internal --enable-write_install --with-user= --with-group= --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --enable-pkgconfig --enable-applib --enable-cmdlib --enable-dmeventd --enable-blkid_wiping --enable-python2-bindings --with-cluster=internal --with-clvmd=corosync --enable-cmirrord --with-udevdir=/usr/lib/udev/rules.d --enable-udev_sync --with-thin=internal --enable-lvmetad --with-cache=internal --enable-lvmpolld --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-dmfilemapd
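Note that the host build's Configuration line contains --enable-lvmetad while the container's does not, which points at the metadata-caching daemon. Whether a given build ships lvmetad can be read straight off that line; a small sketch (the sample string mimics the CentOS 7 output above; on a real host you would use out=$(lvm version) instead):

```shell
# Detect --enable-lvmetad in `lvm version` output. Sample text is used here
# so the sketch runs without LVM installed; replace with: out=$(lvm version)
out='Configuration: ./configure --enable-lvmetad --with-cache=internal'

case "$out" in
  *--enable-lvmetad*) msg='lvmetad build: views may be stale until pvscan --cache' ;;
  *)                  msg='no lvmetad: metadata read directly from disk' ;;
esac
echo "$msg"
```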
It would be greatly appreciated if anyone could help.