Closed foreversmart closed 9 months ago
/cc
@foreversmart which storage driver are you using for your PVC?
/assign
@alicefr I use rancher.io/local-path
@foreversmart, @alicefr the rancher local-path provisioner does not respect `fsGroup` by default. This might be the reason.
Here is some context: https://github.com/rancher/local-path-provisioner#volume-types
If I remember correctly, the default volume type `hostPath` does not support `fsGroup`. It should work if switching to `local` by adding the respective annotation to the storage class or PVC.
@vasiliy-ul but from the comment above, even root (uid=0) doesn't work
True, but I would still give it a try with `local`. Or maybe with some other provisioner that respects `fsGroup`.
When I run as root (uid=0) the pod securityContext is below:

```yaml
securityContext:
  fsGroup: 0
  runAsGroup: 0
  runAsNonRoot: false
  runAsUser: 0
  seccompProfile:
    type: RuntimeDefault
```

```
$ ls -l
-rw-rw---- 1 107 107 202937204736 Sep 15 10:03 disk.img
```

It seems `fsGroup` is not correctly set by the rancher.io/local-path provisioner, but the root user still does not work:

```
$ id
uid=0 gid=0(root) groups=0(root)
```
The disk belongs to uid 107; this might be the reason why uid 0 doesn't work.
My observations so far:

- The rancher.io/local-path provisioner does not support `fsGroup` by default. Therefore, if disk.img is owned by 107:107, it would make sense to run the guestfs pod with this uid: `virtctl guestfs --uid 107 ...`
- Alternatively, enable `fsGroup` support in rancher.io/local-path. For that, there is a need to create a new storage class with the annotation `defaultVolumeType: local`:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    defaultVolumeType: local
  name: local-test
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
But then you will still need to explicitly specify the group for the guestfs pod, `virtctl guestfs --fsGroup 107 ...`, since by default it runs under the 1001 user. And of course the PVC needs to be re-created using the new storage class.
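For illustration, a re-created PVC bound to that new storage class might look like this (the PVC name and requested size here are assumptions, not taken from the thread):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-pvc              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-test   # the class created above with defaultVolumeType: local
  resources:
    requests:
      storage: 200Gi             # assumed size
```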
As for `--root` not working: on my test cluster, it actually does work, but in my case the image in the guestfs pod gets

```
-rw-rw---- 1 qemu root 506462208 Sep 20 07:59 disk.img
```

(i.e. qemu:root instead of qemu:qemu).

And one more thing:
> Kubernetes version (use `kubectl version`): v1.21.11

The version is very old imho. Better to use something newer, >= 1.25.x.
UPD: also rancher.io/local-path should be >= v0.0.24 for the trick with the storage class (to enable `fsGroup` support).
Thanks @vasiliy-ul @alicefr
I used uid 107 and successfully ran the command `virt-cat -a disk.img /etc/os-release`.
Finally I found the reason the root user can't execute the above command: a root user without the Linux capability cap_dac_override cannot access other users' files.
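This can be checked from inside any container or shell by reading the process's capability mask from /proc; a minimal sketch (not KubeVirt-specific; CAP_DAC_OVERRIDE is capability number 1, so its bit mask in CapEff is 0x2):

```shell
# Read the effective capability bitmask of the current process (hex string).
cap_eff=$(awk '/^CapEff/ {print $2}' /proc/self/status)
echo "CapEff: ${cap_eff}"

# CAP_DAC_OVERRIDE is capability number 1, i.e. bit mask 0x2.
if [ $(( 0x${cap_eff} & 0x2 )) -ne 0 ]; then
    echo "cap_dac_override: present (file permission checks are bypassed)"
else
    echo "cap_dac_override: absent (even uid 0 gets EACCES on 107:107 files)"
fi
```

In a guestfs pod that drops all capabilities, the second branch is taken even when running as uid 0, which matches the `Permission denied` seen above.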
What happened:
When I use `virtctl guestfs` to modify VM disk images, I got an error: `access: disk.img: Permission denied`. The command is `virt-cat -a disk.img /etc/os-release`. Then I guessed the reason must be user rights, so I added the `--root` flag to `virtctl guestfs`. But the error still existed. Executing `ls -l` shows:

```
-rw-rw---- 1 107 107 202937204736 Sep 15 10:03 disk.img
```
The disk.img exists and has rw permission. I found out that libguestfs-tools needs some Linux capabilities, but the guestfs pod removes all capabilities at https://github.com/kubevirt/kubevirt/blob/main/pkg/virtctl/guestfs/guestfs.go
When I add these capabilities

```
cap_chown, cap_dac_override, cap_fowner, cap_fsetid, cap_kill, cap_setgid, cap_setuid, cap_setpcap, cap_net_bind_service, cap_net_raw, cap_sys_chroot, cap_mknod, cap_audit_write, cap_setfcap
```

to the containerSecurityContext manually, I can execute the command `virt-cat -a disk.img /etc/os-release` successfully.

What you expected to happen:
I can execute `virtctl guestfs` successfully.
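For reference, the manual workaround described above (adding the capabilities back to the container's securityContext) would look roughly like this. This is a hand-edited sketch of the guestfs pod spec, not something `virtctl guestfs` generates; Kubernetes capability names drop the `CAP_` prefix:

```yaml
securityContext:
  capabilities:
    add:
      - CHOWN
      - DAC_OVERRIDE
      - FOWNER
      - FSETID
      - KILL
      - SETGID
      - SETUID
      - SETPCAP
      - NET_BIND_SERVICE
      - NET_RAW
      - SYS_CHROOT
      - MKNOD
      - AUDIT_WRITE
      - SETFCAP
```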
How to reproduce it (as minimally and precisely as possible):
Additional context:
Environment:
- KubeVirt version (use `virtctl version`): client v1.0.0, server v1.0.0
- Kubernetes version (use `kubectl version`): v1.21.11
- Kernel (use `uname -a`): Linux 5.4.0-125-generic #141-Ubuntu SMP Wed Aug 10 13:42:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux