gotoweb opened 1 month ago
@gotoweb this might be cause of https://github.com/ceph/ceph-csi/issues/4138?
Are you able to load the ceph module manually from the node?
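For reference, a minimal way to check this from the node (a sketch assuming a standard Linux layout; `modprobe` needs root):

```shell
# Check whether the kernel already knows the cephfs filesystem type
# (covers both a built-in driver and an already-loaded module).
if grep -qw ceph /proc/filesystems; then
    echo "cephfs: supported (built-in or module already loaded)"
else
    # Not registered yet: try to load the module manually (requires root).
    modprobe ceph 2>/dev/null \
        && echo "cephfs: module loaded" \
        || echo "cephfs: modprobe failed, check dmesg"
fi
```

If the module is built in (as with a stock Proxmox kernel), the first branch should fire and no `modprobe` is needed.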
@Madhu-1 I don't think so. The volume mounts fine on another k8s cluster running the same kubelet version. I haven't tried loading the ceph module manually; I just use the built-in ceph module on Proxmox. I'll try changing the driver/CSI image versions...
@gotoweb, the Ceph-CSI driver loads the kernel module that is provided by a host-path volume. If the module is already loaded (or built-in), it should not try to load it again.
Commit ab87045afb0c15ca4d30ae01003ed0f331843181 checks for support of the cephfs filesystem; it is included in Ceph-CSI 3.11 and was backported to 3.10 with #4381.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
Describe the bug
In response to a gRPC
/csi.v1.Node/NodeStageVolume
request from the CSI plugin, the volume mount fails with the following error. When I checked dmesg, I found the log:
Invalid ELF header magic: != \x7fELF
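That dmesg line generally means the file handed to the kernel's module loader did not start with the ELF magic bytes (`\x7fELF`), for example a compressed or truncated `.ko`. A hedged diagnostic sketch to inspect what `modprobe` would load on the node (assumes the kmod `modinfo` tool is available; this is an illustration, not a confirmed root cause for this issue):

```shell
# Locate the ceph module file for the running kernel and dump its first
# four bytes; an uncompressed module starts with 7f 45 4c 46 ("\x7fELF").
ko="$(modinfo -n ceph 2>/dev/null)"
if [ -f "$ko" ]; then
    echo "module file: $ko"
    head -c 4 "$ko" | od -An -tx1
else
    # modinfo may print "(builtin)" or nothing at all.
    echo "no loadable ceph.ko for kernel $(uname -r) (module may be built in)"
fi
```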
I hope this isn't a bug, but it seems to be out of my control.

Environment details
- csi-node-driver-registrar: v2.9.3
- cephcsi: canary
- Mounter used for mounting the PVC (for cephfs it's `fuse` or `kernel`; for rbd it's `krbd` or `rbd-nbd`): kernel

Steps to reproduce
Use the `cephfs.csi.ceph.com` provisioner.

Actual results
Pods attempting to mount the volume received the following error message from the kubelet:
I found the following message from the cephfs plugin pod:
The dmesg log looks like this:
Expected behavior
Interestingly, the volume mounts successfully on other k8s clusters using the same Ceph cluster, and I was able to compare the logs from that cluster's cephfs plugin.
The StorageClass and ConfigMap (config.json) of the k8s cluster where the error occurs and of the k8s cluster that is working correctly are identical.