Ramshield closed this issue 1 year ago
The logs you added are from an external-snapshotter that is not active. Please check the leases in the namespace where you deployed the provisioner, and provide the logs of the active external-snapshotter container.
The logs from the cephcsi-rbd-plugin do not show any gRPC requests for CreateSnapshot. That means these are the logs from a wrong (inactive) provisioner, or snapshot creation is not reaching the provisioner (it may be stuck, or erroring at the external-snapshotter).
@nixpanic Can you please let me know what else to install? All I installed was the ceph-csi-rbd Helm chart:
# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cephrbd-storage cephrbd-storage-ceph-csi-rbd-nodeplugin-km2sf 3/3 Running 3 (4d1h ago) 6d18h
cephrbd-storage cephrbd-storage-ceph-csi-rbd-nodeplugin-mb9v4 3/3 Running 3 (4d1h ago) 6d18h
cephrbd-storage cephrbd-storage-ceph-csi-rbd-nodeplugin-wjj9d 3/3 Running 3 (4d1h ago) 6d18h
cephrbd-storage cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-cpdds 7/7 Running 0 30h
cephrbd-storage cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-kd7qb 7/7 Running 0 30h
cephrbd-storage cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-msj7c 7/7 Running 0 30h
kube-system coredns-597584b69b-jtr6b 1/1 Running 3 (4d1h ago) 34d
kube-system metrics-server-5c8978b444-hgzdr 1/1 Running 3 (4d1h ago) 34d
metallb-system metallb-controller-7898b886f6-k4ft2 1/1 Running 3 (4d1h ago) 36d
metallb-system metallb-speaker-25zkk 1/1 Running 3 (4d1h ago) 36d
metallb-system metallb-speaker-cgtvd 1/1 Running 3 (4d1h ago) 36d
metallb-system metallb-speaker-hz6dm 1/1 Running 3 (4d1h ago) 36d
nginx-ingress nginx-ingress-nginx-ingress-56f6f8d48c-4mcf9 1/1 Running 0 30h
What step did I miss in the README? Thank you!
I don't think you need to install more. You would need to check the right logs to see what the problem could be.
@nixpanic Which ones? The csi-rbdplugin logs I sent are the only ones that actually had anything in them. Or do you want me to collect the logs from all the pods?
You can use kubectl -n cephrbd-storage get leases
and see which pod/container is active for a certain task. The external-snapshotter container should have a lease, and that rbd-provisioner pod should have the logs that provide a hint on what is failing.
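The two steps above (find the lease holder, then read that pod's logs) can be combined into one sketch. The namespace, lease name, and container name are the ones seen in this thread; adjust them if yours differ:

```shell
# Find the pod currently holding the external-snapshotter lease,
# then pull the logs of its csi-snapshotter container.
# Namespace/lease/container names are taken from this thread; adjust as needed.
NS=cephrbd-storage
HOLDER=$(kubectl -n "$NS" get lease external-snapshotter-leader-rbd-csi-ceph-com \
  -o jsonpath='{.spec.holderIdentity}')
kubectl -n "$NS" logs "pod/$HOLDER" -c csi-snapshotter
```

This only works against a running cluster; the jsonpath expression reads the lease's holderIdentity field, which is the pod name shown in the HOLDER column of `kubectl get leases`.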
Same result, nothing in the logs...
root@jumphost-01:~# kubectl -n cephrbd-storage get leases
NAME HOLDER AGE
external-attacher-leader-rbd-csi-ceph-com cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-cpdds 7d19h
external-resizer-rbd-csi-ceph-com cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-cpdds 7d19h
external-snapshotter-leader-rbd-csi-ceph-com cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-cpdds 7d19h
rbd-csi-ceph-com 1679829931631-8081-rbd-csi-ceph-com 7d19h
rbd.csi.ceph.com-cephrbd-storage cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-cpdds_51b31664-89bb-4f1e-8f24-73ebe84a7ebe 7d19h
root@jumphost-01:~# kubectl -n cephrbd-storage logs pods/cephrbd-storage-ceph-csi-rbd-provisioner-7c74444cc5-cpdds -c csi-snapshotter
I0326 11:25:30.576234 1 main.go:104] Version: v6.1.0
I0326 11:25:31.586137 1 common.go:111] Probing CSI driver for readiness
I0326 11:25:31.588945 1 leaderelection.go:248] attempting to acquire leader lease cephrbd-storage/external-snapshotter-leader-rbd-csi-ceph-com...
I0326 11:25:50.902288 1 leaderelection.go:258] successfully acquired lease cephrbd-storage/external-snapshotter-leader-rbd-csi-ceph-com
I0326 11:25:50.902650 1 snapshot_controller_base.go:133] Starting CSI snapshotter
I0326 13:07:58.856702 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
I0326 13:07:58.856705 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
@Ramshield I see you don't have the snapshot-controller running. Did you follow the steps mentioned here: https://github.com/kubernetes-csi/external-snapshotter/#usage ?
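For reference, the linked external-snapshotter README installs the VolumeSnapshot CRDs and the snapshot-controller roughly like this; treat the repository paths as assumptions and follow the usage doc for whichever release matches your cluster:

```shell
# Sketch of the install steps from the kubernetes-csi/external-snapshotter repo;
# paths reflect the repo layout at the time of writing, check the README for your version.
git clone https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter

# Install the VolumeSnapshot CRDs (VolumeSnapshot, VolumeSnapshotClass, VolumeSnapshotContent)
kubectl kustomize client/config/crd | kubectl create -f -

# Deploy the snapshot-controller (cluster-wide; one per cluster, not per CSI driver)
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```

The snapshot-controller is a separate, cluster-scoped component: the csi-snapshotter sidecar in the provisioner pod only handles the CSI side, so without the controller a VolumeSnapshot object is never acted on.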
@Madhu-1 Thank you very much, that fixed it! :) Much appreciated!
Describe the bug
Creating a VolumeSnapshot of an RBD-backed PVC does nothing; no snapshot is ever created.
Environment details
Kernel version: Linux k8s-worker-01 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux
Mounter used for mounting PVC (for cephfs its fuse or kernel, for rbd its krbd or rbd-nbd): ?
Kubernetes cluster version: Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4+k3s1", GitCommit:"0dc63334c0db3e7b99244427615e091909fc486e", GitTreeState:"clean", BuildDate:"2022-11-18T18:17:40Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Steps to reproduce
Create volumesnapshotclass:
Create sc:
Create PVC:
Create snapshot:
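The actual manifests were not included in the report. A minimal illustrative pair for the last two steps (hypothetical object names; the driver name matches the leases shown later in this thread, and real ceph-csi VolumeSnapshotClasses additionally need snapshotter-secret parameters) might look like:

```yaml
# Hypothetical minimal manifests; names and parameters must match your deployment.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass
driver: rbd.csi.ceph.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbd-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc
```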
Actual results
Nothing happens: no snapshot is created and the VolumeSnapshot never becomes ready.
Expected behavior
A snapshot is created and becomes ready to use.
Logs
csi-rbdplugin: https://pastebin.com/MLamjf8Y
Additional context
Even though the logs complain about an empty Secret, it exists and is filled in.
Running k3s: