Closed: lincate closed this issue 1 year ago
Some questions:
It may be similar to https://github.com/rook/rook/issues/10110.
Is this a bug report or feature request?

Bug Report
Deviation from expected behavior: After creating the StorageClass `rook-ceph-block` (provisioner: `rook-ceph.rbd.csi.ceph.com`), the monitors keep restarting again and again, and the PVC that references `rook-ceph-block` stays in Pending status.

Expected behavior: The StorageClass can be created and the PVC can be bound.
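The exact PVC manifest is not shown in this report; a minimal sketch of a claim against this StorageClass, assuming the `rook-ceph-block` name above (the claim name here is illustrative):

```console
# Minimal test PVC against the rook-ceph-block StorageClass (illustrative name)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
EOF

# With the mons restarting, the claim stays in Pending instead of binding
kubectl get pvc rbd-pvc
```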
How to reproduce it (minimal and precise):
1. `kubectl create -f crds.yaml -f commons.yaml -f operator.yaml`
2. `kubectl create -f cluster.yaml`
3. After that, the Ceph cluster status is HEALTH_OK.
4. `kubectl create -f storageclass.yaml`
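To make the failure easier to observe, a sketch of checks that can be run around these steps, assuming the default `rook-ceph` namespace and the example manifests from the Rook repository:

```console
# The CephCluster should report HEALTH_OK before the StorageClass is created
kubectl -n rook-ceph get cephcluster

# Create the example RBD StorageClass
kubectl create -f storageclass.yaml

# Watch the mon pods; per the report they begin restarting shortly afterwards
kubectl -n rook-ceph get pods -l app=rook-ceph-mon -w
```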
File(s) to submit:
- `cluster.yaml`, if necessary

Logs to submit:
Before creating the StorageClass, the health check reports:

Info: Checking mon quorum and ceph health details
HEALTH_OK
After creating the StorageClass, one of the monitors' logs shows:

debug 2022-09-29T09:10:21.318+0000 7f937291a700 0 log_channel(cluster) log [DBG] : fsmap
debug 2022-09-29T09:10:21.318+0000 7f937291a700 0 log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
debug 2022-09-29T09:10:21.319+0000 7f937291a700 0 log_channel(cluster) log [DBG] : mgrmap e30: no daemons active (since 18s)
debug 2022-09-29T09:10:21.320+0000 7f937291a700 0 log_channel(cluster) log [WRN] : Health check update: 1/3 mons down, quorum a,c (MON_DOWN)
debug 2022-09-29T09:10:59.737+0000 7f9371117700 1 mon.a@0(leader) e3 handle_auth_request failed to assign global_id
debug 2022-09-29T09:11:15.942+0000 7f9370916700 1 mon.a@0(leader) e3 handle_auth_request failed to assign global_id
debug 2022-09-29T09:11:25.176+0000 7f9378125700 -1 received signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
debug 2022-09-29T09:11:25.176+0000 7f9378125700 -1 mon.a@0(leader) e3 Got Signal Terminated
debug 2022-09-29T09:11:25.176+0000 7f9378125700 1 mon.a@0(leader) e3 shutdown
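The mon is not crashing on its own; it receives SIGTERM, so the next question is whether the kubelet (failed liveness probe, OOM kill, eviction) or the operator is terminating it. A sketch of those checks, assuming the default `rook-ceph` namespace:

```console
# Restart counts for the mon pods
kubectl -n rook-ceph get pods -l app=rook-ceph-mon

# Pod events and last container state show whether the kubelet killed the
# container (failed liveness probe, OOMKilled, eviction, ...)
kubectl -n rook-ceph describe pod -l app=rook-ceph-mon

# The operator log shows whether it is failing over or recreating mons itself
kubectl -n rook-ceph logs deploy/rook-ceph-operator | grep -i mon
```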
- Crashing pod(s) logs, if necessary

To get logs, use `kubectl -n <namespace> logs <pod name>`.
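For this particular failure, the most relevant logs are probably the operator, the restarting mon, and the RBD provisioner. A sketch using the default Rook names (namespace, deployment, and pod names may differ in this cluster):

```console
# Operator log
kubectl -n rook-ceph logs deploy/rook-ceph-operator

# Previous instance of a restarting mon pod (replace the placeholder suffix)
kubectl -n rook-ceph logs rook-ceph-mon-a-<pod-suffix> --previous

# CSI RBD provisioner, which serves the rook-ceph.rbd.csi.ceph.com provisioner
kubectl -n rook-ceph logs deploy/csi-rbdplugin-provisioner -c csi-provisioner
```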
When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read GitHub documentation if you need help.

Cluster Status to submit:
- Output of krew commands, if necessary

To get the health of the cluster, use `kubectl rook-ceph health`.
To get the status of the cluster, use `kubectl rook-ceph ceph status`.
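If the krew plugin is not installed, the same information can be gathered from the toolbox pod (assuming the example toolbox deployment `rook-ceph-tools` in the `rook-ceph` namespace):

```console
# Ceph health and status via the Rook toolbox
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```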
For more details, see the Rook Krew Plugin.

Environment:
- Kernel (`uname -a`): Linux 4.18.0-147.5.1.6.h579.x86_64
- Rook version (`rook version` inside of a Rook Pod): 1.10.1 & 1.10.0
- Ceph version (`ceph -v`): 17.2.3
- Kubernetes version (`kubectl version`): 1.25.0 & 1.24.2
- Ceph health (`ceph health` in the Rook Ceph toolbox):