rook / rook

Storage Orchestration for Kubernetes
https://rook.io
Apache License 2.0

OSD pods or osd-prepare pods are not being created #13657

Closed vadlakiran closed 5 months ago

vadlakiran commented 7 months ago

Hi Team,

We are facing an issue when installing rook-ceph (v1.8.1) in our Kubernetes cluster: the mons come up very late and the OSD pods never come up.

Below are the operator pod logs:

kubectl logs -f rook-ceph-operator-795b5c88cb-672t2 -n rook-ceph
2024-01-31 04:19:39.744869 I | rookcmd: starting Rook v1.8.1 with arguments '/usr/local/bin/rook ceph operator'
2024-01-31 04:19:39.744936 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO, --operator-image=, --service-account=
2024-01-31 04:19:39.744938 I | cephcmd: starting Rook-Ceph operator
2024-01-31 04:19:40.890902 I | cephcmd: base ceph version inside the rook operator image is "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)"
2024-01-31 04:19:40.903104 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-01-31 04:19:40.903142 I | operator: watching all namespaces for Ceph CRs
2024-01-31 04:19:40.903381 I | operator: setting up schemes
2024-01-31 04:19:40.909927 I | operator: setting up the controller-runtime manager
I0131 04:19:41.960739       1 request.go:665] Waited for 1.043472734s due to client-side throttling, not priority and fairness, request: GET:https://10.233.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
2024-01-31 04:19:44.513873 I | operator: looking for admission webhook secret "rook-ceph-admission-controller"
2024-01-31 04:19:44.518380 I | operator: admission webhook secret "rook-ceph-admission-controller" not found. proceeding without the admission controller
2024-01-31 04:19:44.518450 I | ceph-cluster-controller: successfully started
2024-01-31 04:19:44.518561 I | ceph-cluster-controller: enabling hotplug orchestration
2024-01-31 04:19:44.518599 I | ceph-crashcollector-controller: successfully started
2024-01-31 04:19:44.518630 I | ceph-block-pool-controller: successfully started
2024-01-31 04:19:44.518666 I | ceph-object-store-user-controller: successfully started
2024-01-31 04:19:44.518697 I | ceph-object-realm-controller: successfully started
2024-01-31 04:19:44.518724 I | ceph-object-zonegroup-controller: successfully started
2024-01-31 04:19:44.518744 I | ceph-object-zone-controller: successfully started
2024-01-31 04:19:44.518908 I | ceph-object-controller: successfully started
2024-01-31 04:19:44.518973 I | ceph-file-controller: successfully started
2024-01-31 04:19:44.519033 I | ceph-nfs-controller: successfully started
2024-01-31 04:19:44.519078 I | ceph-rbd-mirror-controller: successfully started
2024-01-31 04:19:44.519118 I | ceph-client-controller: successfully started
2024-01-31 04:19:44.519158 I | ceph-filesystem-mirror-controller: successfully started
2024-01-31 04:19:44.519196 I | operator: rook-ceph-operator-config-controller successfully started
2024-01-31 04:19:44.519233 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-01-31 04:19:44.519266 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-01-31 04:19:44.519289 I | ceph-bucket-topic: successfully started
2024-01-31 04:19:44.519315 I | ceph-bucket-notification: successfully started
2024-01-31 04:19:44.519333 I | ceph-bucket-notification: successfully started
2024-01-31 04:19:44.520893 I | operator: starting the controller-runtime manager
2024-01-31 04:19:45.224387 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-01-31 04:19:45.224426 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2024-01-31 04:19:45.224443 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-01-31 04:19:45.232313 I | operator: rook-ceph-operator-config-controller done reconciling
2024-01-31 04:20:45.586452 I | clusterdisruption-controller: create event from ceph cluster CR
2024-01-31 04:20:45.586675 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "rook-ceph"
2024-01-31 04:20:45.609729 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2024-01-31 04:20:45.620558 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-01-31 04:20:45.624644 I | ceph-cluster-controller: clusterInfo not yet found, must be a new cluster.
2024-01-31 04:20:45.633327 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2024-01-31 04:20:45.647287 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="true" (configmap)
2024-01-31 04:20:45.647329 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="true" (configmap)
2024-01-31 04:20:45.647340 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (configmap)
2024-01-31 04:20:45.647347 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap)
2024-01-31 04:20:45.647357 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-01-31 04:20:45.647365 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="docker-registry.com:5000/cephcsi/cephcsi:v3.4.0" (configmap)
2024-01-31 04:20:45.647375 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="docker-registry.com:5000/sig-storage/csi-node-driver-registrar:v2.3.0" (configmap)
2024-01-31 04:20:45.647383 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="docker-registry.com:5000/sig-storage/csi-provisioner:v3.0.0" (configmap)
2024-01-31 04:20:45.647393 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="docker-registry.com:5000/sig-storage/csi-attacher:v3.3.0" (configmap)
2024-01-31 04:20:45.647405 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="docker-registry.com:5000/sig-storage/csi-snapshotter:v4.2.0" (configmap)
2024-01-31 04:20:45.647416 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2024-01-31 04:20:45.647429 I | op-k8sutil: CSI_VOLUME_REPLICATION_IMAGE="quay.io/csiaddons/volumereplication-operator:v0.1.0" (default)
2024-01-31 04:20:45.647438 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2024-01-31 04:20:45.647446 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2024-01-31 04:20:45.647460 I | ceph-csi: detecting the ceph csi image version for image "docker-registry.com:5000/cephcsi/cephcsi:v3.4.0"
2024-01-31 04:20:45.647606 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2024-01-31 04:20:45.647628 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="" (default)
2024-01-31 04:20:45.653094 I | ceph-spec: detecting the ceph image version for image docker-registry.com:5000/ceph/ceph:v16.2.7...
2024-01-31 04:20:48.920833 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2024-01-31 04:20:48.920893 I | ceph-cluster-controller: validating ceph version from provided image
2024-01-31 04:20:48.936146 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "docker-registry.com:5000/ceph/ceph:v16.2.7"
2024-01-31 04:20:48.967062 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:True Reason:ClusterProgressing Message:Configuring the Ceph cluster LastHeartbeatTime:2024-01-31 04:20:48.956320015 +0000 UTC m=+69.246387684 LastTransitionTime:2024-01-31 04:20:48.956319735 +0000 UTC m=+69.246387457}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2024-01-31 04:20:48.984525 I | op-mon: start running mons
2024-01-31 04:20:49.062420 I | op-mon: creating mon secrets for a new cluster
2024-01-31 04:20:49.077348 I | op-mon: existing maxMonID not found or failed to load. configmaps "rook-ceph-mon-endpoints" not found
2024-01-31 04:20:49.084113 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":[]}] data: mapping:{"node":{}} maxMonId:-1]
2024-01-31 04:20:49.507413 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:20:49.507646 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:20:50.710286 I | ceph-csi: Detected ceph CSI image version: "v3.4.0"
2024-01-31 04:20:50.722353 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2024-01-31 04:20:50.722412 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2024-01-31 04:20:50.722423 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2024-01-31 04:20:50.722431 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2024-01-31 04:20:50.722437 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2024-01-31 04:20:50.722444 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2024-01-31 04:20:50.722451 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2024-01-31 04:20:50.722457 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2024-01-31 04:20:50.722463 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2024-01-31 04:20:50.722469 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="" (default)
2024-01-31 04:20:50.722475 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="" (default)
2024-01-31 04:20:50.722480 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (default)
2024-01-31 04:20:50.722487 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2024-01-31 04:20:50.722492 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2024-01-31 04:20:50.722498 I | op-k8sutil: CSI_ENABLE_VOLUME_REPLICATION="false" (configmap)
2024-01-31 04:20:50.722506 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2024-01-31 04:20:50.722512 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2024-01-31 04:20:50.722520 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap)
2024-01-31 04:20:50.722525 I | ceph-csi: Kubernetes version is 1.21
2024-01-31 04:20:50.722534 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="docker-registry.com:5000/sig-storage/csi-resizer:v1.3.0" (configmap)
2024-01-31 04:20:50.722544 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2024-01-31 04:20:51.111897 I | op-k8sutil: CSI_PROVISIONER_REPLICAS="2" (configmap)
2024-01-31 04:20:51.125746 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2024-01-31 04:20:51.125775 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="" (default)
2024-01-31 04:20:51.125780 I | op-k8sutil: CSI_PLUGIN_TOLERATIONS="" (default)
2024-01-31 04:20:51.125785 I | op-k8sutil: CSI_PLUGIN_NODE_AFFINITY="" (default)
2024-01-31 04:20:51.125789 I | op-k8sutil: CSI_RBD_PLUGIN_TOLERATIONS="" (default)
2024-01-31 04:20:51.125793 I | op-k8sutil: CSI_RBD_PLUGIN_NODE_AFFINITY="" (default)
2024-01-31 04:20:51.125797 I | op-k8sutil: CSI_RBD_PLUGIN_RESOURCE="" (default)
2024-01-31 04:20:51.146324 I | op-k8sutil: CSI_RBD_PROVISIONER_TOLERATIONS="" (default)
2024-01-31 04:20:51.146346 I | op-k8sutil: CSI_RBD_PROVISIONER_NODE_AFFINITY="" (default)
2024-01-31 04:20:51.146350 I | op-k8sutil: CSI_RBD_PROVISIONER_RESOURCE="" (default)
2024-01-31 04:20:51.171265 I | ceph-csi: successfully started CSI Ceph RBD driver
2024-01-31 04:20:51.321790 I | op-mon: targeting the mon count 3
2024-01-31 04:20:51.342979 I | op-mon: created canary deployment rook-ceph-mon-a-canary
2024-01-31 04:20:51.371021 I | op-mon: created canary deployment rook-ceph-mon-b-canary
2024-01-31 04:20:51.407831 I | op-mon: created canary deployment rook-ceph-mon-c-canary
2024-01-31 04:20:51.566555 I | op-k8sutil: CSI_CEPHFS_PLUGIN_TOLERATIONS="" (default)
2024-01-31 04:20:51.566624 I | op-k8sutil: CSI_CEPHFS_PLUGIN_NODE_AFFINITY="" (default)
2024-01-31 04:20:51.566636 I | op-k8sutil: CSI_CEPHFS_PLUGIN_RESOURCE="" (default)
2024-01-31 04:20:51.592971 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_TOLERATIONS="" (default)
2024-01-31 04:20:51.593012 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_NODE_AFFINITY="" (default)
2024-01-31 04:20:51.593022 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_RESOURCE="" (default)
2024-01-31 04:20:51.624593 I | ceph-csi: successfully started CSI CephFS driver
2024-01-31 04:20:52.321006 I | op-k8sutil: CSI_RBD_FSGROUPPOLICY="ReadWriteOnceWithFSType" (configmap)
2024-01-31 04:20:52.337493 I | ceph-csi: CSIDriver object created for driver "rook-ceph.rbd.csi.ceph.com"
2024-01-31 04:20:52.337553 I | op-k8sutil: CSI_CEPHFS_FSGROUPPOLICY="None" (configmap)
2024-01-31 04:20:52.346649 I | ceph-csi: CSIDriver object created for driver "rook-ceph.cephfs.csi.ceph.com"
2024-01-31 04:20:52.508282 I | op-mon: canary monitor deployment rook-ceph-mon-a-canary scheduled to mongodb2
2024-01-31 04:20:52.508313 I | op-mon: mon a assigned to node mongodb2
2024-01-31 04:20:52.707549 I | op-mon: canary monitor deployment rook-ceph-mon-b-canary scheduled to worker02
2024-01-31 04:20:52.707585 I | op-mon: mon b assigned to node worker02
2024-01-31 04:20:52.906356 I | op-mon: canary monitor deployment rook-ceph-mon-c-canary scheduled to worker01
2024-01-31 04:20:52.906402 I | op-mon: mon c assigned to node worker01
2024-01-31 04:20:52.917764 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-a-canary"
2024-01-31 04:20:52.927860 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-b-canary"
2024-01-31 04:20:52.975157 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-c-canary"
2024-01-31 04:20:53.028415 I | op-mon: creating mon a
2024-01-31 04:20:53.184592 I | op-mon: mon "a" endpoint is [v2:10.233.33.188:3300,v1:10.233.33.188:6789]
2024-01-31 04:20:53.710894 I | op-mon: monitor endpoints changed, updating the bootstrap peer token
2024-01-31 04:20:53.711003 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789"]}] data:a=10.233.33.188:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:-1]
2024-01-31 04:20:53.711036 I | op-mon: monitor endpoints changed, updating the bootstrap peer token
2024-01-31 04:20:54.307738 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:20:54.307945 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:20:54.717418 I | op-mon: 0 of 1 expected mons are ready. creating or updating deployments without checking quorum in attempt to achieve a healthy mon cluster
2024-01-31 04:20:54.953936 I | op-mon: updating maxMonID from -1 to 0 after committing mon "a"
2024-01-31 04:20:55.707451 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789"]}] data:a=10.233.33.188:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:0]
2024-01-31 04:20:55.707489 I | op-mon: waiting for mon quorum with [a]
2024-01-31 04:20:56.117744 I | op-mon: mon a is not yet running
2024-01-31 04:20:56.117792 I | op-mon: mons running: []
2024-01-31 04:20:56.306606 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789
2024-01-31 04:20:56.306673 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2024-01-31 04:20:56.306680 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph.ceph.rook.io/bucket"
2024-01-31 04:20:56.307720 I | op-bucket-prov: successfully reconciled bucket provisioner
I0131 04:20:56.307837       1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner"  "name"="rook-ceph.ceph.rook.io/bucket"
2024-01-31 04:21:16.242305 I | op-mon: mons running: [a]
2024-01-31 04:21:36.372853 I | op-mon: mons running: [a]
2024-01-31 04:21:56.507442 I | op-mon: mons running: [a]
2024-01-31 04:22:16.640019 I | op-mon: mons running: [a]
2024-01-31 04:22:36.766364 I | op-mon: mons running: [a]
2024-01-31 04:22:56.900685 I | op-mon: mons running: [a]
2024-01-31 04:23:17.036214 I | op-mon: mons running: [a]
2024-01-31 04:23:37.158008 I | op-mon: mons running: [a]
2024-01-31 04:23:57.289745 I | op-mon: mons running: [a]
2024-01-31 04:24:17.418868 I | op-mon: mons running: [a]
2024-01-31 04:24:37.552356 I | op-mon: mons running: [a]
2024-01-31 04:24:57.679921 I | op-mon: mons running: [a]
2024-01-31 04:25:17.815988 I | op-mon: mons running: [a]
2024-01-31 04:25:37.947630 I | op-mon: mons running: [a]
2024-01-31 04:25:58.079696 I | op-mon: mons running: [a]
2024-01-31 04:26:18.212728 I | op-mon: mons running: [a]
2024-01-31 04:26:38.337814 I | op-mon: mons running: [a]
2024-01-31 04:26:58.469950 I | op-mon: mons running: [a]
2024-01-31 04:27:18.602560 I | op-mon: mons running: [a]
2024-01-31 04:27:38.740226 I | op-mon: mons running: [a]
2024-01-31 04:27:58.872162 I | op-mon: mons running: [a]
2024-01-31 04:28:19.001903 I | op-mon: mons running: [a]
2024-01-31 04:28:39.136746 I | op-mon: mons running: [a]
2024-01-31 04:28:59.269134 I | op-mon: mons running: [a]
2024-01-31 04:29:19.405286 I | op-mon: mons running: [a]
2024-01-31 04:29:39.541549 I | op-mon: mons running: [a]
2024-01-31 04:29:59.682583 I | op-mon: mons running: [a]
2024-01-31 04:30:19.820507 I | op-mon: mons running: [a]
2024-01-31 04:30:39.965497 I | op-mon: mons running: [a]
2024-01-31 04:30:55.102730 E | ceph-cluster-controller: failed to reconcile CephCluster "rook-ceph/rook-ceph". failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to check mon quorum a: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum
2024-01-31 04:30:55.102766 I | op-k8sutil: Reporting Event rook-ceph:rook-ceph Warning:ReconcileFailed:failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to check mon quorum a: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum
2024-01-31 04:30:55.109825 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-01-31 04:30:55.117495 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789
2024-01-31 04:30:55.146057 I | ceph-spec: detecting the ceph image version for image docker-registry.com:5000/ceph/ceph:v16.2.7...
2024-01-31 04:30:58.236961 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2024-01-31 04:30:58.237016 I | ceph-cluster-controller: validating ceph version from provided image
2024-01-31 04:30:58.244205 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789
2024-01-31 04:30:58.247694 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:30:58.247896 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:31:13.365072 E | ceph-cluster-controller: failed to get ceph daemons versions, this typically happens during the first cluster initialization. failed to run 'ceph versions'. . timed out: exit status 1
2024-01-31 04:31:13.365114 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "docker-registry.com:5000/ceph/ceph:v16.2.7"
2024-01-31 04:31:13.428005 I | op-mon: start running mons
2024-01-31 04:31:13.434871 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789
2024-01-31 04:31:13.448713 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789"]}] data:a=10.233.33.188:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:0]
2024-01-31 04:31:13.459670 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:31:13.459889 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:31:14.413449 I | op-mon: targeting the mon count 3
2024-01-31 04:31:14.419264 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2024-01-31 04:31:29.420984 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:31:29.424714 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2024-01-31 04:31:44.426183 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:31:44.429829 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2024-01-31 04:31:59.431412 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:31:59.434668 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2024-01-31 04:32:14.435631 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:32:14.439200 W | op-mon: failed to set Rook and/or user-defined Ceph config options before starting mons; will retry after starting mons. failed to apply default Ceph configurations: failed to set one or more Ceph configs: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1
2024-01-31 04:32:14.439236 I | op-mon: creating mon b
2024-01-31 04:32:14.495063 I | op-mon: mon "a" endpoint is [v2:10.233.33.188:3300,v1:10.233.33.188:6789]
2024-01-31 04:32:14.513819 I | op-mon: mon "b" endpoint is [v2:10.233.14.130:3300,v1:10.233.14.130:6789]
2024-01-31 04:32:14.536604 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789","10.233.14.130:6789"]}] data:b=10.233.14.130:6789,a=10.233.33.188:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:0]
2024-01-31 04:32:14.644393 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:32:14.644595 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:32:15.054144 I | op-mon: 1 of 2 expected mon deployments exist. creating new deployment(s).
2024-01-31 04:32:15.061852 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2024-01-31 04:32:15.078219 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2024-01-31 04:32:15.244269 I | op-mon: updating maxMonID from 0 to 1 after committing mon "b"
2024-01-31 04:32:16.048829 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789","10.233.14.130:6789"]}] data:a=10.233.33.188:6789,b=10.233.14.130:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:1]
2024-01-31 04:32:16.048875 I | op-mon: waiting for mon quorum with [a b]
2024-01-31 04:32:16.450889 I | op-mon: mon b is not yet running
2024-01-31 04:32:16.450964 I | op-mon: mons running: [a]
2024-01-31 04:32:36.602987 I | op-mon: mons running: [a b]
2024-01-31 04:32:56.749363 I | op-mon: mons running: [a b]
2024-01-31 04:33:16.905039 I | op-mon: mons running: [a b]
2024-01-31 04:33:37.056411 I | op-mon: mons running: [a b]
2024-01-31 04:33:57.198993 I | op-mon: mons running: [a b]
2024-01-31 04:34:17.349799 I | op-mon: mons running: [a b]
2024-01-31 04:34:37.485499 I | op-mon: mons running: [a b]
2024-01-31 04:34:57.624542 I | op-mon: mons running: [a b]
2024-01-31 04:35:17.765391 I | op-mon: mons running: [a b]
2024-01-31 04:35:37.911444 I | op-mon: mons running: [a b]
2024-01-31 04:35:58.051425 I | op-mon: mons running: [a b]
2024-01-31 04:36:18.192221 I | op-mon: mons running: [a b]
2024-01-31 04:36:38.337863 I | op-mon: mons running: [a b]
2024-01-31 04:36:58.482764 I | op-mon: mons running: [a b]
2024-01-31 04:37:18.628753 I | op-mon: mons running: [a b]
2024-01-31 04:37:38.767484 I | op-mon: mons running: [a b]
2024-01-31 04:37:58.908104 I | op-mon: mons running: [a b]
2024-01-31 04:38:19.043427 I | op-mon: mons running: [a b]
2024-01-31 04:38:39.183066 I | op-mon: mons running: [a b]
2024-01-31 04:38:59.324757 I | op-mon: mons running: [a b]
2024-01-31 04:39:19.462919 I | op-mon: mons running: [a b]
2024-01-31 04:39:39.600750 I | op-mon: mons running: [a b]
2024-01-31 04:39:59.745682 I | op-mon: mons running: [a b]
2024-01-31 04:40:19.879803 I | op-mon: mons running: [a b]
2024-01-31 04:40:40.025388 I | op-mon: mons running: [a b]
2024-01-31 04:41:00.169048 I | op-mon: mons running: [a b]
2024-01-31 04:41:20.320758 I | op-mon: mons running: [a b]
2024-01-31 04:41:40.470972 I | op-mon: mons running: [a b]
2024-01-31 04:42:00.612143 I | op-mon: mons running: [a b]
2024-01-31 04:42:15.750751 E | ceph-cluster-controller: failed to reconcile CephCluster "rook-ceph/rook-ceph". failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to check mon quorum b: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum
2024-01-31 04:42:15.750845 I | op-k8sutil: Reporting Event rook-ceph:rook-ceph Warning:ReconcileFailed:failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to check mon quorum b: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum
2024-01-31 04:42:15.761726 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-01-31 04:42:15.770923 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789,b=10.233.14.130:6789
2024-01-31 04:42:15.799718 I | ceph-spec: detecting the ceph image version for image docker-registry.com:5000/ceph/ceph:v16.2.7...
2024-01-31 04:42:18.900022 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2024-01-31 04:42:18.900068 I | ceph-cluster-controller: validating ceph version from provided image
2024-01-31 04:42:18.906217 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789,b=10.233.14.130:6789
2024-01-31 04:42:18.910053 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:42:18.910185 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:42:34.026304 E | ceph-cluster-controller: failed to get ceph daemons versions, this typically happens during the first cluster initialization. failed to run 'ceph versions'. . timed out: exit status 1
2024-01-31 04:42:34.026355 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "docker-registry.com:5000/ceph/ceph:v16.2.7"
2024-01-31 04:42:34.085987 I | op-mon: start running mons
2024-01-31 04:42:34.093018 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789,b=10.233.14.130:6789
2024-01-31 04:42:34.110632 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.14.130:6789","10.233.33.188:6789"]}] data:a=10.233.33.188:6789,b=10.233.14.130:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:1]
2024-01-31 04:42:34.122766 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:42:34.122935 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:42:35.074100 I | op-mon: targeting the mon count 3
2024-01-31 04:42:35.079594 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2024-01-31 04:42:50.081302 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:42:50.084728 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2024-01-31 04:43:05.085764 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:43:05.089579 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2024-01-31 04:43:20.091245 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:43:20.095526 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2024-01-31 04:43:35.096510 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 04:43:35.099940 W | op-mon: failed to set Rook and/or user-defined Ceph config options before starting mons; will retry after starting mons. failed to apply default Ceph configurations: failed to set one or more Ceph configs: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1
2024-01-31 04:43:35.099980 I | op-mon: creating mon c
2024-01-31 04:43:35.140863 I | op-mon: mon "a" endpoint is [v2:10.233.33.188:3300,v1:10.233.33.188:6789]
2024-01-31 04:43:35.186554 I | op-mon: mon "b" endpoint is [v2:10.233.14.130:3300,v1:10.233.14.130:6789]
2024-01-31 04:43:35.203955 I | op-mon: mon "c" endpoint is [v2:10.233.22.49:3300,v1:10.233.22.49:6789]
2024-01-31 04:43:35.508172 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.14.130:6789","10.233.22.49:6789","10.233.33.188:6789"]}] data:c=10.233.22.49:6789,a=10.233.33.188:6789,b=10.233.14.130:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:1]
2024-01-31 04:43:36.105410 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 04:43:36.105696 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 04:43:36.520029 I | op-mon: 2 of 3 expected mon deployments exist. creating new deployment(s).
2024-01-31 04:43:36.565653 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2024-01-31 04:43:36.579723 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2024-01-31 04:43:36.587462 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2024-01-31 04:43:36.603004 I | op-k8sutil: deployment "rook-ceph-mon-b" did not change, nothing to update
2024-01-31 04:43:36.710473 I | op-mon: updating maxMonID from 1 to 2 after committing mon "c"
2024-01-31 04:43:37.509348 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789","10.233.14.130:6789","10.233.22.49:6789"]}] data:a=10.233.33.188:6789,b=10.233.14.130:6789,c=10.233.22.49:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:2]
2024-01-31 04:43:37.509400 I | op-mon: waiting for mon quorum with [a b c]
2024-01-31 04:43:38.112951 I | op-mon: mon c is not yet running
2024-01-31 04:43:38.113010 I | op-mon: mons running: [a b]
2024-01-31 04:43:58.263454 I | op-mon: mons running: [a b c]
2024-01-31 04:44:18.414507 I | op-mon: mons running: [a b c]
2024-01-31 04:44:38.558758 I | op-mon: mons running: [a b c]
2024-01-31 04:44:58.713984 I | op-mon: mons running: [a b c]
2024-01-31 04:45:18.864728 I | op-mon: mons running: [a b c]
2024-01-31 04:45:39.013296 I | op-mon: mons running: [a b c]
2024-01-31 04:45:59.171675 I | op-mon: mons running: [a b c]
2024-01-31 04:46:19.334523 I | op-mon: mons running: [a b c]
2024-01-31 04:46:39.486447 I | op-mon: mons running: [a b c]
2024-01-31 04:46:59.636838 I | op-mon: mons running: [a b c]
2024-01-31 04:47:19.791180 I | op-mon: mons running: [a b c]
2024-01-31 04:47:39.951070 I | op-mon: mons running: [a b c]
2024-01-31 04:48:00.102725 I | op-mon: mons running: [a b c]
2024-01-31 04:48:20.252233 I | op-mon: mons running: [a b c]
2024-01-31 04:48:40.411627 I | op-mon: mons running: [a b c]
2024-01-31 04:49:00.557747 I | op-mon: mons running: [a b c]
2024-01-31 04:49:20.717466 I | op-mon: mons running: [a b c]
2024-01-31 04:49:40.867389 I | op-mon: mons running: [a b c]
2024-01-31 04:50:01.036149 I | op-mon: mons running: [a b c]
2024-01-31 04:50:21.194084 I | op-mon: mons running: [a b c]
2024-01-31 04:50:41.344770 I | op-mon: mons running: [a b c]
2024-01-31 04:51:01.498813 I | op-mon: mons running: [a b c]
2024-01-31 04:51:21.662478 I | op-mon: mons running: [a b c]
2024-01-31 04:51:41.829757 I | op-mon: mons running: [a b c]
2024-01-31 04:52:01.982253 I | op-mon: mons running: [a b c]
2024-01-31 04:52:22.148524 I | op-mon: mons running: [a b c]
2024-01-31 04:52:42.305792 I | op-mon: mons running: [a b c]
2024-01-31 04:53:02.457518 I | op-mon: mons running: [a b c]
2024-01-31 04:53:22.610609 I | op-mon: mons running: [a b c]

2024-01-31 05:25:47.045573 I | op-mon: mons running: [b c a]
2024-01-31 05:26:07.204213 I | op-mon: mons running: [b c a]
2024-01-31 05:26:27.361546 I | op-mon: mons running: [b c a]
2024-01-31 05:26:47.510772 I | op-mon: mons running: [b c a]
2024-01-31 05:27:07.666803 I | op-mon: mons running: [b c a]
2024-01-31 05:27:27.826595 I | op-mon: mons running: [b c a]
2024-01-31 05:27:42.967237 E | ceph-cluster-controller: failed to reconcile CephCluster "rook-ceph/rook-ceph". failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to check mon quorum b: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum
2024-01-31 05:27:42.967310 I | op-k8sutil: Reporting Event rook-ceph:rook-ceph Warning:ReconcileFailed:failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to check mon quorum b: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum
2024-01-31 05:27:43.127788 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-01-31 05:27:43.137932 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789,b=10.233.14.130:6789,c=10.233.22.49:6789
2024-01-31 05:27:43.163665 I | ceph-spec: detecting the ceph image version for image docker-registry.com:5000/ceph/ceph:v16.2.7...
2024-01-31 05:27:46.285565 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2024-01-31 05:27:46.285613 I | ceph-cluster-controller: validating ceph version from provided image
2024-01-31 05:27:46.291981 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789,b=10.233.14.130:6789,c=10.233.22.49:6789
2024-01-31 05:27:46.296179 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 05:27:46.296409 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 05:28:01.415700 E | ceph-cluster-controller: failed to get ceph daemons versions, this typically happens during the first cluster initialization. failed to run 'ceph versions'. . timed out: exit status 1
2024-01-31 05:28:01.415788 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "docker-registry.com:5000/ceph/ceph:v16.2.7"
2024-01-31 05:28:01.487004 I | op-mon: start running mons
2024-01-31 05:28:01.494202 I | op-mon: parsing mon endpoints: a=10.233.33.188:6789,b=10.233.14.130:6789,c=10.233.22.49:6789
2024-01-31 05:28:01.510663 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.33.188:6789","10.233.14.130:6789","10.233.22.49:6789"]}] data:a=10.233.33.188:6789,b=10.233.14.130:6789,c=10.233.22.49:6789 mapping:{"node":{"a":{"Name":"mongodb2","Hostname":"mongodb2","Address":"172.26.1.57"},"b":{"Name":"worker02","Hostname":"worker02","Address":"172.26.0.237"},"c":{"Name":"worker01","Hostname":"worker01","Address":"172.26.0.233"}}} maxMonId:2]
2024-01-31 05:28:01.522281 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-01-31 05:28:01.522488 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-01-31 05:28:02.466518 I | op-mon: targeting the mon count 3
2024-01-31 05:28:02.473337 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2024-01-31 05:28:17.474890 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 05:28:17.478403 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2024-01-31 05:28:32.479729 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 05:28:32.483738 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2024-01-31 05:28:47.485210 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 05:28:47.488885 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2024-01-31 05:29:02.490181 I | exec: timeout waiting for process ceph to return. Sending interrupt signal to the process
2024-01-31 05:29:02.494189 W | op-mon: failed to set Rook and/or user-defined Ceph config options before starting mons; will retry after starting mons. failed to apply default Ceph configurations: failed to set one or more Ceph configs: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override ConfigMap. output: Cluster connection aborted: exit status 1: failed to set ceph config in the centralized mon configuration database; you may need to use the rook-config-override 
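
The repeated "mons running" lines ending in "exceeded max retry count waiting for monitors to reach quorum", together with the "Cluster connection aborted" timeouts, suggest that the operator can schedule the mon pods but cannot actually talk to the mon daemons. A minimal diagnostic sketch (assuming the default rook-ceph namespace and, for the ceph commands, the optional rook-ceph-tools toolbox deployment from the Rook examples):

# are all three mon pods Running, and on which nodes?
kubectl -n rook-ceph get pods -l app=rook-ceph-mon -o wide

# the service IPs the operator dials (the 10.233.x.x addresses in the log above)
kubectl -n rook-ceph get svc -l app=rook-ceph-mon

# mon logs usually state why quorum is not forming (unreachable peers, clock skew, stale cluster data)
kubectl -n rook-ceph logs deploy/rook-ceph-mon-a

# if the toolbox is deployed, ask the mons directly with a short timeout
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s --connect-timeout 10

If ceph -s also hangs from inside the cluster, the problem is usually network reachability to the mon ports (3300/6789) between nodes rather than anything in the Rook configuration.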

Expected behavior:

How to reproduce it (minimal and precise):

File(s) to submit:

Logs to submit:

Cluster Status to submit:

Environment:

vadlakiran commented 7 months ago

@travisn can you please help me with this?

vadlakiran commented 7 months ago

I have looked into the issue and found that the "rook-ceph-pdbstatemap" ConfigMap is not being created. I created it manually, but still no luck. Can anyone help with how to resolve this?
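
As a quick sanity check (a sketch, assuming the default rook-ceph namespace), it may be more useful to look at what the operator has created so far and at the CephCluster status message, which normally points at the step the reconcile is stuck on, rather than creating resources by hand:

kubectl -n rook-ceph get configmaps
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.status.phase}{" - "}{.status.message}{"\n"}'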

sp98 commented 7 months ago

I have looked into the issue and found that the "rook-ceph-pdbstatemap" ConfigMap is not being created. I created it manually, but still no luck. Can anyone help with how to resolve this?

@vadlakiran That's an older version of Rook you are using. Can you try with the latest version?
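
If an upgrade is an option, the rough shape of a manifest-based upgrade is sketched below; the exact files, the supported upgrade path (one minor release at a time), and the image tags should be taken from the official upgrade guide for the target release, and the images mirrored into the private registry first (the tag here is only illustrative):

# apply the CRDs and common resources from the target release's deploy/examples
kubectl apply -f crds.yaml -f common.yaml
# then point the operator at the new image
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.9.13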

vadlakiran commented 7 months ago

@sp98 We have not yet upgraded to the latest version; we are running v1.8.1 in a production environment. How can I fix this on this version?

sp98 commented 7 months ago

failed to start ceph monitors: failed to start mon pods: failed to check mon quorum a: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum

Did you remove and install rook again on the same cluster?
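
A reinstall on the same nodes without a full cleanup is a common cause of this symptom: mon data and the old cluster fsid left under dataDirHostPath (default /var/lib/rook) keep the new mons from ever forming quorum. A minimal check, assuming the default path:

# on each storage node, look for data left behind by a previous install
ls -l /var/lib/rook
# the fsid the current operator generated, to compare with anything found on disk
kubectl -n rook-ceph get secret rook-ceph-mon -o jsonpath='{.data.fsid}' | base64 -d; echo

If stale data is found, the cleanup steps from the Rook documentation (delete the CephCluster, wipe /var/lib/rook on every node, zap the disks previously used by OSDs) should be completed before reinstalling.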

sp98 commented 7 months ago

@vadlakiran A few questions:

github-actions[bot] commented 5 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 5 months ago

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.