andy108369 opened 1 year ago
The akash-nodes ceph pool references have been removed from documentation.
Leaving this issue open, as the second piece — sending instructions to current/pre-existing providers for removal of the akash-nodes pool — has not been completed.

@chainzero I've created and tested the following procedure on the Hurricane provider before I rebuilt it. We can use it:
Remove the `akash-nodes` ceph pool from your `rook-ceph-cluster.values.yml` config file, i.e. delete the `akash-nodes` section from `cephBlockPools`.

The entire `akash-nodes` section needs to be removed, for example:
```yaml
  - name: akash-nodes
    spec:
      failureDomain: osd
      replicated:
        size: 2
      parameters:
        min_size: "2"
    storageClass:
      enabled: true
      name: akash-nodes
      isDefault: false
      reclaimPolicy: Delete
      allowVolumeExpansion: true
      parameters:
        # RBD image format. Defaults to "2".
        imageFormat: "2"
        # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
        imageFeatures: layering
        # The secrets contain Ceph admin credentials.
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
        csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
        # Specify the filesystem type of the volume. If not specified, csi-provisioner
        # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
        # in hyperconverged settings where the volume is mounted on the same node as the osds.
        csi.storage.k8s.io/fstype: ext4
```
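As a quick sanity check before applying the change, you can confirm that no `akash-nodes` entry remains in the edited file. This is only a sketch and assumes mikefarah yq v4 is available on the host:

```bash
# Sketch (assumes yq v4 is installed): print any remaining cephBlockPools
# entry named "akash-nodes"; no output means the section has been removed.
yq '.cephBlockPools[] | select(.name == "akash-nodes")' rook-ceph-cluster.values.yml
```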
Check which rook-ceph chart version you are running; in my case it was `1.10.11`:
```
$ helm list -A
NAME               NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                       APP VERSION
...
rook-ceph          rook-ceph  3         2023-05-04 17:31:55.487906039 +0200 CEST  deployed  rook-ceph-v1.10.11          v1.10.11
rook-ceph-cluster  rook-ceph  4         2023-05-04 13:21:00.196318672 +0200 CEST  deployed  rook-ceph-cluster-v1.10.11  v1.10.11
```
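If you prefer not to copy the version by hand, a small helper along these lines can capture it for reuse in the upgrade command below (a sketch; assumes `jq` is installed and the release is named `rook-ceph-cluster`):

```bash
# Sketch: extract the rook-ceph-cluster chart version (e.g. "1.10.11") from helm.
# Assumes jq is installed and the helm release name is "rook-ceph-cluster".
ROOK_VERSION=$(helm list -n rook-ceph -o json | jq -r '.[] | select(.name=="rook-ceph-cluster") | .app_version' | sed 's/^v//')
echo "$ROOK_VERSION"
```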
Apply the new `rook-ceph-cluster.values.yml` config file from which you have removed the `akash-nodes` section under `cephBlockPools`:

Make sure to specify the same rook-ceph-cluster chart version you are running! (`1.10.11` in my case)
```bash
helm upgrade --create-namespace -n rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster --version 1.10.11 -f rook-ceph-cluster.values.yml
```
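Optionally, you can preview what the upgrade will change before applying it. This is a sketch and assumes the helm-diff plugin is installed (it is not present by default):

```bash
# Sketch: dry-run preview of the change (requires the helm-diff plugin).
helm diff upgrade -n rook-ceph rook-ceph-cluster \
  --set operatorNamespace=rook-ceph \
  rook-release/rook-ceph-cluster --version 1.10.11 \
  -f rook-ceph-cluster.values.yml
```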
In the cluster events you can see the `akash-nodes` pool got deleted:
```
root@control-01:~# kubectl get events -A --sort-by='.metadata.creationTimestamp'
NAMESPACE   LAST SEEN   TYPE     REASON               OBJECT                       MESSAGE
rook-ceph   2m19s       Normal   Deleting             cephblockpool/akash-nodes    deleting CephBlockPool "rook-ceph/akash-nodes"
rook-ceph   2m19s       Normal   ReconcileStarted     cephblockpool/akash-nodes    starting blockpool deletion
rook-ceph   2m11s       Normal   ReconcileSucceeded   cephblockpool/akash-nodes    successfully configured CephBlockPool "rook-ceph/akash-nodes"
rook-ceph   2m11s       Normal   ReconcileSucceeded   cephblockpool/akash-nodes    successfully removed finalizer
rook-ceph   2m11s       Normal   ReconcileSucceeded   cephblockpool/akash-nodes    successfully configured CephBlockPool "rook-ceph/akash-nodes"
```
And you can see `akash-nodes` is not present anymore:
```
root@control-01:~# kubectl -n rook-ceph get cephblockpool
NAME                PHASE
akash-deployments   Ready
```
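The removed `cephBlockPools` entry also defined an `akash-nodes` StorageClass, so it is worth confirming that it is gone as well (a sketch; the exact StorageClass list will vary per cluster):

```bash
# Sketch: the "akash-nodes" StorageClass defined by the removed section
# should no longer appear in the output.
kubectl get storageclass
```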
Then check the overall cluster state and ceph status:

```bash
kubectl -n rook-ceph get cephclusters
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status
```
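If you want to script this, a minimal wait loop like the following can block until Ceph reports HEALTH_OK again (a sketch; assumes the rook-ceph-tools pod is running):

```bash
# Sketch: poll ceph health via the rook-ceph-tools pod until it reports HEALTH_OK.
TOOLS=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')
until kubectl -n rook-ceph exec -i "$TOOLS" -- ceph health | grep -q HEALTH_OK; do
  echo "waiting for HEALTH_OK..."
  sleep 5
done
```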
Additional rook-ceph commands you may find useful:
```bash
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph df
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd tree
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd pool autoscale-status
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd pool ls detail
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash -c 'ceph osd pool ls | while read POOL; do echo "=== pool: $POOL ==="; rbd -p "$POOL" ls | while read VOL; do ceph osd map "$POOL" "$VOL"; done; done'
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph pg ls
```
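For a final Ceph-level check that the pool itself is gone, something like the following can be used (a sketch built on the same rook-ceph-tools pattern as above):

```bash
# Sketch: list the Ceph pools and confirm "akash-nodes" is absent.
kubectl -n rook-ceph exec -i $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd pool ls | grep -x akash-nodes || echo "akash-nodes pool not found (expected)"
```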
It looks like the `akash-nodes` ceph pool isn't used by anything. As I don't see any reason for it, I propose removing it: drop it from the docs and provide the providers with instructions to remove it. @troian thoughts?
refs. https://github.com/akash-network/support/issues/97