rook / rook

Storage Orchestration for Kubernetes
https://rook.io
Apache License 2.0

All CRDs should output age and other parameters with kubectl #13775

Open parth-gr opened 7 months ago

parth-gr commented 7 months ago

Is this a bug report or feature request?

Some CRDs output the age and other fields with kubectl:

[rider@localhost rbd]$ kubectl get sc
NAME                 PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block      rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   7s

But some do not:

[rider@localhost rbd]$ kubectl get cephblockpool -nrook-ceph
NAME          PHASE
builtin-mgr   Ready
replicapool   Ready

The output should be consistent and informative across all CRDs.

What should the feature do:

The kubectl get output should include more columns, possibly by exposing additional values through each CR's status.

What is use case behind this feature:

Better tracking of resource state at a glance.

Environment:

NymanRobin commented 6 months ago

Hey @parth-gr, I would like to work on this issue! I guess the major part of the problem is deciding what information is relevant for the output of the get command besides the obvious age. If it sounds good to you, I could propose something here first and then move to implementation, or maybe you already have some ideas about the fields?

For example, the cephblockpools output could look like the following:

$ kubectl get cephblockpools -n rook-ceph
NAMESPACE   NAME          PHASE   FAILURE DOMAIN   REPLICATION SIZE   ERASURE CODING   ERASURE CODING DATA CHUNKS   AGE
rook-ceph   replicapool   Ready   host              3                  0                0                            25m

Or maybe REPLICATION SIZE, ERASURE CODING, and ERASURE CODING DATA CHUNKS could be shown only with the -o wide option.

I can also look into the other custom resources that are uninformative.
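
To make the mechanism concrete, here is a minimal sketch (not Rook's actual types file) of how such columns are declared with kubebuilder printcolumn markers on the CRD's Go type. The spec/status structs and JSONPaths below are illustrative stand-ins only:

package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Illustrative stand-ins for the real spec/status structs.
type ReplicatedSpec struct {
	Size uint `json:"size"`
}

type CephBlockPoolSpec struct {
	FailureDomain string         `json:"failureDomain,omitempty"`
	Replicated    ReplicatedSpec `json:"replicated,omitempty"`
}

type CephBlockPoolStatus struct {
	Phase string `json:"phase,omitempty"`
}

// kubebuilder printcolumn markers drive the `kubectl get` columns; the
// JSONPaths must match Rook's real spec/status fields.
// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=`.status.phase`
// +kubebuilder:printcolumn:name="Failure Domain",type=string,JSONPath=`.spec.failureDomain`
// +kubebuilder:printcolumn:name="Replication Size",type=integer,JSONPath=`.spec.replicated.size`
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
type CephBlockPool struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`
	Spec              CephBlockPoolSpec    `json:"spec"`
	Status            *CephBlockPoolStatus `json:"status,omitempty"`
}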

parth-gr commented 6 months ago

@NymanRobin Yes, you understand it correctly. Before opening a PR, I would suggest first discussing which fields you plan to add for each CR.

And self-assign the issue using https://rook.io/docs/rook/latest-release/Contributing/development-flow/#self-assign-issue

parth-gr commented 6 months ago

Or maybe REPLICATION SIZE, ERASURE CODING, and ERASURE CODING DATA CHUNKS could be shown only with the -o wide option.

If we can hide those behind -o wide, that would be nice.

NymanRobin commented 6 months ago

/assign

github-actions[bot] commented 6 months ago

Thanks for taking this issue! Let us know if you have any questions!

NymanRobin commented 6 months ago

@parth-gr I have some initial ideas on what could be done. Please feel free to comment on this with concerns and improvement ideas. Thanks a lot in advance!

Design proposal of new kubectl get output for rook custom resource definitions:

CephBlockPool:

$ kubectl get cephblockpools -n rook-ceph -o wide
NAME          PHASE    TYPE          FAILURE DOMAIN   REPLICATION SIZE   ERASURE CODING   ERASURE CODING DATA CHUNKS   AGE
replicapool   Ready    Replicated    host             3                  0                0                            25m
$ kubectl get cephblockpools -n rook-ceph
NAME          PHASE    TYPE          FAILURE DOMAIN     AGE
replicapool   Ready    Replicated    host               25m

Writing the "type" (replicated/erasure coded) into the Status could be achieved via the isReplicated function. The failure domain could also be filled out in the Status, just to ensure it is explicitly visible even when the default failure domain is used. Would this be a good approach? For example, using the Status.info fields similarly to the CephObjectStorage case?
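
A rough, self-contained sketch of that idea (stand-in types and an assumed helper name, not Rook's actual reconcile code): the operator writes the pool type and failure domain into status.Info, so printcolumns could read .status.info.type and .status.info.failureDomain:

package main

import "fmt"

// Minimal stand-in types for illustration; not Rook's actual API structs.
type ReplicatedSpec struct{ Size uint }

type PoolSpec struct {
	FailureDomain string
	Replicated    ReplicatedSpec
}

type PoolStatus struct{ Info map[string]string }

type CephBlockPool struct {
	Spec   PoolSpec
	Status *PoolStatus
}

// updateStatusInfo sketches how a reconcile could record "type" and
// "failureDomain" in status.Info. The helper name and the default failure
// domain below are assumptions made for this example only.
func updateStatusInfo(pool *CephBlockPool) {
	if pool.Status == nil {
		pool.Status = &PoolStatus{}
	}
	if pool.Status.Info == nil {
		pool.Status.Info = map[string]string{}
	}

	// Replicated vs erasure-coded, mirroring the isReplicated idea above.
	if pool.Spec.Replicated.Size > 0 {
		pool.Status.Info["type"] = "Replicated"
	} else {
		pool.Status.Info["type"] = "Erasure Coded"
	}

	// Record the failure domain explicitly so the column is populated even
	// when the spec leaves it empty and a default applies.
	fd := pool.Spec.FailureDomain
	if fd == "" {
		fd = "host" // assumed default, for illustration only
	}
	pool.Status.Info["failureDomain"] = fd
}

func main() {
	p := &CephBlockPool{Spec: PoolSpec{Replicated: ReplicatedSpec{Size: 3}}}
	updateStatusInfo(p)
	fmt.Println(p.Status.Info) // map[failureDomain:host type:Replicated]
}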

CephObjectStorage:

$ kubectl get cephobjectstores -n rook-ceph
NAME          PHASE      PreservePoolsOnDelete      ENDPOINT                                                     AGE
my-store      Ready      True                       http://rook-ceph-rgw-my-store.rook-ceph.svc:80               25m

If PreservePoolsOnDelete is added to the object store, it would make sense to add it to the filesystem as well. Also, do you think it would make sense to have a column that indicates whether mirroring is enabled in both CephObjectStorage and CephFileSystem?

The rest of the custom resource definitions missing AGE are as follows: CephObjectStoreUser, CephObjectZoneGroup, CephObjectZone, CephBucketTopic, CephClient, CephRBDMirror, CephFilesystemMirror, CephFilesystemSubVolumeGroup. Maybe for these, I can start by adding AGE, and if needed down the line, the output can be refined more?
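
For those AGE-only cases, the per-CRD change is presumably a single printcolumn marker on each Go type; CephClient is shown below only as one illustrative example:

package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Sketch: one date column pointing at creationTimestamp is enough for an AGE
// column; kubectl renders it as a relative age. The same marker would be
// added to each of the types listed above.
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
type CephClient struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`
	// Spec and Status are omitted in this sketch.
}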

parth-gr commented 6 months ago

It looks good to me, you can start with a draft PR

I would like to have everyone else's feedback and suggestions on it: @travisn @subhamkrai @sp98 @thotz

thotz commented 6 months ago

CephObjectStorage:

$ kubectl get cephobjectstores -n rook-ceph
NAME          PHASE      PreservePoolsOnDelete      ENDPOINT                                                     AGE
my-store      Ready      True                       http://rook-ceph-rgw-my-store.rook-ceph.svc:80               25m

IMO preservePoolsOnDelete is not needed; instead, show both the secure and non-secure endpoints, plus the TLS cert secret name for the secure endpoint.

parth-gr commented 6 months ago

@nabokihms how can we have the controller runtime update the status for the -o wide output?

NymanRobin commented 6 months ago

@parth-gr To make the fields appear only with -o wide, I think we can simply set the priority on the printcolumn. Or did you mean how to populate those values?
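
For reference, a sketch of how that priority setting could look on the wide-only columns (the JSONPaths are illustrative and would need to match the real CephBlockPool spec fields):

package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Sketch: columns with priority=1 are hidden from the default view and only
// shown with `kubectl get ... -o wide`.
// +kubebuilder:printcolumn:name="Replication Size",type=integer,JSONPath=`.spec.replicated.size`,priority=1
// +kubebuilder:printcolumn:name="EC Data Chunks",type=integer,JSONPath=`.spec.erasureCoded.dataChunks`,priority=1
type CephBlockPool struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`
}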

NymanRobin commented 6 months ago

@thotz Sure, I do not have any super strong reason to keep the preservePoolsOnDelete. But do you think it is better to display the endpoints in separate columns instead of using one column for both secure and insecure? I guess alternatively a secure column that takes a boolean value could be added 🤔

thotz commented 6 months ago

@thotz Sure, I do not have any super strong reason to keep the preservePoolsOnDelete. But do you think it is better to display the endpoints in separate columns instead of using one column for both secure and insecure? I guess alternatively a secure column that takes a boolean value could be added 🤔

For secure, the port will be different. I guess instead of endpoint, let's have host, port, and securePort columns.
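
A hedged sketch of what those columns could look like as printcolumn markers; the port and securePort JSONPaths assume the gateway spec fields, and the Host path is an assumption about where that value would be surfaced in status:

package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Sketch of the suggested host/port/securePort columns for the object store.
// All JSONPaths here are assumptions for illustration, not verified fields.
// +kubebuilder:printcolumn:name="Host",type=string,JSONPath=`.status.info.endpoint`
// +kubebuilder:printcolumn:name="Port",type=integer,JSONPath=`.spec.gateway.port`
// +kubebuilder:printcolumn:name="SecurePort",type=integer,JSONPath=`.spec.gateway.securePort`
type CephObjectStore struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`
}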

parth-gr commented 5 months ago

@NymanRobin For the follow-up PRs, if interested, you can target the radosnamespace and subvolumegroup CRDs and add more info related to the storageclass, etc. Similarly for the object fields.

NymanRobin commented 5 months ago

@parth-gr Happy to contribute, so will look into these soon!

NymanRobin commented 5 months ago

Hey @parth-gr! Do you know of any more CRDs that should or could be improved in the scope of this ticket, or should it be closed? If there is more, I am happy to help :)

parth-gr commented 5 months ago

Can you try listing all the CRDs and see what each one outputs? Then we can discuss what needs to be improved:

  - cephclients
  - cephclusters
  - cephblockpools
  - cephfilesystems
  - cephnfses
  - cephobjectstores
  - cephobjectstoreusers
  - cephobjectrealms
  - cephobjectzonegroups
  - cephobjectzones
  - cephbuckettopics
  - cephbucketnotifications
  - cephrbdmirrors
  - cephfilesystemmirrors
  - cephfilesystemsubvolumegroups
  - cephblockpoolradosnamespaces
  - cephcosidrivers

NymanRobin commented 5 months ago

Thanks, sounds like a good exercise! I will get back to you with some suggestions and we can discuss more, if any are needed!

github-actions[bot] commented 3 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

NymanRobin commented 3 months ago

Yes, there is still work left here; I have been a bit busy, sorry! I have compiled a list of CRDs that I should at least revisit, and it looks something like this; these have no or close to no information: cephnfses, CephObjectRealm, cephbucketnotification, CephRBDMirror, cephfilesystemmirror, CephCOSIDriver

However, if someone else is eager to work on this please feel free

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

parth-gr commented 1 month ago

@nabokihms if you are still busy and this needs some work, can we open it up for other devs to contribute?