Is your feature request related to a problem?
Improving the visibility of the status of ACK custom resources on Kubernetes would make it much easier to identify failed resources quickly and to debug why they failed.
In GCP Config Connector, when kubectl get is run, the status of each resource is visible. For example:
kubectl get pubsubtopics
NAME                                                    AGE    READY   STATUS         STATUS AGE
pubsubtopic.pubsub.cnrm.cloud.google.com/int-pubsub-1   334d   True    UpToDate       145d
pubsubtopic.pubsub.cnrm.cloud.google.com/int-pubsub-2   329d   False   Updating       1s
pubsubtopic.pubsub.cnrm.cloud.google.com/int-pubsub-3   13d    False   UpdateFailed   13d
...
In ACK, by contrast, even with the -o wide flag we get no additional information about the resource. For example:
kubectl get buckets -o wide
NAME                         AGE
my-ack-s3-bucket-us-east-1   37d
my-ack-s3-bucket-us-west-2   37d
...
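The status information itself does exist on ACK resources (the controllers set conditions such as ACK.ResourceSynced on each resource); it just is not surfaced in the list output. For example, the conditions of a single resource can be dumped with JSONPath (resource name below is illustrative, output abridged):

kubectl get bucket my-ack-s3-bucket-us-east-1 -o jsonpath='{.status.conditions}'
# e.g. [{"type":"ACK.ResourceSynced","status":"True",...}]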
Describe the solution you'd like
We would like to see the status of each resource when all resources are listed with a kubectl get command. Alternative solutions that provide a quick and easy way to check the status of all resources would also be viable.
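The standard Kubernetes mechanism for this is additionalPrinterColumns on the CRDs, which is how Config Connector gets its READY/STATUS columns. A minimal sketch of what this could look like on an ACK Bucket CRD (the column layout is a suggestion, not the current ACK schema; only the relevant excerpt is shown):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: buckets.s3.services.k8s.aws
spec:
  # ... group, names, scope omitted ...
  versions:
    - name: v1alpha1
      served: true
      storage: true
      # Extra columns rendered by `kubectl get buckets`
      additionalPrinterColumns:
        - name: Synced
          type: string
          description: Whether the resource is synced with AWS
          jsonPath: .status.conditions[?(@.type=="ACK.ResourceSynced")].status
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp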
Describe alternatives you've considered
We could run kubectl describe on every single resource to check its state, but this quickly becomes infeasible when managing an environment with thousands of resources; our scale exceeds 10,000 resources in some environments.
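As a stopgap, kubectl's custom-columns output can pull the synced condition for an entire resource kind in one call, which scales better than per-resource describes (this assumes the ACK.ResourceSynced condition type is set on the resources):

kubectl get buckets -o custom-columns='NAME:.metadata.name,SYNCED:.status.conditions[?(@.type=="ACK.ResourceSynced")].status'
# Prints one row per bucket, e.g.:
# NAME                         SYNCED
# my-ack-s3-bucket-us-east-1   True

This still requires knowing the condition type for each controller and does not surface failure messages, which is why first-class printer columns on the CRDs would remain preferable.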