Open leonp-c opened 3 months ago
This seems to be a server-side issue. Have you verified if kubectl has the same problem?
Using kubectl returned all values as expected.
Hi @leonp-c,
I tried to reproduce the issue you reported, and everything worked as expected on my end. Here’s what I did:
If everything looks correct and the issue persists, feel free to share more details about your setup.
Hi @Bhargav-manepalli, it seems the issue was related to the hikaru module, which I used to parse the YAML and create the resource: hikaru removes/ignores the empty dictionary field from the v1 object. A bug was opened on their repo: hikaru-43. Thank you for your effort.
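For context, the failure mode can be sketched in plain Python. The `prune_empty` function below is a hypothetical illustration of this kind of "drop empty values" serialization step, not hikaru's actual code:

```python
# Hypothetical sketch of the root cause: a serializer that prunes "empty"
# values will drop subresources.status == {}, which changes the meaning of
# the CRD (status subresource disabled instead of enabled).

def prune_empty(obj):
    """Recursively drop None values and empty dicts (illustrative only)."""
    if isinstance(obj, dict):
        return {k: prune_empty(v) for k, v in obj.items()
                if v is not None and v != {}}
    if isinstance(obj, list):
        return [prune_empty(v) for v in obj]
    return obj

manifest = {
    "spec": {
        "versions": [{
            "name": "v1",
            # {} is meaningful here: it tells the API server to
            # enable the /status subresource for this version.
            "subresources": {"status": {}},
        }]
    }
}

pruned = prune_empty(manifest)
# After pruning, "status" is gone from "subresources", so the CRD
# would be created without the status subresource.
```

This is why the CRD looked correct in the source YAML but came back from `kubectl get crd ... -o yaml` without the `status` field.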
**What happened:** Registering a CRD whose `spec.versions[].subresources.status` is set to `{}` (an empty dict) does not register the status subresource in k8s. After checking from the command line with

`kubectl get crd some.custom.crd.ai -o yaml`

the resulting YAML is missing `status` under `subresources`.
**What you expected to happen:** `status` should exist, so that the following call (kubernetes package) would work:

`custom_objects_api.get_namespaced_custom_object(group=self.group, version=self.version, namespace=self.namespace, plural=self.plural, name=self.name)`

**How to reproduce it (as minimally and precisely as possible):**

1. Deploy a `CustomResourceDefinition` resource that has `spec.versions.subresources.status` set to `{}` (an empty dict).
2. Check the deployed CRD resource YAML: `kubectl get crd some.resource.name.ai -o yaml`
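The reproduce steps can be sketched as a plain-dict CRD manifest for the Python client; the group/plural/kind names below are hypothetical placeholders, not taken from the original report:

```python
# Minimal CRD manifest (as a plain dict) reproducing the report: the status
# subresource is enabled by setting it to an *empty* dict, not by omitting it.
# All names here (group, plural, kind) are hypothetical placeholders.

crd_manifest = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "examples.custom.crd.ai"},
    "spec": {
        "group": "custom.crd.ai",
        "scope": "Namespaced",
        "names": {"plural": "examples", "singular": "example", "kind": "Example"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {
                "openAPIV3Schema": {
                    "type": "object",
                    "x-kubernetes-preserve-unknown-fields": True,
                }
            },
            # {} is significant: it tells the API server to enable /status.
            "subresources": {"status": {}},
        }],
    },
}
```

With a configured client, this dict can be submitted via `kubernetes.client.ApiextensionsV1Api().create_custom_resource_definition(body=crd_manifest)`, and the deployed result checked with `kubectl get crd examples.custom.crd.ai -o yaml`. Passing a plain dict avoids any intermediate serialization layer that might strip the empty `status` dict.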
**Anything else we need to know?:** Tried downgrading to kubernetes 28.1.0 to match the hikaru version (1.3.0); same result.
**Environment:**
- Kubernetes version (`kubectl version`):
- Python version (`python --version`): 3.10.12
- Python client version (`pip list | grep kubernetes`): 30.1.0