Closed · frenkdefrog closed this issue 3 months ago
Hi @frenkdefrog, I don't think this is a bug. The AvailableReplicas field has been supported in native Kubernetes since version 1.22; you can find it here.
Since your Kubernetes version is 1.21, I suggest using kubernetes python-client version 21.0.0 to avoid this issue.
Thanks @showjason. @frenkdefrog Please update your client version and see if the issue is fixed. Thanks!
/assign @showjason
Kubernetes version (kubectl version): 1.22
OS (e.g., MacOS 10.13.6): Monterey
Python version (python --version): 3.9.10
Python client version (pip list | grep kubernetes): 22.0.4
I am also facing the same issue; list_namespaced_stateful_set gives the same error.
Hi @atulGupta2922, please follow this comment to debug your issue and check whether Kubernetes responded with available_replicas.
BTW, I cannot find a kubernetes-client version 22.0.4; can you check it again?
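One way to do the check suggested above is to ask the client for the raw API response instead of deserialized models, so the payload can be inspected even when a field is absent. This is a sketch, not an official recipe: it assumes cluster access, the kubernetes package, and uses the `_preload_content=False` parameter that generated API methods accept to skip model construction; the helper names are mine.

```python
import json


def fetch_statefulsets_json():
    """Sketch: return the raw StatefulSet list payload as a dict,
    bypassing the client's model deserialization (which is what raises
    the `available_replicas` ValueError)."""
    from kubernetes import client, config  # assumed available

    config.load_kube_config()
    api = client.AppsV1Api()
    # _preload_content=False returns the raw urllib3 response instead
    # of building V1StatefulSetList model objects.
    resp = api.list_stateful_set_for_all_namespaces(_preload_content=False)
    return json.loads(resp.data)


def names_missing_available_replicas(payload):
    """Pure helper: names of statefulsets whose status omits
    availableReplicas in an already-fetched list payload."""
    return [
        item["metadata"]["name"]
        for item in payload.get("items", [])
        if "availableReplicas" not in item.get("status", {})
    ]
```

Running the second helper over the fetched payload shows directly whether the server omitted the field for any statefulset.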
I had this issue and downgrading to 21.7.0 appears to have fixed it. FWIW, I think this library should maintain backward compatibility and not error when these circumstances are encountered from an older version of kubernetes.
Any update on this?
I would also like an update on this. We were using 22.6.0 and saw this issue after upgrading to 23.3.0. Since we cannot control the kubernetes version deployed at our customer sites, it is important that this client library be backward compatible. This is a blocker for me: I cannot upgrade to 23.3.0 until it is resolved.
status.availableReplicas is feature-gated, as documented here: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetStatus. Hence it is not guaranteed to be present on StatefulSet/status objects.
Also, it appears to be the only field the client insists is not None on a StatefulSet/status object, which is odd.
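The failure mode described in this thread can be illustrated with a minimal stand-in for the validating-setter pattern the generated client models use (this is a simplified sketch, not the real kubernetes.client.V1StatefulSetStatus class):

```python
class StatefulSetStatusSketch:
    """Simplified stand-in for a generated model class, showing why a
    field the API legally omits can raise during deserialization."""

    def __init__(self, available_replicas=None, replicas=None):
        self._available_replicas = None
        self.replicas = replicas
        # The generated __init__ assigns through the property setter,
        # so a None coming from the API response trips the validator.
        self.available_replicas = available_replicas

    @property
    def available_replicas(self):
        return self._available_replicas

    @available_replicas.setter
    def available_replicas(self, value):
        # This mirrors the check the traceback points at: the setter
        # rejects None even though the server may omit the field.
        if value is None:
            raise ValueError(
                "Invalid value for `available_replicas`, must not be `None`"
            )
        self._available_replicas = value
```

With this pattern, any response where status.availableReplicas is absent (for example, a StatefulSet scaled to 0) fails at construction time rather than surfacing as a None the caller could interpret.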
I have a 1.22 cluster, and if there are ready pods in the statefulset, then available_replicas has a value and works.
But if you scale the set down to 0, this value is removed from the API response and triggers this None exception. It is distasteful that it blows up this way instead of letting it be None and letting developers interpret that.
Since we're on the topic of distasteful: in order to have reliable usage, I'm going to have to subprocess out to kubectl in order to work with statefulsets.
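The kubectl workaround mentioned above could look roughly like this (a hedged sketch: the helper names are mine, and it assumes kubectl is on PATH with a working kubeconfig):

```python
import json
import subprocess


def list_statefulsets_raw():
    """Sketch: shell out to kubectl and parse the JSON ourselves, so a
    missing status.availableReplicas simply stays absent instead of
    raising during the client's model deserialization."""
    out = subprocess.check_output(
        ["kubectl", "get", "statefulsets", "--all-namespaces", "-o", "json"]
    )
    return json.loads(out)["items"]


def available_replicas(sts):
    """Return status.availableReplicas from a statefulset dict, or None
    when the API omitted it (e.g. a set scaled to 0, or a pre-1.22
    cluster)."""
    return sts.get("status", {}).get("availableReplicas")
```

This leaves the interpretation of a missing field to the caller, which is what the thread argues the client library itself should do.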
Downgrading the kubernetes client from 23.3.0 to 21.7.0 resolved this issue for me.
I was getting this assertion even when all pods in the statefulset were ready with 23.3.0. This issue was introduced with 23.3.0; all previous k8s-client releases worked fine. I had to back out to 22.6.0, and that works.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened (please include outputs or screenshots):

Traceback (most recent call last):
  File "/Users/frenkdefrog/DevOps/poc/getresources.py", line 99, in <module>
    main()
  File "/Users/frenkdefrog/DevOps/poc/getresources.py", line 83, in main
    eksResult = eks.retrieveResource(resource, resourceToCheck[resource])
  File "/Users/frenkdefrog/DevOps/poc/basic.py", line 57, in retrieveResource
    return userFunc(apiClient)
  File "/Users/frenkdefrog/DevOps/poc/basic.py", line 18, in retrieveStatefulsets
    return client.list_stateful_set_for_all_namespaces()
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api/apps_v1_api.py", line 3991, in list_stateful_set_for_all_namespaces
    return self.list_stateful_set_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api/apps_v1_api.py", line 4098, in list_stateful_set_for_all_namespaces_with_http_info
    return self.api_client.call_api(
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 192, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 264, in deserialize
    return self.__deserialize(data, response_type)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 641, in __deserialize_model
    instance = klass(**kwargs)
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/models/v1_stateful_set_status.py", line 79, in __init__
    self.available_replicas = available_replicas
  File "/Users/frenkdefrog/DevOps/poc/.env/lib/python3.9/site-packages/kubernetes/client/models/v1_stateful_set_status.py", line 119, in available_replicas
    raise ValueError("Invalid value for `available_replicas`, must not be `None`")  # noqa: E501
ValueError: Invalid value for `available_replicas`, must not be `None`
What you expected to happen: I wanted to gather all the statefulsets.

How to reproduce it (as minimally and precisely as possible):

    apiClient = client.AppsV1Api()
    result = apiClient.list_stateful_set_for_all_namespaces()
Anything else we need to know?:
Environment:
Kubernetes version (kubectl version): 1.21
Python version (python --version): 3.9.10
Python client version (pip list | grep kubernetes): 23.0.0-snapshot