Closed dtm2451 closed 2 weeks ago
/assign
I will try to reproduce the issue and find the root cause.
After debugging, it looks like, in this case, the function `sanitize_for_serialization` can't serialize the `pod_manifest` properly. For example, the field `persistent_volume_claim` isn't mapped to `persistentVolumeClaim`, hence `persistent_volume_claim` isn't recognized by Kubernetes and the volume type falls back to `emptyDir` by default.
I will investigate more deeply to figure it out.
Alternatively, I think there is another way that you can try, by utilizing the V1Volume and V1PersistentVolumeClaim ... to compose the pod you wanted, similar to this example
@dtm2451, you need to modify the `pod_manifest`: all snake_case fields must be converted to camelCase, e.g. `persistent_volume_claim` => `persistentVolumeClaim`.
Or you can choose the alternative I mentioned in my former comment.
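Concretely, the volume-related entries in the raw dict manifest would need to use camelCase keys, exactly as in the YAML one would feed to kubectl (a sketch; the names are placeholders, not from the issue):

```python
# Hypothetical fragments of pod_manifest["spec"]; all names are placeholders.
volumes = [
    {
        "name": "task-pv-storage",
        # camelCase, as in kubectl YAML -- NOT persistent_volume_claim:
        "persistentVolumeClaim": {"claimName": "task-pv-claim"},
    }
]
volume_mounts = [
    # camelCase here too -- NOT mount_path:
    {"name": "task-pv-storage", "mountPath": "/data"}
]
```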
Alternatively, I think there is another way that you can try, by utilizing the V1Volume and V1PersistentVolumeClaim ... to compose the pod you wanted, similar to this example
Oh oh! Thank you so much for your time investigating.
Sounds like this is user error on my side then, but adding a warning would be nice! I didn't catch that I'd left this portion of my manifest in snake_case rather than camelCase, as is clearly the way all key names work in the python client! Some warning when elements are skipped due to such a conversion failure would be VERY nice!
Wait actually, I responded too quickly there.
My understanding of the python client is that fields are designed around snake_case conversions of what one would normally provide in camelCase directly to kubectl. That is what I built towards here -- exactly the path you point towards in:
Alternatively, I think there is another way that you can try, by utilizing the V1Volume and V1PersistentVolumeClaim ... to compose the pod you wanted, similar to this example
So it does still seem like a bug in the client to me if the snake_case version, `persistent_volume_claim`, is incorrect here!
FWIW, contrary to my understanding from the documentation (but perhaps it is my understanding that is wrong?), when I swap to using camelCase (not the seemingly desired snake_case) for the entirety of my `pod_manifest`, I can produce the pod I want from `kub_cli.create_namespaced_pod(body=pod_manifest, namespace='default')`.
For example, in what I understand to be the documentation of how to define a V1Volume for the python client, the field "persistent_volume_claim" (not "persistentVolumeClaim") is typed as V1PersistentVolumeClaimVolumeSource, and following that link we also find "claim_name" and "read_only" fields (not "claimName" and "readOnly").
@dtm2451, I couldn't agree with you more that the fields are designed around snake_case. For this case, the difference is the type of the request body: if the body is pure JSON (a hard-coded dict), the python client works like kubectl, and for this kind of case it's not reasonable for the python client to modify the JSON. If the body is a Kubernetes resource object instantiated by the client's model classes, then supporting snake_case does make sense. I hope my understanding answers your question!
I'm not sure I quite follow the logic behind "for this kind of case it's not reasonable to modify the JSON via the python client". Specifically because the case here is a python dict, which is of course similar to JSON yet fully python native. I suppose I'm simply curious for more detail on why it becomes unreasonable for the client to parse and modify it. Is there perhaps a specific function that I should be passing the `pod_manifest` dict through before handing it to `create_namespaced_pod`?
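As far as I know the client does not ship a helper that camelizes plain dict manifests. If one wanted to pre-process the dict anyway, a naive sketch could look like this (a hypothetical helper, not part of the library):

```python
def snake_to_camel(key: str) -> str:
    """Convert a snake_case key to camelCase, e.g. claim_name -> claimName."""
    first, *rest = key.split("_")
    return first + "".join(word.capitalize() for word in rest)

def camelize_keys(obj):
    """Recursively camelize all dict keys in a manifest-like structure.

    Caveat: this blindly converts every key, so it would also rewrite
    keys that are user data (e.g. label keys containing underscores).
    """
    if isinstance(obj, dict):
        return {snake_to_camel(k): camelize_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [camelize_keys(v) for v in obj]
    return obj
```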
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened (please include outputs or screenshots):
A pod is created from the manifest below, but the volume meant to target a persistent_volume_claim is instead created as EmptyDir, and container volume_mounts meant to target this volume are skipped entirely. Specifically, when I run `kubectl describe pod/test-pod`, its container has no mount associated with the target name, and I see the below for the volume that should be a PersistentVolumeClaim:
What you expected to happen:
I expect the pod to be created with
and its container to have
I was able to successfully produce this by spinning the pod up from an equivalent yml file and using kubectl directly.
How to reproduce it (as minimally and precisely as possible):
Persistent volume config (adjust 'path' as needed to something that exists on your k8s node):
Persistent Volume Claim config:
Then run `kubectl create -f path/to/each/config.yml` for both files. (The python client object is referred to as `kub_cli` here.)
Anything else we need to know?:
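The reporter's python snippet did not survive this copy of the issue; a minimal sketch of the kind of manifest described (all names hypothetical, chosen only to match the `test-pod` mentioned above) would be:

```python
# Hypothetical reconstruction of the failing manifest; names are placeholders.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-pod"},
    "spec": {
        "containers": [
            {
                "name": "test-container",
                "image": "busybox",
                # snake_case key: silently skipped by the server
                "volume_mounts": [{"mount_path": "/data", "name": "task-pv-storage"}],
            }
        ],
        "volumes": [
            {
                "name": "task-pv-storage",
                # snake_case key: unrecognized, so the volume becomes EmptyDir
                "persistent_volume_claim": {"claim_name": "task-pv-claim"},
            }
        ],
    },
}

# With a configured CoreV1Api client, this call reproduces the behavior:
# kub_cli.create_namespaced_pod(body=pod_manifest, namespace="default")
```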
I am newer to working with kubernetes, but I believe this methodology is the intended way to mount persistent storage to pod containers: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
Environment:
- Kubernetes version (use `kubectl version`):
- Python version (`python --version`): 3.10.12 and 3.8.17 (MRE tested directly only with 3.10.12)
- Kubernetes python client version (`pip list | grep kubernetes`): 29.0.0