kubernetes / client-go

Go client for Kubernetes.

apimachinery.ExtractInto generates diff #1339

Closed gnuletik closed 2 months ago

gnuletik commented 8 months ago

When creating a pod:

// coreac is k8s.io/client-go/applyconfigurations/core/v1
// meta is k8s.io/apimachinery/pkg/apis/meta/v1
pod := coreac.Pod(name, ns).
    WithSpec(coreac.PodSpec().
        WithAffinity(coreac.Affinity()), // notice the empty affinity value here
        // other spec values omitted for brevity
    )

pod, err := client.CoreV1().Pods(ns).Apply(ctx, pod, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
    return fmt.Errorf("pod.Apply: %w", err)
}

When later trying to update it using ExtractPod:

pod, err := w.client.CoreV1().Pods(ns).Get(ctx, podName, meta.GetOptions{})
if err != nil {
    return fmt.Errorf("client.Get: %w", err)
}

// fm is the same field manager string that was used for the initial Apply (smtg above)
podApplyConfig, err := coreac.ExtractPod(pod, fm)
if err != nil {
    return fmt.Errorf("coreac.ExtractPod: %w", err)
}

// updating something
podApplyConfig.Finalizers = nil

_, err = w.client.CoreV1().Pods(ns).Apply(ctx, podApplyConfig, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
    return fmt.Errorf("client.Apply: %w", err)
}

The Apply operation fails with the following error. It seems that, when the object is empty, the extraction avoids setting an empty affinity, which leads to an issue while patching the pod later:

client.Apply: Pod "pod-name" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
core.PodSpec{
      ... // 15 identical fields
      Subdomain:         "",
      SetHostnameAsFQDN: nil,
-    Affinity:          &core.Affinity{},
+    Affinity:          nil,
      SchedulerName:     "default-scheduler",
      ... // 13 identical fields
}

Using k8s.io/client-go v0.28.8

This can be fixed by setting the affinity to nil when creating the pod (instead of setting it to the zero value). However, I'm wondering whether this should be handled by client-go (the extraction logic seems to be implemented in apimachinery).
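For illustration, a minimal sketch of that workaround: only set the affinity when there is actually a value to record. The `affinity` variable here is hypothetical, either a `*coreac.AffinityApplyConfiguration` built elsewhere or nil:

// Leaving affinity unset keeps spec.affinity == nil on the applied object,
// so the later ExtractPod / Apply round-trip no longer produces a forbidden diff.
spec := coreac.PodSpec()
// other spec values omitted for brevity

if affinity != nil {
    spec = spec.WithAffinity(affinity)
}

pod := coreac.Pod(name, ns).WithSpec(spec)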

What do you think?

crudbetter commented 7 months ago

I've been keen to learn Go and more about Kubernetes, so I challenged myself to put together a reproduction of this.
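A condensed sketch of what such a reproduction could look like, assuming a reachable cluster via the default kubeconfig (the namespace, pod name, and field manager are illustrative):

package main

import (
    "context"
    "fmt"

    meta "k8s.io/apimachinery/pkg/apis/meta/v1"
    coreac "k8s.io/client-go/applyconfigurations/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    ctx := context.Background()
    const ns, name, fm = "default", "extract-diff-repro", "repro-manager"

    // Build a clientset from the default kubeconfig (~/.kube/config).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Create the pod via server-side apply with an empty affinity value.
    pod := coreac.Pod(name, ns).WithSpec(coreac.PodSpec().
        WithAffinity(coreac.Affinity()).
        WithContainers(coreac.Container().WithName("main").WithImage("busybox")))
    if _, err := client.CoreV1().Pods(ns).Apply(ctx, pod, meta.ApplyOptions{FieldManager: fm}); err != nil {
        panic(fmt.Errorf("pod.Apply: %w", err))
    }

    // Read the pod back and extract the fields owned by the same field manager.
    live, err := client.CoreV1().Pods(ns).Get(ctx, name, meta.GetOptions{})
    if err != nil {
        panic(fmt.Errorf("client.Get: %w", err))
    }
    extracted, err := coreac.ExtractPod(live, fm)
    if err != nil {
        panic(fmt.Errorf("coreac.ExtractPod: %w", err))
    }

    // The extracted configuration ends up with spec.affinity == nil even though the live
    // object carries an empty affinity, so re-applying it is rejected with the Forbidden error above.
    _, err = client.CoreV1().Pods(ns).Apply(ctx, extracted, meta.ApplyOptions{FieldManager: fm})
    fmt.Println("second Apply error:", err)
}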

How do we move toward getting an opinion on whether there is anything to fix in client-go? @liggitt, you seem to have been active in this project recently - any ideas? Or is this issue raised in the wrong project?

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/client-go/issues/1339#issuecomment-2336052968):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.