kubernetes / client-go

Go client for Kubernetes.
Apache License 2.0

Server-Side Apply: ExtractPod should specify `UID` #1346

Closed gnuletik closed 2 months ago

gnuletik commented 7 months ago

TLDR

When using server-side apply with ExtractPod, a controller may inadvertently recreate a resource that was already deleted while attempting to remove a finalizer.

Context

I have a controller that watches pods and removes a finalizer using Server-Side Apply (to avoid race conditions between controllers, as suggested here).

It removes the finalizer in the following way:

podApplyConfig, err := coreac.ExtractPod(pod, fm)
if err != nil {
        return fmt.Errorf("coreac.ExtractPod: %w", err)
}

// update something, e.g. drop the finalizers owned by this field manager
podApplyConfig.Finalizers = nil

_, err = w.client.CoreV1().Pods(ns).Apply(ctx, podApplyConfig, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
        return fmt.Errorf("client.Apply: %w", err)
}

The SharedInformer sometimes delivers the same event multiple times, which can lead to the pod being recreated after it was deleted: a stale event arrives after the deletion, the controller extracts an apply configuration from the stale object, and the apply call (which carries no UID) creates a brand-new pod.

Fix

The Kubernetes specs state that:

> Create is blocked on apply if uid is provided

See the Server-Side Apply KEP.

We can manually call `podApply := podApply.WithUID(pod.UID)` to work around this.
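For illustration, a minimal sketch of the workaround applied to the snippet above (same placeholder names pod, fm, ns, smtg, w.client):

podApplyConfig, err := coreac.ExtractPod(pod, fm)
if err != nil {
        return fmt.Errorf("coreac.ExtractPod: %w", err)
}

// Pin the apply to this exact object instance: if the pod has been deleted,
// the apply fails instead of silently re-creating the pod.
podApplyConfig = podApplyConfig.WithUID(pod.UID)

podApplyConfig.Finalizers = nil

_, err = w.client.CoreV1().Pods(ns).Apply(ctx, podApplyConfig, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
        return fmt.Errorf("client.Apply: %w", err)
}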

However, I think it would be better if ExtractPod (and the analogous methods for other resources) set the UID before returning the apply configuration.

https://github.com/kubernetes/client-go/blob/aa7909e7d7c0661792ba21b9e882f3cd6ad0ce53/applyconfigurations/core/v1/pod.go#L72-L84
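For reference, a rough sketch of where that change would land, assuming the current shape of the generated helper linked above (this paraphrases the generated extractPod rather than quoting it):

func extractPod(pod *apicorev1.Pod, fieldManager string, subresource string) (*PodApplyConfiguration, error) {
        b := &PodApplyConfiguration{}
        err := managedfields.ExtractInto(pod, internal.Parser().Type("io.k8s.api.core.v1.Pod"), fieldManager, b, subresource)
        if err != nil {
                return nil, err
        }
        b.WithName(pod.Name)
        b.WithNamespace(pod.Namespace)
        b.WithUID(pod.UID) // proposed addition: create-on-apply is blocked when a UID is set

        b.WithKind("Pod")
        b.WithAPIVersion("v1")
        return b, nil
}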

WDYT?

I can make a PR if that sounds good to you. However, this part of the codebase seems to be generated, so I'd need a hint on where to edit that.

Thanks!

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/client-go/issues/1346#issuecomment-2323451755):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
guettli commented 3 weeks ago

It is a bit sad that this issue never received a single reply. I don't face the problem at the moment, but I'm concerned I will run into it sooner or later.

@gnuletik did you find an answer somewhere else?

gnuletik commented 3 weeks ago

@guettli When using ExtractPod (or similar functions), I manually set the UID with the following call:

podApply := podApply.WithUID(pod.UID)

I also stopped using the ExtractPod functions (and their equivalents) where I can, and instead I "rebuild" the apply configurations from scratch.
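For example, a minimal sketch of the "rebuild" approach, reusing the placeholder names from the first snippet in this issue (pod, smtg, w.client) and assuming the controller only needs to own the finalizer entry it added:

// Build the apply configuration from scratch instead of extracting it;
// only the fields listed here are claimed by this field manager.
podApply := coreac.Pod(pod.Name, pod.Namespace).
        WithUID(pod.UID) // blocks re-creation if the pod was deleted in the meantime

// No finalizers are listed, so applying this configuration removes any finalizer
// previously owned by this field manager.
_, err := w.client.CoreV1().Pods(pod.Namespace).Apply(ctx, podApply, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
        return fmt.Errorf("client.Apply: %w", err)
}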

guettli commented 3 weeks ago

@gnuletik just out of curiosity, how do you "rebuild" the apply spec?

I do it like this, but I'm not sure it's a good approach; today is the first time I've used SSA from Go.

    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:      containerRegistrySecretName,
            Namespace: mgtSystemNamespace,
        },
        TypeMeta: metav1.TypeMeta{
            APIVersion: "v1",
            Kind:       "Secret",
        },
        Type: corev1.SecretTypeDockerConfigJson,
        Data: map[string][]byte{
            ".dockerconfigjson": alm.secret.Data[".dockerconfigjson"],
        },
    }
    data, err := json.Marshal(secret)
    if err != nil {
        return fmt.Errorf("failed to marshal secret: %w", err)
    }

    _, err = clientSet.CoreV1().Secrets(secret.Namespace).Patch(
        ctx,
        secret.Name,
        types.ApplyPatchType,
        data,
        metav1.PatchOptions{
            FieldManager: fieldManagerName,
        },
    )
    if err != nil {
        return fmt.Errorf("failed to apply secret: %w", err)
    }

gnuletik commented 3 weeks ago

@guettli your code sample is not using the typed Server-Side Apply support from client-go.

You need to use the apply configuration API: https://pkg.go.dev/k8s.io/client-go@v0.31.2/applyconfigurations/core/v1#SecretApplyConfiguration

e.g.

import (
    coreac "k8s.io/client-go/applyconfigurations/core/v1"
    core "k8s.io/api/core/v1"
)

secret := coreac.Secret(containerRegistrySecretName, mgtSystemNamespace).WithType(core.SecretTypeDockerConfigJson)
// etc...
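For completeness, a hedged sketch of how the rest could look, reusing the names from the earlier snippet (clientSet, alm, fieldManagerName, ctx and the metav1 import are assumed to be in scope there):

secret = secret.WithData(map[string][]byte{
        ".dockerconfigjson": alm.secret.Data[".dockerconfigjson"],
})

// The typed Apply method accepts the apply configuration directly;
// no manual JSON marshalling or Patch call is needed.
_, err := clientSet.CoreV1().Secrets(mgtSystemNamespace).Apply(ctx, secret, metav1.ApplyOptions{
        FieldManager: fieldManagerName,
})
if err != nil {
        return fmt.Errorf("failed to apply secret: %w", err)
}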