kubernetes-sigs / controller-runtime

Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
Apache License 2.0

Can't delete namespaces from test environment #880

Open ekuefler opened 4 years ago

ekuefler commented 4 years ago

I'd like to use namespaces to isolate my tests from one another. For example, I want to create a new namespace with a random name in my BeforeEach and delete it in AfterEach so that individual tests don't have to worry about cleaning up after themselves.
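
For concreteness, a minimal sketch of the pattern I mean (assuming the usual envtest suite globals `ctx` and `k8sClient`, plus the standard `corev1`/`metav1` imports):

    var ns *corev1.Namespace

    BeforeEach(func() {
        // Create a namespace with a random (generated) name.
        ns = &corev1.Namespace{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "test-"},
        }
        Expect(k8sClient.Create(ctx, ns)).To(Succeed())
    })

    AfterEach(func() {
        // This is the step that never completes: the namespace gets a
        // deletion timestamp but stays in Terminating.
        Expect(k8sClient.Delete(ctx, ns)).To(Succeed())
    })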

However, I've observed that namespace deletion doesn't work when using envtest. When I delete a namespace, the deletion timestamps on resources within that namespace remain unset, and the namespace stays in the Terminating phase seemingly forever. The latter happens even if the namespace is empty (e.g. if I delete it immediately after creating it).

Is this behavior expected? Is the mechanism that normally executes namespace deletion simply missing from envtest?

djzager commented 4 years ago

Expected behavior https://book.kubebuilder.io/reference/testing/envtest.html?highlight=envtest#testing-considerations

omaskery commented 4 years ago

Whilst I appreciate this is expected behaviour, it would be handy to know what the advice is for separating tests from one another. @ekuefler have you found a way forward?

I too am trying to keep my tests independent in envtest. My operator queries all resources of a given kind, so I need to ensure its queries are not populated with resources from previous tests! :)

Edit: for future time-travelling envtest writers, I eventually got what I wanted with the approach below (caveat emptor: I only just wrote this and it may well have issues, your mileage may vary :smile:):

    // Assumes the usual envtest suite globals (ctx, k8sClient, log,
    // timeout, interval) and these imports:
    //
    //   corev1 "k8s.io/api/core/v1"
    //   "k8s.io/apimachinery/pkg/runtime"
    //   "sigs.k8s.io/controller-runtime/pkg/client"
    //
    // Note: client.ObjectKeyFromObject returning (key, err) is the older
    // controller-runtime signature; newer versions return just the key.
    var createdResources []runtime.Object

    // Tests call this after creating a resource so AfterEach can clean
    // it up.
    deleteResourceAfterTest := func(o runtime.Object) {
        createdResources = append(createdResources, o)
    }

    BeforeEach(func() {
        log.Info("resetting created resources list")
        createdResources = nil
    })

    AfterEach(func() {
        // Delete in reverse creation order.
        for i := len(createdResources) - 1; i >= 0; i-- {
            r := createdResources[i]
            key, err := client.ObjectKeyFromObject(r)
            Expect(err).To(Succeed())
            log.Info("deleting resource", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
            Expect(k8sClient.Delete(ctx, r)).To(Succeed())

            // Namespaces never finish terminating in envtest, so only
            // wait for non-namespace resources to disappear.
            _, isNamespace := r.(*corev1.Namespace)
            if !isNamespace {
                log.Info("waiting for resource to disappear", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
                Eventually(func() error {
                    return k8sClient.Get(ctx, key, r)
                }, timeout, interval).Should(HaveOccurred())
                log.Info("deleted resource", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
            }
        }
    })
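
Registering a resource from a test then looks something like this (hypothetical ConfigMap example, using `metav1` from `k8s.io/apimachinery/pkg/apis/meta/v1`):

    It("does something with a config map", func() {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
        }
        Expect(k8sClient.Create(ctx, cm)).To(Succeed())
        // Ensure AfterEach deletes it, even if the test fails.
        deleteResourceAfterTest(cm)
    })
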
DirectXMan12 commented 4 years ago

/kind feature
/priority important-longterm

vincepri commented 3 years ago

/milestone Next

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

alkar commented 3 years ago

/remove-lifecycle rotten

orirawlings commented 3 years ago

Here is the work-around I implemented in my test helper functions. Basically, I use the discovery API to optimistically delete all namespaced resources in the namespace I'm trying to delete and then I update the /finalize subresource of the namespace to remove the kubernetes finalizer. The idea here is that it is a (perhaps poor) approximation of what kube-controller-manager would normally do to finalize a terminating namespace.

I brought in k8s.io/client-go in order to access the /finalize subresource and to interact with the discovery API (though I suspect the latter is possible with the standard controller-runtime client)
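
(As an aside, a standalone discovery client can also be built straight from the rest.Config without the full clientset; a minimal sketch, assuming the suite's testEnv:)

    // import "k8s.io/client-go/discovery"
    discoveryClient, err := discovery.NewDiscoveryClientForConfig(testEnv.Config)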

import (
    "context"
    "strings"

    . "github.com/onsi/gomega"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/kubernetes"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteAll optimistically deletes the given objects and, for namespaces,
// approximates the finalization that kube-controller-manager would normally
// perform. Assumes the usual envtest suite globals (testEnv, k8sClient,
// timeout, interval).
func deleteAll(objs ...runtime.Object) {
    ctx := context.Background()
    clientGo, err := kubernetes.NewForConfig(testEnv.Config)
    Expect(err).ShouldNot(HaveOccurred())
    for _, obj := range objs {
        Expect(client.IgnoreNotFound(k8sClient.Delete(ctx, obj))).Should(Succeed())

        if ns, ok := obj.(*corev1.Namespace); ok {
            // Normally the kube-controller-manager would handle finalization
            // and garbage collection of namespaces, but with envtest, we aren't
            // running a kube-controller-manager. Instead we're gonna approximate
            // (poorly) the kube-controller-manager by explicitly deleting some
            // resources within the namespace and then removing the `kubernetes`
            // finalizer from the namespace resource so it can finish deleting.
            // Note that any resources within the namespace that we don't
            // successfully delete could reappear if the namespace is ever
            // recreated with the same name.

            // Look up all namespaced resources under the discovery API
            _, apiResources, err := clientGo.Discovery().ServerGroupsAndResources()
            Expect(err).ShouldNot(HaveOccurred())
            namespacedGVKs := make(map[string]schema.GroupVersionKind)
            for _, apiResourceList := range apiResources {
                defaultGV, err := schema.ParseGroupVersion(apiResourceList.GroupVersion)
                Expect(err).ShouldNot(HaveOccurred())
                for _, r := range apiResourceList.APIResources {
                    if !r.Namespaced || strings.Contains(r.Name, "/") {
                        // skip non-namespaced and subresources
                        continue
                    }
                    gvk := schema.GroupVersionKind{
                        Group:   defaultGV.Group,
                        Version: defaultGV.Version,
                        Kind:    r.Kind,
                    }
                    if r.Group != "" {
                        gvk.Group = r.Group
                    }
                    if r.Version != "" {
                        gvk.Version = r.Version
                    }
                    namespacedGVKs[gvk.String()] = gvk
                }
            }

            // Delete all namespaced resources in this namespace
            for _, gvk := range namespacedGVKs {
                var u unstructured.Unstructured
                u.SetGroupVersionKind(gvk)
                err := k8sClient.DeleteAllOf(ctx, &u, client.InNamespace(ns.Name))
                Expect(client.IgnoreNotFound(ignoreMethodNotAllowed(err))).ShouldNot(HaveOccurred())
            }

            Eventually(func() error {
                key, err := client.ObjectKeyFromObject(ns)
                if err != nil {
                    return err
                }
                if err := k8sClient.Get(ctx, key, ns); err != nil {
                    return client.IgnoreNotFound(err)
                }
                // Remove the `kubernetes` finalizer. The constant is named to
                // avoid shadowing the imported kubernetes package.
                const kubernetesFinalizer = "kubernetes"
                finalizers := []corev1.FinalizerName{}
                for _, f := range ns.Spec.Finalizers {
                    if f != kubernetesFinalizer {
                        finalizers = append(finalizers, f)
                    }
                }
                ns.Spec.Finalizers = finalizers

                // We have to use the k8s.io/client-go clientset here because
                // it exposes the /finalize subresource on namespaces.
                _, err = clientGo.CoreV1().Namespaces().Finalize(ctx, ns, metav1.UpdateOptions{})
                return err
            }, timeout, interval).Should(Succeed())
        }

        Eventually(func() metav1.StatusReason {
            key, _ := client.ObjectKeyFromObject(obj)
            if err := k8sClient.Get(ctx, key, obj); err != nil {
                return apierrors.ReasonForError(err)
            }
            return ""
        }, timeout, interval).Should(Equal(metav1.StatusReasonNotFound))
    }
}

func ignoreMethodNotAllowed(err error) error {
    // Some resources don't support DeleteAllOf; treat that as a non-error.
    if apierrors.ReasonForError(err) == metav1.StatusReasonMethodNotAllowed {
        return nil
    }
    return err
}
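
For reference, teardown in a test might then look like this (with ns being a namespace object the test created earlier):

    AfterEach(func() {
        deleteAll(ns)
    })
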
coderanger commented 3 years ago

If this is a priority for your testing setup, I would strongly recommend using Kind or K3s as your test control plane rather than the default one launched by envtest. That will provide a more complete setup, including kube-controller-manager and the related namespace and gc controllers.

orirawlings commented 3 years ago

> If this is a priority for your testing setup, I would strongly recommend using Kind or K3s as your test control plane rather than the default one launched by envtest. That will provide a more complete setup, including kube-controller-manager and the related namespace and gc controllers.

I was having a little trouble tracking down documentation for launching kind or k3s clusters from envtest. Is there example documentation anywhere?

Based on what I'm gleaning from https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest is it correct to assume that one would exec kind prior to starting envtest, and then just ensure envtest is passed the correct rest.Config and UseExistingCluster = true?

coderanger commented 3 years ago

> Based on what I'm gleaning from https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest is it correct to assume that one would exec kind prior to starting envtest, and then just ensure envtest is passed the correct rest.Config and UseExistingCluster = true?

Yep, that. It will use your default kubeconfig so set whatever context you need before starting and then enable UseExistingCluster.
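
For anyone landing here later, a minimal sketch of that setup (assuming a cluster already exists, e.g. via `kind create cluster`, and is the current context in your kubeconfig):

    // import "sigs.k8s.io/controller-runtime/pkg/envtest"
    useExistingCluster := true
    testEnv := &envtest.Environment{
        UseExistingCluster: &useExistingCluster,
    }
    // Start connects to the existing cluster instead of launching
    // kube-apiserver and etcd binaries.
    cfg, err := testEnv.Start()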

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

coderanger commented 3 years ago

I think we can close this ticket, as running kube-controller-manager from envtest is out of scope and better served by UseExistingCluster as mentioned above (in combination with tools like Kind or K3s, or other minimalist Kubernetes environments).

/close

k8s-ci-robot commented 3 years ago

@coderanger: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/controller-runtime/issues/880#issuecomment-805271529):

> I think we can close this ticket, as running kube-controller-manager from envtest is out of scope and better served by UseExistingCluster as mentioned above (in combination with tools like Kind or K3s, or other minimalist Kubernetes environments).
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
camilamacedo86 commented 1 year ago

I am re-opening this one since it has not been sorted out and was marked as an important priority. I am also adding the frozen label so the bot will not close it until it is addressed.

wallrj commented 1 year ago

The documentation moved:

> Expected behavior https://book.kubebuilder.io/reference/testing/envtest.html?highlight=envtest#testing-considerations

The new location is https://book.kubebuilder.io/reference/envtest.html#namespace-usage-limitation

detro commented 1 month ago

Out of curiosity: is there an explanation anywhere of why envtest behaves this way? I couldn't find it in https://book.kubebuilder.io/reference/envtest.html#namespace-usage-limitation. It seems odd, but I'm a fan of knowing the why behind things :)

sbueringer commented 1 month ago

Wild guess: because we don't have a kube-controller-manager in envtest (which also causes a bunch of other limitations, e.g. no object garbage collection).