ekuefler opened this issue 4 years ago
Whilst I appreciate this is expected behaviour, it would be handy to know what the advice is for separating tests from one another. @ekuefler have you found a way forward?
I too am trying to keep my tests independent in envtest. My operator queries all resources of a given kind, so I need to ensure its queries are not populated with resources from previous tests! :)
Edit: for future time-travelling envtest writers, I eventually got what I wanted with the following approach (caveat emptor: I only just wrote this and may still find issues, your mileage may vary :smile:):
```go
var createdResources []runtime.Object

// Register a resource for cleanup after the current test.
deleteResourceAfterTest := func(o runtime.Object) {
	createdResources = append(createdResources, o)
}

BeforeEach(func() {
	log.Info("resetting created resources list")
	createdResources = nil
})

AfterEach(func() {
	// Delete in reverse creation order.
	for i := len(createdResources) - 1; i >= 0; i-- {
		r := createdResources[i]
		key, err := client.ObjectKeyFromObject(r)
		Expect(err).To(Succeed())

		log.Info("deleting resource", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
		Expect(k8sClient.Delete(ctx, r)).To(Succeed())

		// Namespaces never finish terminating under envtest (no
		// kube-controller-manager), so only wait for non-namespace
		// resources to actually disappear.
		_, isNamespace := r.(*corev1.Namespace)
		if !isNamespace {
			log.Info("waiting for resource to disappear", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
			Eventually(func() error {
				return k8sClient.Get(ctx, key, r)
			}, timeout, interval).Should(HaveOccurred())
			log.Info("deleted resource", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
		}
	}
})
```
/kind feature
/priority important-longterm
/milestone Next
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Here is the work-around I implemented in my test helper functions. Basically, I use the discovery API to optimistically delete all namespaced resources in the namespace I'm trying to delete, and then I update the `/finalize` subresource of the namespace to remove the `kubernetes` finalizer. The idea here is that it is a (perhaps poor) approximation of what `kube-controller-manager` would normally do to finalize a terminating namespace.

I brought in `k8s.io/client-go` in order to access the `/finalize` subresource and to interact with the discovery API (though I suspect the latter is possible with the standard controller-runtime client):
```go
import (
	"context"
	"strings"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func deleteAll(objs ...runtime.Object) {
	ctx := context.Background()
	clientGo, err := kubernetes.NewForConfig(testEnv.Config)
	Expect(err).ShouldNot(HaveOccurred())

	for _, obj := range objs {
		Expect(client.IgnoreNotFound(k8sClient.Delete(ctx, obj))).Should(Succeed())

		if ns, ok := obj.(*corev1.Namespace); ok {
			// Normally the kube-controller-manager would handle finalization
			// and garbage collection of namespaces, but with envtest, we aren't
			// running a kube-controller-manager. Instead we're gonna approximate
			// (poorly) the kube-controller-manager by explicitly deleting some
			// resources within the namespace and then removing the `kubernetes`
			// finalizer from the namespace resource so it can finish deleting.
			// Note that any resources within the namespace that we don't
			// successfully delete could reappear if the namespace is ever
			// recreated with the same name.

			// Look up all namespaced resources under the discovery API.
			_, apiResources, err := clientGo.Discovery().ServerGroupsAndResources()
			Expect(err).ShouldNot(HaveOccurred())
			namespacedGVKs := make(map[string]schema.GroupVersionKind)
			for _, apiResourceList := range apiResources {
				defaultGV, err := schema.ParseGroupVersion(apiResourceList.GroupVersion)
				Expect(err).ShouldNot(HaveOccurred())
				for _, r := range apiResourceList.APIResources {
					if !r.Namespaced || strings.Contains(r.Name, "/") {
						// Skip non-namespaced resources and subresources.
						continue
					}
					gvk := schema.GroupVersionKind{
						Group:   defaultGV.Group,
						Version: defaultGV.Version,
						Kind:    r.Kind,
					}
					if r.Group != "" {
						gvk.Group = r.Group
					}
					if r.Version != "" {
						gvk.Version = r.Version
					}
					namespacedGVKs[gvk.String()] = gvk
				}
			}

			// Delete all namespaced resources in this namespace.
			for _, gvk := range namespacedGVKs {
				var u unstructured.Unstructured
				u.SetGroupVersionKind(gvk)
				err := k8sClient.DeleteAllOf(ctx, &u, client.InNamespace(ns.Name))
				Expect(client.IgnoreNotFound(ignoreMethodNotAllowed(err))).ShouldNot(HaveOccurred())
			}

			Eventually(func() error {
				key, err := client.ObjectKeyFromObject(ns)
				if err != nil {
					return err
				}
				if err := k8sClient.Get(ctx, key, ns); err != nil {
					return client.IgnoreNotFound(err)
				}
				// Remove the `kubernetes` finalizer.
				const kubernetes = "kubernetes"
				finalizers := []corev1.FinalizerName{}
				for _, f := range ns.Spec.Finalizers {
					if f != kubernetes {
						finalizers = append(finalizers, f)
					}
				}
				ns.Spec.Finalizers = finalizers
				// We have to use the k8s.io/client-go library here to expose
				// the ability to update the /finalize subresource on the namespace.
				_, err = clientGo.CoreV1().Namespaces().Finalize(ns)
				return err
			}, timeout, interval).Should(Succeed())
		}

		// Wait for the object itself to be gone (NotFound).
		Eventually(func() metav1.StatusReason {
			key, _ := client.ObjectKeyFromObject(obj)
			if err := k8sClient.Get(ctx, key, obj); err != nil {
				return apierrors.ReasonForError(err)
			}
			return ""
		}, timeout, interval).Should(Equal(metav1.StatusReasonNotFound))
	}
}

func ignoreMethodNotAllowed(err error) error {
	if err != nil {
		if apierrors.ReasonForError(err) == metav1.StatusReasonMethodNotAllowed {
			return nil
		}
	}
	return err
}
```
If this is a priority for your testing setup, I would strongly recommend using Kind or K3s as your test control plane rather than the default one launched by envtest. That will provide a more complete setup, including kube-controller-manager and the related namespace and gc controllers.
I was having a little trouble tracking down documentation for launching kind or k3s clusters from envtest. Is there example documentation anywhere?

Based on what I'm gleaning from https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest, is it correct to assume that one would exec `kind` prior to starting envtest, and then just ensure envtest is passed the correct `rest.Config` and `UseExistingCluster = true`?
> Based on what I'm gleaning from https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest, is it correct to assume that one would exec `kind` prior to starting envtest, and then just ensure envtest is passed the correct `rest.Config` and `UseExistingCluster = true`?
Yep, that. It will use your default kubeconfig, so set whatever context you need before starting, and then enable `UseExistingCluster`.
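For reference, a minimal configuration sketch of that setup (this assumes a cluster was already created, e.g. with `kind create cluster`, and that the current kubeconfig context points at it; field and function names are from `sigs.k8s.io/controller-runtime/pkg/envtest`, error handling is abbreviated):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func main() {
	useExisting := true
	testEnv := &envtest.Environment{
		// With UseExistingCluster set, envtest skips launching its own
		// etcd/kube-apiserver binaries and instead connects to the
		// cluster selected by the current kubeconfig context.
		UseExistingCluster: &useExisting,
	}

	cfg, err := testEnv.Start()
	if err != nil {
		panic(err)
	}
	defer testEnv.Stop()

	// cfg is the rest.Config to hand to your manager and clients.
	fmt.Println("using existing cluster at", cfg.Host)
}
```

This requires a reachable cluster at test time, so it is usually gated behind an environment variable or build tag rather than run unconditionally.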
I think we can close this ticket, as running kube-controller-manager from envtest is out of scope and better served by UseExistingCluster as mentioned above (in combination with tools like Kind or K3s, or other minimalist Kubernetes environments).
/close
@coderanger: Closing this issue.
I am re-opening this one since it is not sorted out and was defined as priority/important. Also, adding the frozen label so the bot will not close it until it is addressed.
The documentation moved:

- Expected behavior: https://book.kubebuilder.io/reference/testing/envtest.html?highlight=envtest#testing-considerations
- https://book.kubebuilder.io/reference/envtest.html#namespace-usage-limitation
Out of curiosity: is there an explanation anywhere of the why behind this behaviour of `envtest`? I couldn't find it in https://book.kubebuilder.io/reference/envtest.html#namespace-usage-limitation. It seems odd, but I'm a fan of knowing the why behind things :)
Wild guess: it's because we don't run a kube-controller-manager in envtest (which also causes a bunch of other limitations, e.g. no object garbage collection).
I'd like to use namespaces to isolate my tests from one another. For example, I want to create a new namespace with a random name in my `BeforeEach` and delete it in `AfterEach` so that individual tests don't have to worry about cleaning up after themselves.

However, I've observed that namespace deletion doesn't work when using envtest. When I delete a namespace, the deletion timestamps on resources within that namespace remain unset, and the namespace stays in the `Terminating` phase seemingly forever. The latter is true even if the namespace is empty (e.g. I delete it immediately after creating it).

Is this behavior expected? Is whatever mechanism that normally executes namespace deletion missing from envtest?
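As a sketch of the random-name part of that pattern, the per-test namespace name can be generated with the standard library alone (the `randomNamespaceName` helper and its prefix are hypothetical, not part of envtest; the actual `Create`/`Delete` calls in `BeforeEach`/`AfterEach` would use the usual controller-runtime client):

```go
package main

import (
	"fmt"
	"math/rand"
)

// Alphabet restricted to characters valid in a DNS-1123 label,
// which Kubernetes requires for namespace names.
const nameAlphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomNamespaceName builds a name like "test-x7k2q9" by appending
// n random alphanumeric characters to the given prefix.
func randomNamespaceName(prefix string, n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = nameAlphabet[rand.Intn(len(nameAlphabet))]
	}
	return fmt.Sprintf("%s-%s", prefix, string(b))
}

func main() {
	fmt.Println(randomNamespaceName("test", 6))
}
```

Each test then gets its own namespace, though as discussed above the namespaces themselves will pile up in `Terminating` under plain envtest, so namespaced resources still need explicit cleanup.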