kubernetes-sigs / controller-runtime

Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
Apache License 2.0
2.38k stars · 1.11k forks

Support server-side apply (client.Apply) in fake Client #2341

Open nathanperkins opened 1 year ago

nathanperkins commented 1 year ago

Recently, we've been writing extensive unit test coverage of our controllers using fake.Client but a few of our controllers use client.Apply, which is not supported by fake.Client.

~We can't migrate these controllers from client.Apply to client.Merge because, with upgrades, any CRs managed by the previous version will have owned fields and the API server will reject the request (my understanding, is this true?)~ Based on the SSA docs, it seems the server only rejects requests based on ownership when using SSA, so using update would work.

Envtest takes ~5-15s to set up for each test case, meaning a test suite which would take 0.05s with fake.Client, takes 5 minutes with envtest. In many cases, we can isolate tests to namespaces and reuse the same client, but not always. We prefer our unit tests to be as isolated as possible.

Would it be possible to support server-side apply / client.Apply / SSA in the fake client?

alvaroaleman commented 1 year ago

The reason this isn't currently supported is because upstream client-go doesn't support this: https://github.com/kubernetes/kubernetes/issues/115598

We could build something downstream, I suppose, and the create case is going to be simple. I think the Update case is going to be pretty complicated, though (whether to keep a field that is currently present but not in the submitted applyconfig is a non-trivial question to answer), so if possible I'd prefer to wait for upstream.

In the meantime, with today's 0.15 release of controller-runtime, you can use the interceptor to set up createOrUpdate logic in Patch when an apply patch is submitted, which works for your case.

vincepri commented 1 year ago

> Envtest takes ~5-15s to set up for each test case

Hm, envtest is usually set up once per test package; is there a reason why envtest is set up once for every test case here?

nathanperkins commented 1 year ago

> Hm, envtest is usually setup once per test package, is there a reason why envtest in this case once for every test case?

I need to review our cases. We want to ensure that our test cases are isolated. Most of the time you can isolate them into unique namespaces but some of the controllers have specific requirements around that. I think we may be able to get around it by specifying which namespace to use as a field on the reconciler struct.

> In the meantime, with today's 0.15 release of controller-runtime, you can use the interceptor to set up createOrUpdate logic in Patch when an apply patch is submitted, which works for your case.

Thank you for the response! I will look into this :)

> The reason this isn't currently supported is because upstream client-go doesn't support this: https://github.com/kubernetes/kubernetes/issues/115598

Sounds reasonable to me. I mostly wanted to create an issue so that we have something to represent that it would make our lives a bit easier. If the work to benefit ratio doesn't make sense, that's fair.

Given that we can run envtest and isolate cases in namespaces, that is probably the way to go. It's a small bummer that we have to run our unit tests alongside an external dependency which takes some time to start up. We're going to reorganize things a bit and improve our scripts to make this easier for our developers.

nathanperkins commented 1 year ago

> Hm, envtest is usually setup once per test package, is there a reason why envtest in this case once for every test case?

Found a case where it is a bit of a drag to use envtest. Any test which involves cluster scoped objects cannot be fully isolated in namespaces, leading to some issues:

alvaroaleman commented 1 year ago

@nathanperkins could we keep the discussion around if and how and when to use envtest separate? We are aware that this is lacking right now, but it is non-trivial, which is why upstream hasn't done it. It essentially requires having the server-side SSA logic in the fake client.

nathanperkins commented 1 year ago

> @nathanperkins could we keep the discussion around if and how and when to use envtest separate?

Sure, it's totally understood that this is not going to be resolved anytime soon.

I think that people searching for this issue will find it useful if there is clarity on what they can do in the meantime. The discussion on when / how to use envtest effectively instead of the fake client seems useful to that end. Maybe there is a doc or blog post that could be linked?

vincepri commented 1 year ago

> Found a case where it is a bit of a drag to use envtest. Any test which involves cluster scoped objects cannot be fully isolated in namespaces

Do you have an example handy to show? Just to wrap my head around a bit more 😄

A few thoughts:

nathanperkins commented 1 year ago

> Do you have an example handy to show? Just to wrap my head around a bit more 😄

I couldn't show the code without going through a bunch of approvals. I can tell you it's a controller which reconciles on corev1.Service and looks at corev1.Node status to create some internal CRs for network configuration. Our test uses envtest, and we can't fully isolate the nodes between test cases without cleaning them up.

> Depending on how the reconcilers are structured, maybe there is a way that a filtered cache/client is passed into each reconciler's test case so it only has a partial view of the objects?

Great idea! I'll share this with my teammate and see if it works for our case.

> We probably need better documentation / examples on how to use and re-use envtest appropriately.

I agree, sharing some of these patterns would really help. Last I looked at the kubebuilder docs, they have an example that uses ginkgo and gomega and relies on the manager to run reconciliation. I've found it easier to write exhaustive, accurate test coverage with more traditional table-driven tests that call Reconcile directly. We still write integration tests with ginkgo and gomega that use the manager, but those are less exhaustive and focus on ensuring the event handlers work correctly.

I'd love to see more discussion in the community about this, whether in docs or blog posts :)

nathanperkins commented 1 year ago

@vincepri, I'm moving discussion of using envtest with isolated unittest cases to #2358

sbueringer commented 1 year ago

@nathanperkins we have some general guidance here: https://cluster-api.sigs.k8s.io/developer/testing.html (not sure in which issue we want to dig deeper into pro/con of fake client vs envtest)

jakobmoellerdev commented 11 months ago

> The reason this isn't currently supported is because upstream client-go doesn't support this: kubernetes/kubernetes#115598

> We could build something downstream, I suppose, and the create case is going to be simple. I think the Update case is going to be pretty complicated, though (whether to keep a field that is currently present but not in the submitted applyconfig is a non-trivial question to answer), so if possible I'd prefer to wait for upstream.

> In the meantime, with today's 0.15 release of controller-runtime, you can use the interceptor to set up createOrUpdate logic in Patch when an apply patch is submitted, which works for your case.

For anyone looking for a workaround until the fake Client supports client.Apply:


```go
import (
	"context"
	"errors"
	"fmt"

	k8serror "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
	"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
)

// scheme and objs come from the surrounding test setup.
c := fake.NewClientBuilder().
	WithScheme(scheme).
	WithObjects(objs...).
	WithInterceptorFuncs(interceptor.Funcs{
		Patch: func(ctx context.Context, clnt client.WithWatch, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
			// Apply patches are supposed to upsert, but the fake client fails if
			// the object doesn't exist. If an apply patch targets an object that
			// doesn't exist yet, create it first.
			if patch.Type() != types.ApplyPatchType {
				return clnt.Patch(ctx, obj, patch, opts...)
			}
			check, ok := obj.DeepCopyObject().(client.Object)
			if !ok {
				return errors.New("could not check for object in fake client")
			}
			if err := clnt.Get(ctx, client.ObjectKeyFromObject(obj), check); k8serror.IsNotFound(err) {
				if err := clnt.Create(ctx, check); err != nil {
					return fmt.Errorf("could not inject object creation for fake: %w", err)
				}
			}
			return clnt.Patch(ctx, obj, patch, opts...)
		},
	}).
	Build()
```

troy0820 commented 10 months ago

/kind support
/kind feature

@nathanperkins can we close this issue? I see you moved it to a different issue.

alvaroaleman commented 10 months ago

I think this issue is valid and we should keep it open. Effectively it is tracking the upstream issue https://github.com/kubernetes/kubernetes/issues/115598; once that is resolved, we will get this as well.

Something like what @jakobmoellerdev suggested is only an approximation, in that it effectively turns SSA into a CreateOrPatch in the fake client, which is not the same as the field ownership tracking done by real SSA. So what happens in such a test might not be representative of what happens in reality.

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/controller-runtime/issues/2341#issuecomment-2022049054):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

pmalek commented 3 months ago

/remove-lifecycle rotten
/reopen

k8s-ci-robot commented 3 months ago

@pmalek: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/controller-runtime/issues/2341#issuecomment-2022357669):

> /remove-lifecycle rotten
> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

sbueringer commented 3 months ago

/reopen

k8s-ci-robot commented 3 months ago

@sbueringer: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/controller-runtime/issues/2341#issuecomment-2024610929):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

k8s-triage-robot commented 3 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

alvaroaleman commented 3 weeks ago

Upstream fake clients started to support SSA in https://github.com/kubernetes/kubernetes/pull/125560; we should follow suit.