kcp-dev / controller-runtime

Repo for the controller-runtime subproject of kubebuilder (sig-api-machinery)
Apache License 2.0

Provide instructions / examples for writing tests with kcp-dev/controller-runtime & envtest #15

Closed · johnmcollier closed this 4 months ago

johnmcollier commented 2 years ago

I've been taking a poke at running the Application Service Operator's tests against KCP using the kcp-dev/controller-runtime library, but have run into some issues.

Since the mock cluster stood up by envtest isn't KCP, I've been using the USE_EXISTING_CLUSTER toggle to use my existing kubeconfig (which points to a KCP workspace). However, the tests fail as-is, since they were not written to be multi-workspace aware:

Application controller
/Users/johncollier/kcp/application-service/controllers/application_controller_test.go:41
  Create Application with no repositories set
  /Users/johncollier/kcp/application-service/controllers/application_controller_test.go:51
    Should create successfully with generated repositories [It]
    /Users/johncollier/kcp/application-service/controllers/application_controller_test.go:52

    Timed out after 10.001s.
    Expected
        <bool>: false
    to be true

    /Users/johncollier/kcp/application-service/controllers/application_controller_test.go:81
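
For reference, the toggle I'm referring to is envtest's UseExistingCluster field (envtest falls back to the USE_EXISTING_CLUSTER environment variable when the field is left unset). A minimal sketch of how I'm wiring it, assuming the active kubeconfig already points at the KCP workspace:

import (
    "k8s.io/utils/pointer"

    "sigs.k8s.io/controller-runtime/pkg/envtest"
)

// Reuse the cluster from the active kubeconfig instead of starting a
// local kube-apiserver and etcd.
testEnv := &envtest.Environment{
    UseExistingCluster: pointer.Bool(true),
}

// Start returns the rest.Config for the existing cluster
// (here, the KCP workspace).
cfg, err := testEnv.Start()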

It seems that I need to make some modifications to suite_test.go and to each individual controller's tests, similar to the modifications required in main.go and each controller.go file when adopting the kcp-dev/controller-runtime library, but I haven't been able to get things to pass.
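
This is roughly what I'd expect the suite setup to look like. A sketch only, assuming the fork is pulled in via a go.mod replace of sigs.k8s.io/controller-runtime and that kcp.NewClusterAwareManager (the constructor the kcp-dev fork adds, used in the example repo) is still the entry point:

import (
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/kcp"
)

// Build a cluster-aware manager instead of ctrl.NewManager, so the cache and
// clients key objects by logical cluster. cfg is the rest.Config for the KCP
// workspace (e.g. from ctrl.GetConfigOrDie or testEnv.Start).
mgr, err := kcp.NewClusterAwareManager(cfg, ctrl.Options{
    Scheme: scheme.Scheme,
})
Expect(err).NotTo(HaveOccurred())

The individual tests would then also have to create and look up objects in the right workspace, which is where mine currently fall over.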

It'd be helpful if there were some instructions or an example of writing tests (perhaps based on https://github.com/kcp-dev/controller-runtime-example).

It would also help if envtest could mock KCP, so that the tests don't need to point at an existing KCP instance (though I acknowledge that this likely isn't feasible).

mukundmckinsey commented 1 year ago

I was able to make the tests run on a kind cluster as shown below:

var _ = BeforeSuite(func() {
    logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

    ctx, cancel = context.WithCancel(context.TODO())
    // Read which kind of test cluster to target from the TEST_CLUSTER env var
    testCluster = os.ExpandEnv("${TEST_CLUSTER}")

    if testCluster == "kind" {
        fmt.Println("Running test on kind cluster.")
        var err error
        cfg := ctrl.GetConfigOrDie()
        k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
        Expect(err).NotTo(HaveOccurred())
        Expect(k8sClient).NotTo(BeNil())

        err = namespacev1.AddToScheme(scheme.Scheme)
        Expect(err).NotTo(HaveOccurred())
        err = admissionv1beta1.AddToScheme(scheme.Scheme)
        Expect(err).NotTo(HaveOccurred())
        err = apiextensions.AddToScheme(scheme.Scheme)
        Expect(err).NotTo(HaveOccurred())
    } else if testCluster == "envTest" {
        By("bootstrapping test environment")
        testEnv = &envtest.Environment{
            CRDDirectoryPaths:     []string{filepath.Join("..", "..", "config", "crd", "bases"), filepath.Join("..", "..", "config", "argocd")},
            ErrorIfCRDPathMissing: true,
            WebhookInstallOptions: envtest.WebhookInstallOptions{
                Paths: []string{filepath.Join("..", "..", "config", "webhook")},
            },
        }

        cfg, err := testEnv.Start()
        Expect(err).NotTo(HaveOccurred())
        Expect(cfg).NotTo(BeNil())

        err = namespacev1.AddToScheme(scheme.Scheme)
        Expect(err).NotTo(HaveOccurred())
        err = admissionv1beta1.AddToScheme(scheme.Scheme)
        Expect(err).NotTo(HaveOccurred())
        err = apiextensions.AddToScheme(scheme.Scheme)
        Expect(err).NotTo(HaveOccurred())

        //+kubebuilder:scaffold:scheme

        k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
        Expect(err).NotTo(HaveOccurred())
        Expect(k8sClient).NotTo(BeNil())
        webhookInstallOptions := &testEnv.WebhookInstallOptions
        mgr, err := ctrl.NewManager(cfg, ctrl.Options{
            Scheme:             scheme.Scheme,
            Host:               webhookInstallOptions.LocalServingHost,
            Port:               webhookInstallOptions.LocalServingPort,
            CertDir:            webhookInstallOptions.LocalServingCertDir,
            LeaderElection:     false,
            MetricsBindAddress: "0",
        })

        Expect(err).ToNot(HaveOccurred())

        err = (&namespacev1.NamespaceDefinition{}).SetupWebhookWithManager(mgr)
        Expect(err).NotTo(HaveOccurred())

        err = (&NamespaceDefinitionReconciler{
            Client:        mgr.GetClient(),
            Scheme:        mgr.GetScheme(),
            AuthConfigMap: "aws-auth",
        }).SetupWithManager(mgr)
        Expect(err).ToNot(HaveOccurred())

        err = (&ObjectSyncReconciler{
            Client: mgr.GetClient(),
            Scheme: mgr.GetScheme(),
        }).SetupWithManager(mgr)
        Expect(err).ToNot(HaveOccurred())

        go func() {
            defer GinkgoRecover()
            err = mgr.Start(ctx)
            Expect(err).ToNot(HaveOccurred(), "failed to run manager")
        }()
        // wait for the webhook server to get ready
        dialer := &net.Dialer{Timeout: time.Second}
        addrPort := fmt.Sprintf("%s:%d", webhookInstallOptions.LocalServingHost, webhookInstallOptions.LocalServingPort)
        Eventually(func() error {
            conn, err := tls.DialWithDialer(dialer, "tcp", addrPort, &tls.Config{InsecureSkipVerify: true, MinVersion: tls.VersionTLS12})
            if err != nil {
                return err
            }
            conn.Close()
            return nil
        }).Should(Succeed())
    }
}, 60) // 60-second timeout for the suite setup (Ginkgo v1-style signature)
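
The target is selected by exporting the variable before running the suite, e.g. `TEST_CLUSTER=kind go test ./...` against an existing kind cluster (with the kubeconfig pointing at it), or `TEST_CLUSTER=envTest go test ./...` for the envtest path; the `./...` package pattern here is just an example.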

It runs successfully, but Ginkgo does not report test coverage in the kind case; it only reports coverage when we use envTest. (Presumably because go test only measures code executed inside the test binary, and the kind branch above never starts a manager in-process, so the reconcilers don't run within the test process.)

kcp-ci-bot commented 6 months ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kcp-ci-bot commented 5 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kcp-ci-bot commented 4 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kcp-ci-bot commented 4 months ago

@kcp-ci-bot: Closing this issue.

In response to [this](https://github.com/kcp-dev/controller-runtime/issues/15#issuecomment-2169209112):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.