Closed: johnmcollier closed this issue 4 months ago.
I was able to make it run on a kind
cluster as below:
var _ = BeforeSuite(func() {
	logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

	ctx, cancel = context.WithCancel(context.TODO())

	// Get the type of test cluster exported as the TEST_CLUSTER env var.
	testCluster = os.ExpandEnv("${TEST_CLUSTER}")
	if testCluster == "kind" {
		fmt.Println("Running test on kind cluster.")
		var err error
		cfg := ctrl.GetConfigOrDie()
		k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
		Expect(err).NotTo(HaveOccurred())
		Expect(k8sClient).NotTo(BeNil())

		err = namespacev1.AddToScheme(scheme.Scheme)
		Expect(err).NotTo(HaveOccurred())
		err = admissionv1beta1.AddToScheme(scheme.Scheme)
		Expect(err).NotTo(HaveOccurred())
		err = apiextensions.AddToScheme(scheme.Scheme)
		Expect(err).NotTo(HaveOccurred())
	} else if testCluster == "envTest" {
		By("bootstrapping test environment")
		testEnv = &envtest.Environment{
			CRDDirectoryPaths:     []string{filepath.Join("..", "..", "config", "crd", "bases"), filepath.Join("..", "..", "config", "argocd")},
			ErrorIfCRDPathMissing: true,
			WebhookInstallOptions: envtest.WebhookInstallOptions{
				Paths: []string{filepath.Join("..", "..", "config", "webhook")},
			},
		}

		cfg, err := testEnv.Start()
		Expect(err).NotTo(HaveOccurred())
		Expect(cfg).NotTo(BeNil())

		err = namespacev1.AddToScheme(scheme.Scheme)
		Expect(err).NotTo(HaveOccurred())
		err = admissionv1beta1.AddToScheme(scheme.Scheme)
		Expect(err).NotTo(HaveOccurred())
		err = apiextensions.AddToScheme(scheme.Scheme)
		Expect(err).NotTo(HaveOccurred())

		//+kubebuilder:scaffold:scheme

		k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
		Expect(err).NotTo(HaveOccurred())
		Expect(k8sClient).NotTo(BeNil())

		webhookInstallOptions := &testEnv.WebhookInstallOptions
		mgr, err := ctrl.NewManager(cfg, ctrl.Options{
			Scheme:             scheme.Scheme,
			Host:               webhookInstallOptions.LocalServingHost,
			Port:               webhookInstallOptions.LocalServingPort,
			CertDir:            webhookInstallOptions.LocalServingCertDir,
			LeaderElection:     false,
			MetricsBindAddress: "0",
		})
		Expect(err).ToNot(HaveOccurred())

		err = (&namespacev1.NamespaceDefinition{}).SetupWebhookWithManager(mgr)
		Expect(err).NotTo(HaveOccurred())

		err = (&NamespaceDefinitionReconciler{
			Client:        mgr.GetClient(),
			Scheme:        mgr.GetScheme(),
			AuthConfigMap: "aws-auth",
		}).SetupWithManager(mgr)
		Expect(err).ToNot(HaveOccurred())

		err = (&ObjectSyncReconciler{
			Client: mgr.GetClient(),
			Scheme: mgr.GetScheme(),
		}).SetupWithManager(mgr)
		Expect(err).ToNot(HaveOccurred())

		go func() {
			defer GinkgoRecover()
			err = mgr.Start(ctx)
			Expect(err).ToNot(HaveOccurred(), "failed to run manager")
		}()

		// Wait for the webhook server to become ready.
		dialer := &net.Dialer{Timeout: time.Second}
		addrPort := fmt.Sprintf("%s:%d", webhookInstallOptions.LocalServingHost, webhookInstallOptions.LocalServingPort)
		Eventually(func() error {
			conn, err := tls.DialWithDialer(dialer, "tcp", addrPort, &tls.Config{InsecureSkipVerify: true, MinVersion: tls.VersionTLS12})
			if err != nil {
				return err
			}
			conn.Close()
			return nil
		}).Should(Succeed())
	}
}, 60)
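For reference, the suite above is selected by exporting TEST_CLUSTER before running the tests; an invocation along these lines should work (the ./controllers/... package path is an assumption about the repo layout):

```shell
# Against an existing kind cluster (uses the current kubeconfig context):
TEST_CLUSTER=kind go test ./controllers/... -v

# Against the envtest control plane, collecting a coverage profile:
TEST_CLUSTER=envTest go test ./controllers/... -v -coverprofile=cover.out
go tool cover -func=cover.out
```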
It runs successfully, but Ginkgo does not report the test coverage. It reports the coverage when we use envTest. (Presumably this is because in the kind path the controllers run in the deployed operator rather than in the test process, so the test binary's coverage instrumentation never sees them.)
@kcp-ci-bot: Closing this issue.
I've been taking a poke at running the Application Service Operator's tests against KCP, while using the kcp-dev/controller-runtime library, but have run into some issues.
Since the mock cluster stood up by envtest isn't KCP, I've been using the USE_EXISTING_CLUSTER toggle to use my existing kubeconfig (where my kubeconfig points to a KCP workspace). However, the tests as-is fail, since they were not written to be multi-workspace aware. It seems that I need to make some modifications to suite_test.go and each individual controller's tests, similar to the modifications required for main.go and each controller.go file when using the kcp-dev/controller-runtime library, but I haven't been able to get things to pass.
It'd be helpful if there were some instructions or an example (using https://github.com/kcp-dev/controller-runtime-example) for tests.
Maybe envtest could also mock KCP, so that we don't need to point to an existing KCP instance for the tests (though I acknowledge that this likely isn't feasible).