operator-framework / operator-sdk

SDK for building Kubernetes applications. Provides high-level APIs, useful abstractions, and project scaffolding.
https://sdk.operatorframework.io
Apache License 2.0

Unable to run the tests locally (Mac) after the changes made on the Makefile #4151

Closed: camilamacedo86 closed this issue 3 years ago

camilamacedo86 commented 4 years ago

Bug Report

What did you do?

I executed `make test-e2e-go`, `make test-e2e-helm`, and `make test-e2e-ansible`.
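
Per the logs below, each of these targets builds the SDK binaries, creates a kind cluster named `operator-sdk-e2e`, builds and loads the dev images, and then runs the corresponding Ginkgo suite:

```
make test-e2e-go
make test-e2e-helm
make test-e2e-ansible
```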

What did you expect to see?

The tests run successfully.

What did you see instead? Under which circumstances?

`operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m` now hits a timeout, and for Ansible I also hit a timeout waiting for the expected `Ansible-runner exited successfully` message from the reconciliation:

    should run correctly in a cluster [It]
    /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:84

    Timed out after 60.005s.
    Expected
        <string>: {"level":"info","ts":1604183822.8870246,"logger":"cmd","msg":"Version","Go Version":"go1.15.2","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.1.0+git","commit":"cc409fb3b0b8785a794e503e2d2238fd950652ce"}
        {"level":"info","ts":1604183822.918809,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
        {"level":"info","ts":1604183824.1398323,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
        {"level":"info","ts":1604183824.2127461,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_ANSIBLE_EXAMPLE_COM","default":2}
        {"level":"info","ts":1604183824.214882,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMFIN_ANSIBLE_EXAMPLE_COM","default":2}
        {"level":"info","ts":1604183824.2149458,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_FOO_ANSIBLE_EXAMPLE_COM","default":2}
        {"level":"info","ts":1604183824.2178037,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
        {"level":"info","ts":1604183824.2178552,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"ansible.example.com","Options.Version":"v1alpha1","Options.Kind":"Memcached"}
        {"level":"info","ts":1604183824.2266393,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
        {"level":"info","ts":1604183824.226767,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"ansible.example.com","Options.Version":"v1alpha1","Options.Kind":"Memfin"}
        {"level":"info","ts":1604183824.2268672,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
        {"level":"info","ts":1604183824.2268865,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"ansible.example.com","Options.Version":"v1alpha1","Options.Kind":"Foo"}
        {"level":"info","ts":1604183824.2399654,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
        {"level":"info","ts":1604183824.2431545,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
        I1031 22:37:04.265594       7 leaderelection.go:242] attempting to acquire leader lease  e2e-gktc-system/e2e-gktc...

    to contain substring
        <string>: Ansible-runner exited successfully
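
For context, the assertion that times out above has roughly the shape below. This is a hedged sketch, not the actual code in `e2e_ansible_cluster_test.go`; `getManagerLogs` is a hypothetical helper standing in for the repeated `kubectl ... logs ... -c manager` calls visible in the full output:

```go
package e2e_test

import (
	"time"

	. "github.com/onsi/gomega"
)

// waitForReconcile polls the manager container logs until the
// Ansible-runner success marker appears, failing after one minute,
// which yields exactly the "Timed out after 60.005s ... to contain
// substring" failure quoted above.
func waitForReconcile(getManagerLogs func() (string, error)) {
	Eventually(func() string {
		// Hypothetical helper: assumed to run
		// kubectl -n <ns>-system logs <manager-pod> -c manager
		out, err := getManagerLogs()
		if err != nil {
			return ""
		}
		return out
	}, time.Minute, time.Second).Should(ContainSubstring("Ansible-runner exited successfully"))
}
```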

See the full logs:

`make test-e2e-go` ```$ make test camilamacedo@Camilas-MacBook-Pro ~/go/src/github.com/operator-framework/operator-sdk (master) $ make test-e2e-go go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.1.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.1.0-7-gcc409fb3' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=cc409fb3b0b8785a794e503e2d2238fd950652ce' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' " -o build ./cmd/{operator-sdk,ansible-operator,helm-operator} tools/scripts/fetch kind 0.9.0 tools/scripts/fetch envtest 0.6.3 tools/scripts/fetch kubectl 1.18.8 # Install kubectl AFTER envtest because envtest includes its own kubectl binary [[ "`tools/bin/kind get clusters`" =~ "operator-sdk-e2e" ]] || tools/bin/kind create cluster --image="kindest/node:v1.18.8" --name operator-sdk-e2e go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -o build/_image/scorecard-test ./images/scorecard-test mkdir -p ./images/scorecard-test/bin && mv build/_image/scorecard-test ./images/scorecard-test/bin docker build -t quay.io/operator-framework/scorecard-test:dev -f ./images/scorecard-test/Dockerfile ./images/scorecard-test Sending build context to Docker daemon 45.91MB Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest ---> 28095021e526 Step 2/8 : ENV HOME=/opt/scorecard-test USER_NAME=scorecard-test USER_UID=1001 ---> Using cache ---> 05c0cf94e89d Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd ---> Using cache ---> def95b3859af Step 4/8 : WORKDIR ${HOME} ---> Using cache ---> 8667c6ce0750 Step 5/8 : ARG BIN=bin/scorecard-test ---> Using cache ---> a8bd4c5f05a7 Step 6/8 : COPY $BIN /usr/local/bin/scorecard-test ---> Using cache ---> 067257a8e189 Step 7/8 : ENTRYPOINT ["/usr/local/bin/scorecard-test"] ---> Using cache ---> c7b2662b4921 Step 8/8 : USER ${USER_UID} ---> Using cache ---> 4227e78b2a78 Successfully built 4227e78b2a78 Successfully tagged quay.io/operator-framework/scorecard-test:dev rm -rf build/_image go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -o build/_image/custom-scorecard-tests ./images/custom-scorecard-tests mkdir -p ./images/custom-scorecard-tests/bin && mv build/_image/custom-scorecard-tests ./images/custom-scorecard-tests/bin docker build -t quay.io/operator-framework/custom-scorecard-tests:dev -f ./images/custom-scorecard-tests/Dockerfile ./images/custom-scorecard-tests Sending build context to Docker daemon 26.64MB Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest ---> 28095021e526 Step 2/8 : ENV HOME=/opt/custom-scorecard-tests USER_NAME=custom-scorecard-tests USER_UID=1001 ---> Using cache ---> 70801d67bcf1 Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd ---> Using cache ---> 44ae8d06988f Step 4/8 : WORKDIR ${HOME} ---> Using cache ---> 4a62955cf8c7 Step 5/8 : ARG BIN=bin/custom-scorecard-tests ---> Using cache ---> a8371dd977e7 Step 6/8 : COPY $BIN 
/usr/local/bin/custom-scorecard-tests ---> Using cache ---> 5705d42b7cfd Step 7/8 : ENTRYPOINT ["/usr/local/bin/custom-scorecard-tests"] ---> Using cache ---> a0ec58663305 Step 8/8 : USER ${USER_UID} ---> Using cache ---> b0735ffb6c05 Successfully built b0735ffb6c05 Successfully tagged quay.io/operator-framework/custom-scorecard-tests:dev rm -rf build/_image go test ./test/e2e-go -v -ginkgo.v === RUN TestE2EGo Running Suite: E2EGo Suite ========================== Random Seed: 1604181632 Will run 4 of 4 specs STEP: creating a new test context STEP: creating a new directory preparing testing directory: /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e-yect STEP: fetching the current-context running: kubectl config current-context STEP: preparing the prerequisites on cluster STEP: checking API resources applied on Cluster running: kubectl api-resources STEP: initializing a project running: operator-sdk init --project-version 3-alpha --repo github.com/example/e2e-yect --domain example.comyect --fetch-deps=false STEP: by adding scorecard custom patch file STEP: using dev image for scorecard-test STEP: creating an API definition running: operator-sdk create api --group baryect --version v1alpha1 --kind Fooyect --namespaced --resource --controller --make=false STEP: implementing the API STEP: enabling Prometheus via the kustomization.yaml STEP: turning off interactive prompts for all generation tasks. STEP: checking the kustomize setup running: make kustomize STEP: building the project image running: make docker-build IMG=quay.io/example/e2e-yect:v0.0.1 STEP: loading the required images into Kind cluster running: kind load docker-image quay.io/example/e2e-yect:v0.0.1 --name operator-sdk-e2e running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/scorecard-test:dev running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/custom-scorecard-tests:dev STEP: generating the operator bundle running: make bundle IMG=quay.io/example/e2e-yect:v0.0.1 Integrating Go Projects with OLM with operator-sdk should generate and run a valid OLM bundle and packagemanifests /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_olm_test.go:28 STEP: turning off interactive prompts for all generation tasks. 
STEP: building the bundle running: make bundle IMG=quay.io/example/e2e-yect:v0.0.1 STEP: building the operator bundle image running: make bundle-build BUNDLE_IMG=quay.io/example/e2e-yect-bundle:v0.0.1 STEP: loading the bundle image into Kind cluster running: kind load docker-image --name operator-sdk-e2e quay.io/example/e2e-yect-bundle:v0.0.1 STEP: adding the 'packagemanifests' rule to the Makefile STEP: generating the operator package manifests running: make packagemanifests IMG=quay.io/example/e2e-yect:v0.0.1 STEP: running the package manifests-formatted operator running: operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m panic: test timed out after 10m0s goroutine 14 [running]: testing.(*M).startAlarm.func1() /usr/local/go/src/testing/testing.go:1628 +0xe5 created by time.goFunc /usr/local/go/src/time/sleep.go:167 +0x45 goroutine 1 [chan receive]: testing.(*T).Run(0xc000001b00, 0x16ab722, 0x9, 0x16df380, 0x1092a66) /usr/local/go/src/testing/testing.go:1179 +0x3ad testing.runTests.func1(0xc000001980) /usr/local/go/src/testing/testing.go:1449 +0x78 testing.tRunner(0xc000001980, 0xc00019dde0) /usr/local/go/src/testing/testing.go:1127 +0xef testing.runTests(0xc0001f07a0, 0x1b5f350, 0x1, 0x1, 0xbfdf96162b07fdc0, 0x8bb2f787bf, 0x1b84420, 0x100d6f0) /usr/local/go/src/testing/testing.go:1447 +0x2e8 testing.(*M).Run(0xc00017ea80, 0x0) /usr/local/go/src/testing/testing.go:1357 +0x245 main.main() _testmain.go:43 +0x138 goroutine 6 [chan receive]: k8s.io/klog.(*loggingT).flushDaemon(0x1b84720) /Users/camilamacedo/go/pkg/mod/k8s.io/klog@v1.0.0/klog.go:1010 +0x8b created by k8s.io/klog.init.0 /Users/camilamacedo/go/pkg/mod/k8s.io/klog@v1.0.0/klog.go:411 +0xd8 goroutine 7 [syscall]: syscall.syscall6(0x1085000, 0x34f6, 0xc000199054, 0x0, 0xc000290ab0, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/runtime/sys_darwin.go:85 +0x2e syscall.wait4(0x34f6, 0xc000199054, 0x0, 0xc000290ab0, 0x90, 0x168da60, 0xc00029aa01) /usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x87 syscall.Wait4(0x34f6, 0xc0001990a4, 0x0, 0xc000290ab0, 0x0, 0x1, 0xc000199138) /usr/local/go/src/syscall/syscall_bsd.go:129 +0x51 os.(*Process).wait(0xc00029aa20, 0x16dfb40, 0x16dfb48, 0x16dfb38) /usr/local/go/src/os/exec_unix.go:43 +0x85 os.(*Process).Wait(...) /usr/local/go/src/os/exec.go:125 os/exec.(*Cmd).Wait(0xc0002ccb00, 0x0, 0x0) /usr/local/go/src/os/exec/exec.go:507 +0x65 os/exec.(*Cmd).Run(0xc0002ccb00, 0xc000287350, 0xc00028e000) /usr/local/go/src/os/exec/exec.go:341 +0x5c os/exec.(*Cmd).CombinedOutput(0xc0002ccb00, 0xc00001e980, 0x16ae181, 0xc, 0xc000199248, 0x1) /usr/local/go/src/os/exec/exec.go:567 +0x91 sigs.k8s.io/kubebuilder/test/e2e/utils.(*CmdContext).Run(0xc0000a8000, 0xc0002ccb00, 0xc0001993e0, 0x8, 0x8, 0xc0002ccb00, 0x1) /Users/camilamacedo/go/pkg/mod/sigs.k8s.io/kubebuilder@v1.0.9-0.20201021204649-36124ae2e027/test/e2e/utils/test_context.go:222 +0x225 github.com/operator-framework/operator-sdk/test/e2e-go_test.glob..func3.1.1() /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_olm_test.go:61 +0xa6b github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000336e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/leafnodes/runner.go:113 +0xa3 github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0000336e0, 0xc000001c80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/leafnodes/runner.go:64 +0xd7 github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0001f0460, 0x17607e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/leafnodes/it_node.go:26 +0x67 github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc000216000, 0x0, 0x17607e0, 0xc00001e980) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/spec/spec.go:215 +0x691 github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc000216000, 0x17607e0, 0xc00001e980) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/spec/spec.go:138 +0xf2 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0001e3540, 0xc000216000, 0x0) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:200 +0x111 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0001e3540, 0x1) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:170 +0x127 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0001e3540, 0xc000027cc8) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:66 +0x117 github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000076280, 0x4066f00, 0xc000001b00, 0x16acdb5, 0xb, 0xc0001c4ab0, 0x1, 0x1, 0x1770220, 0xc00001e980, ...) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/suite/suite.go:62 +0x426 github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x1760f20, 0xc000001b00, 0x16acdb5, 0xb, 0xc000058f18, 0x1, 0x1, 0x100e058) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/ginkgo_dsl.go:226 +0x238 github.com/onsi/ginkgo.RunSpecs(0x1760f20, 0xc000001b00, 0x16acdb5, 0xb, 0x5f9dde80) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/ginkgo_dsl.go:207 +0x168 github.com/operator-framework/operator-sdk/test/e2e-go_test.TestE2EGo(0xc000001b00) /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_suite_test.go:37 +0xc9 testing.tRunner(0xc000001b00, 0x16df380) /usr/local/go/src/testing/testing.go:1127 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1178 +0x386 goroutine 8 [chan receive]: github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc0001e3540, 0xc000030de0) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:223 +0xce created by github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:60 +0x86 goroutine 10 [syscall]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:144 +0x9d os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:23 +0x25 created by os/signal.Notify.func1.1 /usr/local/go/src/os/signal/signal.go:150 +0x45 goroutine 39 [IO wait]: internal/poll.runtime_pollWait(0x4069668, 0x72, 0x17619e0) /usr/local/go/src/runtime/netpoll.go:220 +0x55 internal/poll.(*pollDesc).wait(0xc000281338, 0x72, 0xc000242501, 0x8f4, 0x8f4) /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45 internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:92 internal/poll.(*FD).Read(0xc000281320, 0xc00024250c, 0x8f4, 0x8f4, 0x0, 0x0, 0x0) /usr/local/go/src/internal/poll/fd_unix.go:159 +0x1b1 os.(*File).read(...) 
/usr/local/go/src/os/file_posix.go:31 os.(*File).Read(0xc000294218, 0xc00024250c, 0x8f4, 0x8f4, 0x5c, 0x0, 0x0) /usr/local/go/src/os/file.go:116 +0x71 bytes.(*Buffer).ReadFrom(0xc000287350, 0x1760ce0, 0xc000294218, 0x4084028, 0xc000287350, 0x1) /usr/local/go/src/bytes/buffer.go:204 +0xb1 io.copyBuffer(0x175fde0, 0xc000287350, 0x1760ce0, 0xc000294218, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:395 +0x2ff io.Copy(...) /usr/local/go/src/io/io.go:368 os/exec.(*Cmd).writerDescriptor.func1(0x0, 0x0) /usr/local/go/src/os/exec/exec.go:311 +0x65 os/exec.(*Cmd).Start.func1(0xc0002ccb00, 0xc000284b20) /usr/local/go/src/os/exec/exec.go:441 +0x27 created by os/exec.(*Cmd).Start /usr/local/go/src/os/exec/exec.go:440 +0x629 FAIL github.com/operator-framework/operator-sdk/test/e2e-go 600.135s FAIL make: *** [Makefile:140: test-e2e-go] Error 1 ```
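
Note that the `panic: test timed out after 10m0s` above is Go's default per-package test deadline firing while `operator-sdk run packagemanifests` is still blocked, not a separate failure. While debugging, the panic can be postponed with the standard `go test -timeout` flag (the underlying OLM hang remains), e.g.:

```
go test ./test/e2e-go -v -ginkgo.v -timeout 30m
```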
`make test-e2e-helm` $ make test-e2e-helm go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.1.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.1.0-7-gcc409fb3' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=cc409fb3b0b8785a794e503e2d2238fd950652ce' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' " -o build ./cmd/{operator-sdk,ansible-operator,helm-operator} tools/scripts/fetch kind 0.9.0 tools/scripts/fetch envtest 0.6.3 tools/scripts/fetch kubectl 1.18.8 # Install kubectl AFTER envtest because envtest includes its own kubectl binary [[ "`tools/bin/kind get clusters`" =~ "operator-sdk-e2e" ]] || tools/bin/kind create cluster --image="kindest/node:v1.18.8" --name operator-sdk-e2e go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -o build/_image/scorecard-test ./images/scorecard-test mkdir -p ./images/scorecard-test/bin && mv build/_image/scorecard-test ./images/scorecard-test/bin docker build -t quay.io/operator-framework/scorecard-test:dev -f ./images/scorecard-test/Dockerfile ./images/scorecard-test Sending build context to Docker daemon 45.91MB Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest ---> 28095021e526 Step 2/8 : ENV HOME=/opt/scorecard-test USER_NAME=scorecard-test USER_UID=1001 ---> Using cache ---> 05c0cf94e89d Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd ---> Using cache ---> def95b3859af Step 4/8 : WORKDIR ${HOME} ---> Using cache ---> 8667c6ce0750 Step 5/8 : ARG BIN=bin/scorecard-test ---> Using cache ---> a8bd4c5f05a7 Step 6/8 : COPY $BIN /usr/local/bin/scorecard-test ---> Using cache ---> 067257a8e189 Step 7/8 : ENTRYPOINT ["/usr/local/bin/scorecard-test"] ---> Using cache ---> c7b2662b4921 Step 8/8 : USER ${USER_UID} ---> Using cache ---> 4227e78b2a78 Successfully built 4227e78b2a78 Successfully tagged quay.io/operator-framework/scorecard-test:dev rm -rf build/_image go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.1.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.1.0-7-gcc409fb3' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=cc409fb3b0b8785a794e503e2d2238fd950652ce' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' " -o build/_image/helm-operator ./cmd/helm-operator mkdir -p ./images/helm-operator/bin && mv build/_image/helm-operator ./images/helm-operator/bin docker build -t quay.io/operator-framework/helm-operator:dev -f ./images/helm-operator/Dockerfile ./images/helm-operator Sending build context to Docker daemon 55.89MB Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest ---> 28095021e526 Step 2/8 : ENV HOME=/opt/helm USER_NAME=helm USER_UID=1001 ---> Using cache ---> 6215849821fc Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd ---> Using cache ---> 36d5ff816e26 Step 4/8 : 
WORKDIR ${HOME} ---> Using cache ---> 7ea9e71b17e8 Step 5/8 : USER ${USER_UID} ---> Using cache ---> b138d0e7be61 Step 6/8 : ARG BIN=bin/helm-operator ---> Using cache ---> 5fd7b12589b7 Step 7/8 : COPY $BIN /usr/local/bin/helm-operator ---> 72f1d6266d7d Step 8/8 : ENTRYPOINT ["/usr/local/bin/helm-operator", "run", "--watches-file=./watches.yaml"] ---> Running in 59abdce973fd Removing intermediate container 59abdce973fd ---> d0071bcbdd88 Successfully built d0071bcbdd88 Successfully tagged quay.io/operator-framework/helm-operator:dev rm -rf build/_image go test ./test/e2e-helm -v -ginkgo.v === RUN TestE2EHelm Running Suite: E2EHelm Suite ============================ Random Seed: 1604182560 Will run 4 of 4 specs STEP: creating a new test context STEP: creating a new directory preparing testing directory: /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e-gpzo STEP: fetching the current-context running: kubectl config current-context STEP: preparing the prerequisites on cluster STEP: checking API resources applied on Cluster running: kubectl api-resources STEP: initializing a Helm project running: operator-sdk init --plugins helm --project-version 3-alpha --domain example.comgpzo STEP: using dev image for scorecard-test STEP: creating an API definition running: operator-sdk create api --group bargpzo --version v1alpha1 --kind Foogpzo STEP: replacing project Dockerfile to use Helm base image with the dev tag STEP: turning off interactive prompts for all generation tasks. STEP: checking the kustomize setup running: make kustomize STEP: building the project image running: make docker-build IMG=quay.io/example/e2e-gpzo:v0.0.1 STEP: loading the required images into Kind cluster running: kind load docker-image quay.io/example/e2e-gpzo:v0.0.1 --name operator-sdk-e2e running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/scorecard-test:dev STEP: generating the operator bundle running: make bundle IMG=quay.io/example/e2e-gpzo:v0.0.1 Running Helm projects built with operator-sdk should run correctly locally /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_local_test.go:39 STEP: installing CRD's running: make install STEP: running the project STEP: killing the project STEP: uninstalling CRD's running: make uninstall • ------------------------------ Testing Helm Projects with Scorecard with operator-sdk should work successfully with scorecard /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_scorecard_test.go:38 STEP: running basic scorecard tests running: operator-sdk scorecard bundle --selector=suite=basic --output=json --wait-time=60s STEP: running olm scorecard tests running: operator-sdk scorecard bundle --selector=suite=olm --output=json --wait-time=60s - Name: olm-crds-have-validation Expected: fail Output: fail - Name: olm-bundle-validation Expected: pass Output: pass - Name: olm-status-descriptors Expected: fail Output: fail - Name: olm-crds-have-resources Expected: fail Output: fail - Name: olm-spec-descriptors Expected: fail Output: fail • [SLOW TEST:11.879 seconds] Testing Helm Projects with Scorecard /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_scorecard_test.go:28 with operator-sdk /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_scorecard_test.go:29 should work successfully with scorecard 
/Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_scorecard_test.go:38 ------------------------------ Running Helm projects built with operator-sdk should run correctly in a cluster /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_cluster_test.go:72 STEP: enabling Prometheus via the kustomization.yaml STEP: deploying project on the cluster running: make deploy IMG=quay.io/example/e2e-gpzo:v0.0.1 STEP: checking if the Operator project Pod is running STEP: getting the controller-manager pod name running: kubectl -n e2e-gpzo-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: getting the controller-manager pod name running: kubectl -n e2e-gpzo-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-gpzo-system get pods e2e-gpzo-controller-manager-6674f987fc-l8jdx -o jsonpath={.status.phase} STEP: getting the controller-manager pod name running: kubectl -n e2e-gpzo-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-gpzo-system get pods e2e-gpzo-controller-manager-6674f987fc-l8jdx -o jsonpath={.status.phase} STEP: getting the controller-manager pod name running: kubectl -n e2e-gpzo-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-gpzo-system get pods e2e-gpzo-controller-manager-6674f987fc-l8jdx -o jsonpath={.status.phase} STEP: ensuring the created ServiceMonitor for the manager running: kubectl -n e2e-gpzo-system get ServiceMonitor e2e-gpzo-controller-manager-metrics-monitor STEP: ensuring the created metrics Service for the manager running: kubectl -n e2e-gpzo-system get Service e2e-gpzo-controller-manager-metrics-service STEP: creating an instance of release(CR) running: kubectl apply -f config/samples/bargpzo_v1alpha1_foogpzo.yaml STEP: ensuring the CR gets reconciled and the release was Installed running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs 
e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager STEP: getting the release name running: kubectl get Foogpzo -o jsonpath={..status.deployedRelease.name} STEP: checking the release(CR) deployment status running: kubectl rollout status deployment foogpzo-sample STEP: ensuring the created Service for the release(CR) running: kubectl get Service -l app.kubernetes.io/instance=foogpzo-sample -o jsonpath={..metadata.name} STEP: scaling deployment replicas to 2 running: kubectl scale deployment foogpzo-sample --replicas 2 STEP: verifying the deployment automatically scales back down to 1 running: kubectl get deployment foogpzo-sample -o jsonpath={..spec.replicas} running: kubectl get deployment foogpzo-sample -o jsonpath={..spec.replicas} running: kubectl get deployment foogpzo-sample -o jsonpath={..spec.replicas} running: kubectl get deployment foogpzo-sample -o jsonpath={..spec.replicas} running: kubectl get deployment foogpzo-sample -o jsonpath={..spec.replicas} STEP: updating replicaCount to 2 in the CR manifest STEP: applying CR manifest with replicaCount: 2 running: kubectl apply -f config/samples/bargpzo_v1alpha1_foogpzo.yaml STEP: ensuring the CR gets reconciled and the release was Upgraded running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager STEP: checking Deployment replicas spec is equals 2 running: kubectl get deployment foogpzo-sample -o jsonpath={..spec.replicas} STEP: granting permissions to access the metrics and read the token running: kubectl create clusterrolebinding metrics-gpzo --clusterrole=e2e-gpzo-metrics-reader --serviceaccount=e2e-gpzo-system:default STEP: getting the token running: kubectl -n e2e-gpzo-system get secrets -o=jsonpath={.items[0].data.token} STEP: creating a pod with curl image running: kubectl -n e2e-gpzo-system run --generator=run-pod/v1 curl --image=curlimages/curl:7.68.0 --restart=OnFailure -- curl -v -k -H Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Imwyemo1TzZuaUwza3BiYmh4LWpvc0E1MXNic2JJcGExT3RXa0J3cDhtbjQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtZ3B6by1zeXN0ZW0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi12cGhwcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2VhOTY5OTQtNzQ2Zi00MzQxLTg3NDMtYjIzY2NiMzA5NjJlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS1ncHpvLXN5c3RlbTpkZWZhdWx0In0.WVHm3VPGBFPcyeK35QyxZUN6CeBt0_3_FxOboF4PHmId0ioH0XpotldWW7fenbp_Imc3wAYAmF886mxRnbp23X9LA9S2-eIILL7TJQWG9JohGbW5ugkQEDWNdY1jF85LMjXxjrir7C_lhZoZyKIaMWdsu_6Z7e_jsiesJIbABImKkpW9rdB8e2Rx2R0LBjQXzUXwq-WAXVKdm8asbiKoNyS-CxOioPlADj9qP5DkqeTQn0T2ro4iq9OByHh9L7xRvjSqsPtofby7U01-r_-6lMU6iwuvmVygk6B3eic-hVC7C9L1EX7LLNFByAvxUuvChZNjAwVu3PvssXlODZKnXg 
https://e2e-gpzo-controller-manager-metrics-service.e2e-gpzo-system.svc:8443/metrics STEP: validating the curl pod running as expected running: kubectl -n e2e-gpzo-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-gpzo-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-gpzo-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-gpzo-system get pods curl -o jsonpath={.status.phase} STEP: checking metrics endpoint serving as expected running: kubectl -n e2e-gpzo-system logs curl STEP: getting the CR namespace token running: kubectl get Foogpzo foogpzo-sample -o=jsonpath={..metadata.namespace} STEP: ensuring the operator metrics contains a `resource_created_at` metric for the CR running: kubectl -n e2e-gpzo-system logs curl STEP: deleting CR manifest running: kubectl delete -f config/samples/bargpzo_v1alpha1_foogpzo.yaml STEP: ensuring the CR gets reconciled and the release was Uninstalled running: kubectl -n e2e-gpzo-system logs e2e-gpzo-controller-manager-6674f987fc-l8jdx -c manager STEP: deleting Curl Pod created running: kubectl -n e2e-gpzo-system delete pod curl STEP: deleting CR instances created running: kubectl delete -f config/samples/bargpzo_v1alpha1_foogpzo.yaml STEP: cleaning up permissions running: kubectl delete clusterrolebinding metrics-gpzo STEP: undeploy project running: make undeploy STEP: ensuring that the namespace was deleted running: kubectl get namespace e2e-gpzo-system • [SLOW TEST:65.121 seconds] Running Helm projects /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_cluster_test.go:31 built with operator-sdk /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_cluster_test.go:34 should run correctly in a cluster /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_cluster_test.go:72 ------------------------------ Integrating Helm Projects with OLM with operator-sdk should generate and run a valid OLM bundle and packagemanifests /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_olm_test.go:28 STEP: building the operator bundle image running: make bundle-build BUNDLE_IMG=quay.io/example/e2e-gpzo-bundle:v0.0.1 STEP: loading the bundle image into Kind cluster running: kind load docker-image --name operator-sdk-e2e quay.io/example/e2e-gpzo-bundle:v0.0.1 STEP: adding the 'packagemanifests' rule to the Makefile STEP: generating the operator package manifests running: make packagemanifests IMG=quay.io/example/e2e-gpzo:v0.0.1 STEP: running the package running: operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m • Failure [251.203 seconds] Integrating Helm Projects with OLM /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_olm_test.go:24 with operator-sdk /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_olm_test.go:25 should generate and run a valid OLM bundle and packagemanifests [It] /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_olm_test.go:28 Unexpected error: <*errors.errorString | 0xc000176050>: { s: "operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m failed with error: time=\"2020-10-31T19:18:04-03:00\" level=info msg=\"Creating e2e-gpzo registry\"\ntime=\"2020-10-31T19:18:04-03:00\" level=info msg=\" Creating ConfigMap 
\\\"default/e2e-gpzo-registry-manifests-package\\\"\"\ntime=\"2020-10-31T19:18:04-03:00\" level=info msg=\" Creating ConfigMap \\\"default/e2e-gpzo-registry-manifests-0-0-1\\\"\"\ntime=\"2020-10-31T19:18:04-03:00\" level=info msg=\" Creating Deployment \\\"default/e2e-gpzo-registry-server\\\"\"\ntime=\"2020-10-31T19:18:04-03:00\" level=info msg=\" Creating Service \\\"default/e2e-gpzo-registry-server\\\"\"\ntime=\"2020-10-31T19:18:04-03:00\" level=info msg=\"Waiting for Deployment \\\"default/e2e-gpzo-registry-server\\\" rollout to complete\"\ntime=\"2020-10-31T19:18:04-03:00\" level=info msg=\"Waiting for Deployment \\\"default/e2e-gpzo-registry-server\\\" to rollout: waiting for deployment spec update to be observed\"\ntime=\"2020-10-31T19:18:05-03:00\" level=info msg=\" Waiting for Deployment \\\"default/e2e-gpzo-registry-server\\\" to rollout: 0 of 1 updated replicas are available\"\ntime=\"2020-10-31T19:18:14-03:00\" level=info msg=\" Deployment \\\"default/e2e-gpzo-registry-server\\\" successfully rolled out\"\ntime=\"2020-10-31T19:18:14-03:00\" level=info msg=\"Created CatalogSource: e2e-gpzo-catalog\"\ntime=\"2020-10-31T19:18:15-03:00\" level=info msg=\"Created Subscription: e2e-gpzo-v0-0-1-sub\"\ntime=\"2020-10-31T19:22:03-03:00\" level=fatal msg=\"Failed to run packagemanifests: install plan is not available for the subscription e2e-gpzo-v0-0-1-sub: Get \\\"https://127.0.0.1:53721/apis/operators.coreos.com/v1alpha1/namespaces/default/subscriptions/e2e-gpzo-v0-0-1-sub\\\": context deadline exceeded\\n\"\n", } operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m failed with error: time="2020-10-31T19:18:04-03:00" level=info msg="Creating e2e-gpzo registry" time="2020-10-31T19:18:04-03:00" level=info msg=" Creating ConfigMap \"default/e2e-gpzo-registry-manifests-package\"" time="2020-10-31T19:18:04-03:00" level=info msg=" Creating ConfigMap \"default/e2e-gpzo-registry-manifests-0-0-1\"" time="2020-10-31T19:18:04-03:00" level=info msg=" Creating Deployment \"default/e2e-gpzo-registry-server\"" time="2020-10-31T19:18:04-03:00" level=info msg=" Creating Service \"default/e2e-gpzo-registry-server\"" time="2020-10-31T19:18:04-03:00" level=info msg="Waiting for Deployment \"default/e2e-gpzo-registry-server\" rollout to complete" time="2020-10-31T19:18:04-03:00" level=info msg="Waiting for Deployment \"default/e2e-gpzo-registry-server\" to rollout: waiting for deployment spec update to be observed" time="2020-10-31T19:18:05-03:00" level=info msg=" Waiting for Deployment \"default/e2e-gpzo-registry-server\" to rollout: 0 of 1 updated replicas are available" time="2020-10-31T19:18:14-03:00" level=info msg=" Deployment \"default/e2e-gpzo-registry-server\" successfully rolled out" time="2020-10-31T19:18:14-03:00" level=info msg="Created CatalogSource: e2e-gpzo-catalog" time="2020-10-31T19:18:15-03:00" level=info msg="Created Subscription: e2e-gpzo-v0-0-1-sub" time="2020-10-31T19:22:03-03:00" level=fatal msg="Failed to run packagemanifests: install plan is not available for the subscription e2e-gpzo-v0-0-1-sub: Get \"https://127.0.0.1:53721/apis/operators.coreos.com/v1alpha1/namespaces/default/subscriptions/e2e-gpzo-v0-0-1-sub\": context deadline exceeded\n" occurred /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_olm_test.go:53 ------------------------------ STEP: uninstalling prerequisites STEP: uninstalling Prometheus running: kubectl delete -f 
https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml error when running kubectl delete during cleaning up prometheus bundle: kubectl delete -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml failed with error: Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": clusterrolebindings.rbac.authorization.k8s.io "prometheus-operator" not found Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": clusterroles.rbac.authorization.k8s.io "prometheus-operator" not found Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": deployments.apps "prometheus-operator" not found Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": serviceaccounts "prometheus-operator" not found Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": services "prometheus-operator" not found STEP: uninstalling OLM running: operator-sdk olm uninstall warning: error when uninstalling OLM: operator-sdk olm uninstall failed with error: time="2020-10-31T19:22:05-03:00" level=fatal msg="Failed to uninstall OLM: error getting installed OLM version (set --version to override the default version): no existing installation found" STEP: destroying container image and work dir running: docker rmi -f quay.io/example/e2e-gpzo:v0.0.1 Summarizing 1 Failure: [Fail] Integrating Helm Projects with OLM with operator-sdk [It] should generate and run a valid OLM bundle and packagemanifests /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-helm/e2e_helm_olm_test.go:53 Ran 4 of 4 Specs in 365.541 seconds FAIL! -- 3 Passed | 1 Failed | 0 Pending | 0 Skipped --- FAIL: TestE2EHelm (365.54s) FAIL FAIL github.com/operator-framework/operator-sdk/test/e2e-helm 366.019s FAIL make: *** [Makefile:147: test-e2e-helm] Error 1
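
To narrow down why the install plan never becomes available, these are the checks I would run against the kind cluster (resource names taken from the helm run above; `operator-sdk olm status` reports whether OLM is actually installed in the cluster):

```
kubectl -n default get subscription e2e-gpzo-v0-0-1-sub -o yaml
kubectl -n default get installplans
kubectl -n default get catalogsource e2e-gpzo-catalog -o yaml
kubectl -n olm get pods
operator-sdk olm status
```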
`make test-e2e-ansible` $ make test-e2e-ansible go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.1.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.1.0-7-gcc409fb3' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=cc409fb3b0b8785a794e503e2d2238fd950652ce' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' " -o build ./cmd/{operator-sdk,ansible-operator,helm-operator} tools/scripts/fetch kind 0.9.0 tools/scripts/fetch envtest 0.6.3 tools/scripts/fetch kubectl 1.18.8 # Install kubectl AFTER envtest because envtest includes its own kubectl binary [[ "`tools/bin/kind get clusters`" =~ "operator-sdk-e2e" ]] || tools/bin/kind create cluster --image="kindest/node:v1.18.8" --name operator-sdk-e2e go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -o build/_image/scorecard-test ./images/scorecard-test mkdir -p ./images/scorecard-test/bin && mv build/_image/scorecard-test ./images/scorecard-test/bin docker build -t quay.io/operator-framework/scorecard-test:dev -f ./images/scorecard-test/Dockerfile ./images/scorecard-test Sending build context to Docker daemon 45.91MB Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest ---> 28095021e526 Step 2/8 : ENV HOME=/opt/scorecard-test USER_NAME=scorecard-test USER_UID=1001 ---> Using cache ---> 05c0cf94e89d Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd ---> Using cache ---> def95b3859af Step 4/8 : WORKDIR ${HOME} ---> Using cache ---> 8667c6ce0750 Step 5/8 : ARG BIN=bin/scorecard-test ---> Using cache ---> a8bd4c5f05a7 Step 6/8 : COPY $BIN /usr/local/bin/scorecard-test ---> Using cache ---> 067257a8e189 Step 7/8 : ENTRYPOINT ["/usr/local/bin/scorecard-test"] ---> Using cache ---> c7b2662b4921 Step 8/8 : USER ${USER_UID} ---> Using cache ---> 4227e78b2a78 Successfully built 4227e78b2a78 Successfully tagged quay.io/operator-framework/scorecard-test:dev rm -rf build/_image go build -gcflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/camilamacedo/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.1.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.1.0-7-gcc409fb3' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=cc409fb3b0b8785a794e503e2d2238fd950652ce' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' " -o build/_image/ansible-operator ./cmd/ansible-operator mkdir -p ./images/ansible-operator/bin && mv build/_image/ansible-operator ./images/ansible-operator/bin docker build -t quay.io/operator-framework/ansible-operator:dev -f ./images/ansible-operator/Dockerfile ./images/ansible-operator Sending build context to Docker daemon 47.59MB Step 1/11 : FROM registry.access.redhat.com/ubi8/ubi:latest ---> ecbc6f53bba0 Step 2/11 : RUN mkdir -p /etc/ansible && echo "localhost ansible_connection=local" > /etc/ansible/hosts && echo '[defaults]' > /etc/ansible/ansible.cfg && echo 'roles_path = /opt/ansible/roles' >> /etc/ansible/ansible.cfg 
&& echo 'library = /usr/share/ansible/openshift' >> /etc/ansible/ansible.cfg ---> Using cache ---> cf4e6ba38b7e Step 3/11 : ENV HOME=/opt/ansible USER_NAME=ansible USER_UID=1001 ---> Using cache ---> 322349ac6922 Step 4/11 : RUN yum clean all && rm -rf /var/cache/yum/* && yum -y update && yum install -y libffi-devel openssl-devel python36-devel gcc python3-pip python3-setuptools && pip3 install --no-cache-dir ipaddress ansible-runner==1.3.4 ansible-runner-http==1.0.0 openshift~=0.10.0 ansible~=2.9 jmespath && yum remove -y gcc libffi-devel openssl-devel python36-devel && yum clean all && rm -rf /var/cache/yum ---> Using cache ---> 04fb436750f5 Step 5/11 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd && mkdir -p ${HOME}/.ansible/tmp && chown -R ${USER_UID}:0 ${HOME} && chmod -R ug+rwx ${HOME} ---> Using cache ---> fa85931efb44 Step 6/11 : RUN TINIARCH=$(case $(arch) in x86_64) echo -n amd64 ;; ppc64le) echo -n ppc64el ;; aarch64) echo -n arm64 ;; *) echo -n $(arch) ;; esac) && curl -L -o /tini https://github.com/krallin/tini/releases/latest/download/tini-$TINIARCH && chmod +x /tini ---> Using cache ---> e0eb904bf014 Step 7/11 : WORKDIR ${HOME} ---> Using cache ---> ebb63af3af85 Step 8/11 : USER ${USER_UID} ---> Using cache ---> 90eabf1432a4 Step 9/11 : ARG BIN=bin/ansible-operator ---> Using cache ---> 43a5097ef691 Step 10/11 : COPY $BIN /usr/local/bin/ansible-operator ---> Using cache ---> 987a44313bbf Step 11/11 : ENTRYPOINT ["/tini", "--", "/usr/local/bin/ansible-operator", "run", "--watches-file=./watches.yaml"] ---> Using cache ---> 90281e7bb443 Successfully built 90281e7bb443 Successfully tagged quay.io/operator-framework/ansible-operator:dev rm -rf build/_image go test -count=1 ./internal/ansible/proxy/... ok github.com/operator-framework/operator-sdk/internal/ansible/proxy 3.094s ? github.com/operator-framework/operator-sdk/internal/ansible/proxy/controllermap [no test files] ? github.com/operator-framework/operator-sdk/internal/ansible/proxy/kubeconfig [no test files] ? 
github.com/operator-framework/operator-sdk/internal/ansible/proxy/requestfactory [no test files] go test ./test/e2e-ansible -v -ginkgo.v === RUN TestE2EAnsible Running Suite: E2EAnsible Suite =============================== Random Seed: 1604184646 Will run 4 of 4 specs STEP: creating a new test context STEP: creating the repository preparing testing directory: /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-hdva STEP: fetching the current-context running: kubectl config current-context STEP: preparing the prerequisites on cluster STEP: checking API resources applied on Cluster running: kubectl api-resources STEP: setting domain and GVK STEP: initializing a ansible project running: operator-sdk init --plugins ansible --project-version 3-alpha --domain example.com STEP: using dev image for scorecard-test STEP: creating the Memcached API running: operator-sdk create api --group ansible --version v1alpha1 --kind Memcached --generate-playbook --generate-role STEP: replacing project Dockerfile to use ansible base image with the dev tag STEP: adding Memcached mock task to the role STEP: setting defaults to Memcached STEP: updating Memcached sample STEP: creating an API definition to add a task to delete the config map running: operator-sdk create api --group ansible --version v1alpha1 --kind Memfin --generate-role STEP: adding task to delete config map STEP: adding to watches finalizer and blacklist STEP: create API to test watching multiple GVKs running: operator-sdk create api --group ansible --version v1alpha1 --kind Foo --generate-role STEP: adding RBAC permissions for the Memcached Kind STEP: turning off interactive prompts for all generation tasks. STEP: checking the kustomize setup running: make kustomize STEP: building the project image running: make docker-build IMG=quay.io/example/e2e-hdva:v0.0.1 STEP: loading the required images into Kind cluster running: kind load docker-image quay.io/example/e2e-hdva:v0.0.1 --name operator-sdk-e2e running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/scorecard-test:dev STEP: building the bundle running: make bundle IMG=quay.io/example/e2e-hdva:v0.0.1 Running ansible projects built with operator-sdk should run correctly locally /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_local_test.go:39 STEP: installing CRD's running: make install STEP: running the project STEP: killing the project STEP: uninstalling CRD's running: make uninstall • [SLOW TEST:6.113 seconds] Running ansible projects /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_local_test.go:24 built with operator-sdk /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_local_test.go:25 should run correctly locally /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_local_test.go:39 ------------------------------ Testing Ansible Projects with Scorecard with operator-sdk should work successfully with scorecard /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:37 STEP: running basic scorecard tests running: operator-sdk scorecard bundle --selector=suite=basic --output=json --wait-time=60s STEP: running olm scorecard tests running: operator-sdk scorecard bundle --selector=suite=olm --output=json --wait-time=60s - Name: olm-status-descriptors Expected: fail Output: fail - 
Name: olm-crds-have-validation Expected: fail Output: fail - Name: olm-spec-descriptors Expected: fail Output: fail - Name: olm-crds-have-resources Expected: fail Output: fail - Name: olm-bundle-validation Expected: pass Output: pass • [SLOW TEST:18.333 seconds] Testing Ansible Projects with Scorecard /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:27 with operator-sdk /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:28 should work successfully with scorecard /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:37 ------------------------------ Integrating ansible Projects with OLM with operator-sdk should generate and run a valid OLM bundle and packagemanifests /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:28 STEP: turning off interactive prompts for all generation tasks. STEP: building the bundle running: make bundle IMG=quay.io/example/e2e-hdva:v0.0.1 STEP: building the operator bundle image running: make bundle-build BUNDLE_IMG=quay.io/example/e2e-hdva-bundle:v0.0.1 STEP: loading the bundle image into Kind cluster running: kind load docker-image --name operator-sdk-e2e quay.io/example/e2e-hdva-bundle:v0.0.1 STEP: adding the 'packagemanifests' rule to the Makefile STEP: generating the operator package manifests running: make packagemanifests IMG=quay.io/example/e2e-hdva:v0.0.1 STEP: running the package running: operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m • Failure [257.639 seconds] Integrating ansible Projects with OLM /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:24 with operator-sdk /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:25 should generate and run a valid OLM bundle and packagemanifests [It] /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:28 Unexpected error: <*errors.errorString | 0xc00008f490>: { s: "operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m failed with error: time=\"2020-10-31T19:53:58-03:00\" level=info msg=\"Creating e2e-hdva registry\"\ntime=\"2020-10-31T19:53:58-03:00\" level=info msg=\" Creating ConfigMap \\\"default/e2e-hdva-registry-manifests-package\\\"\"\ntime=\"2020-10-31T19:53:58-03:00\" level=info msg=\" Creating ConfigMap \\\"default/e2e-hdva-registry-manifests-0-0-1\\\"\"\ntime=\"2020-10-31T19:53:58-03:00\" level=info msg=\" Creating Deployment \\\"default/e2e-hdva-registry-server\\\"\"\ntime=\"2020-10-31T19:53:58-03:00\" level=info msg=\" Creating Service \\\"default/e2e-hdva-registry-server\\\"\"\ntime=\"2020-10-31T19:54:00-03:00\" level=info msg=\"Waiting for Deployment \\\"default/e2e-hdva-registry-server\\\" rollout to complete\"\ntime=\"2020-10-31T19:54:00-03:00\" level=info msg=\" Waiting for Deployment \\\"default/e2e-hdva-registry-server\\\" to rollout: 0 out of 1 new replicas have been updated\"\ntime=\"2020-10-31T19:54:01-03:00\" level=info msg=\" Waiting for Deployment \\\"default/e2e-hdva-registry-server\\\" to rollout: 0 of 1 updated replicas are available\"\ntime=\"2020-10-31T19:54:08-03:00\" level=info msg=\" Deployment \\\"default/e2e-hdva-registry-server\\\" successfully rolled 
out\"\ntime=\"2020-10-31T19:54:08-03:00\" level=info msg=\"Created CatalogSource: e2e-hdva-catalog\"\ntime=\"2020-10-31T19:54:09-03:00\" level=info msg=\"Created Subscription: e2e-hdva-v0-0-1-sub\"\ntime=\"2020-10-31T19:57:58-03:00\" level=fatal msg=\"Failed to run packagemanifests: install plan is not available for the subscription e2e-hdva-v0-0-1-sub: timed out waiting for the condition\\n\"\n", } operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m failed with error: time="2020-10-31T19:53:58-03:00" level=info msg="Creating e2e-hdva registry" time="2020-10-31T19:53:58-03:00" level=info msg=" Creating ConfigMap \"default/e2e-hdva-registry-manifests-package\"" time="2020-10-31T19:53:58-03:00" level=info msg=" Creating ConfigMap \"default/e2e-hdva-registry-manifests-0-0-1\"" time="2020-10-31T19:53:58-03:00" level=info msg=" Creating Deployment \"default/e2e-hdva-registry-server\"" time="2020-10-31T19:53:58-03:00" level=info msg=" Creating Service \"default/e2e-hdva-registry-server\"" time="2020-10-31T19:54:00-03:00" level=info msg="Waiting for Deployment \"default/e2e-hdva-registry-server\" rollout to complete" time="2020-10-31T19:54:00-03:00" level=info msg=" Waiting for Deployment \"default/e2e-hdva-registry-server\" to rollout: 0 out of 1 new replicas have been updated" time="2020-10-31T19:54:01-03:00" level=info msg=" Waiting for Deployment \"default/e2e-hdva-registry-server\" to rollout: 0 of 1 updated replicas are available" time="2020-10-31T19:54:08-03:00" level=info msg=" Deployment \"default/e2e-hdva-registry-server\" successfully rolled out" time="2020-10-31T19:54:08-03:00" level=info msg="Created CatalogSource: e2e-hdva-catalog" time="2020-10-31T19:54:09-03:00" level=info msg="Created Subscription: e2e-hdva-v0-0-1-sub" time="2020-10-31T19:57:58-03:00" level=fatal msg="Failed to run packagemanifests: install plan is not available for the subscription e2e-hdva-v0-0-1-sub: timed out waiting for the condition\n" occurred /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:61 ------------------------------ Running ansible projects built with operator-sdk should run correctly in a cluster /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:84 STEP: checking samples STEP: enabling Prometheus via the kustomization.yaml STEP: deploying project on the cluster running: make deploy IMG=quay.io/example/e2e-hdva:v0.0.1 STEP: checking if the Operator project Pod is running STEP: getting the controller-manager pod name running: kubectl -n e2e-hdva-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-hdva-system get pods e2e-hdva-controller-manager-5d6486776c-nmcg9 -o jsonpath={.status.phase} STEP: getting the controller-manager pod name running: kubectl -n e2e-hdva-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-hdva-system get pods e2e-hdva-controller-manager-5d6486776c-nmcg9 -o jsonpath={.status.phase} STEP: getting the controller-manager pod 
name running: kubectl -n e2e-hdva-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-hdva-system get pods e2e-hdva-controller-manager-5d6486776c-nmcg9 -o jsonpath={.status.phase} STEP: getting the controller-manager pod name running: kubectl -n e2e-hdva-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-hdva-system get pods e2e-hdva-controller-manager-5d6486776c-nmcg9 -o jsonpath={.status.phase} STEP: getting the controller-manager pod name running: kubectl -n e2e-hdva-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} STEP: ensuring the created controller-manager Pod STEP: checking the controller-manager Pod is running running: kubectl -n e2e-hdva-system get pods e2e-hdva-controller-manager-5d6486776c-nmcg9 -o jsonpath={.status.phase} STEP: ensuring the created ServiceMonitor for the manager running: kubectl -n e2e-hdva-system get ServiceMonitor e2e-hdva-controller-manager-metrics-monitor STEP: ensuring the created metrics Service for the manager running: kubectl -n e2e-hdva-system get Service e2e-hdva-controller-manager-metrics-service STEP: create custom resource (Memcached CR) running: kubectl apply -f /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-hdva/config/samples/ansible_v1alpha1_memcached.yaml STEP: create custom resource (Foo CR) running: kubectl apply -f /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-hdva/config/samples/ansible_v1alpha1_foo.yaml STEP: create custom resource (Memfin CR) running: kubectl apply -f /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-hdva/config/samples/ansible_v1alpha1_memfin.yaml STEP: ensuring the CR gets reconciled running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs 
e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager running: kubectl -n e2e-hdva-system logs e2e-hdva-controller-manager-5d6486776c-nmcg9 -c manager STEP: deleting Curl Pod created running: kubectl delete pod curl STEP: deleting CR instances created running: kubectl delete -f /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-hdva/config/samples/ansible_v1alpha1_memcached.yaml panic: test timed out after 10m0s goroutine 114 [running]: testing.(*M).startAlarm.func1() /usr/local/go/src/testing/testing.go:1628 +0xe5 created by time.goFunc /usr/local/go/src/time/sleep.go:167 +0x45 goroutine 1 [chan receive]: testing.(*T).Run(0xc000001b00, 0x16b4ee1, 0xe, 0x16e62a8, 0x1092a66) /usr/local/go/src/testing/testing.go:1179 +0x3ad testing.runTests.func1(0xc000001980) /usr/local/go/src/testing/testing.go:1449 +0x78 testing.tRunner(0xc000001980, 0xc00019dde0) /usr/local/go/src/testing/testing.go:1127 +0xef testing.runTests(0xc0001f07e0, 0x1b69350, 0x1, 0x1, 0xbfdf990798d97340, 0x8bb31885e8, 0x1b8e420, 0x100d6f0) /usr/local/go/src/testing/testing.go:1447 +0x2e8 testing.(*M).Run(0xc00017ea80, 0x0) /usr/local/go/src/testing/testing.go:1357 +0x245 main.main() _testmain.go:43 +0x138 goroutine 6 [chan receive]: k8s.io/klog.(*loggingT).flushDaemon(0x1b8e720) /Users/camilamacedo/go/pkg/mod/k8s.io/klog@v1.0.0/klog.go:1010 +0x8b created by k8s.io/klog.init.0 /Users/camilamacedo/go/pkg/mod/k8s.io/klog@v1.0.0/klog.go:411 +0xd8 goroutine 7 [syscall]: syscall.syscall6(0x1085000, 0x431b, 0xc00019ce64, 0x0, 0xc000524000, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/runtime/sys_darwin.go:85 +0x2e syscall.wait4(0x431b, 0xc00019ce64, 0x0, 0xc000524000, 0x90, 0x16935e0, 0xc00008c001) /usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x87 syscall.Wait4(0x431b, 0xc00019ceb4, 0x0, 0xc000524000, 0x0, 0x1, 0xc00019cf48) /usr/local/go/src/syscall/syscall_bsd.go:129 +0x51 os.(*Process).wait(0xc00008c030, 0x16e6a58, 0x16e6a60, 0x16e6a50) /usr/local/go/src/os/exec_unix.go:43 +0x85 os.(*Process).Wait(...) 
/usr/local/go/src/os/exec.go:125 os/exec.(*Cmd).Wait(0xc000238000, 0x0, 0x0) /usr/local/go/src/os/exec/exec.go:507 +0x65 os/exec.(*Cmd).Run(0xc000238000, 0xc0000a6090, 0xc000090000) /usr/local/go/src/os/exec/exec.go:341 +0x5c os/exec.(*Cmd).CombinedOutput(0xc000238000, 0xc00001e980, 0x16b3d75, 0xc, 0xc00019d058, 0x1) /usr/local/go/src/os/exec/exec.go:567 +0x91 sigs.k8s.io/kubebuilder/test/e2e/utils.(*CmdContext).Run(0xc00009a000, 0xc000238000, 0xc0000a6030, 0x3, 0x3, 0xc000238000, 0x0) /Users/camilamacedo/go/pkg/mod/sigs.k8s.io/kubebuilder@v1.0.9-0.20201021204649-36124ae2e027/test/e2e/utils/test_context.go:222 +0x225 sigs.k8s.io/kubebuilder/test/e2e/utils.(*Kubectl).Command(0xc000096080, 0xc0000a6030, 0x3, 0x3, 0x2, 0x2, 0x1, 0x3) /Users/camilamacedo/go/pkg/mod/sigs.k8s.io/kubebuilder@v1.0.9-0.20201021204649-36124ae2e027/test/e2e/utils/kubectl.go:34 +0x73 sigs.k8s.io/kubebuilder/test/e2e/utils.(*Kubectl).Delete(0xc000096080, 0x0, 0xc00019d208, 0x2, 0x2, 0xc00051a040, 0x34, 0x1768080, 0xc0001c4050) /Users/camilamacedo/go/pkg/mod/sigs.k8s.io/kubebuilder@v1.0.9-0.20201021204649-36124ae2e027/test/e2e/utils/kubectl.go:76 +0x153 github.com/operator-framework/operator-sdk/test/e2e-ansible_test.glob..func1.1.2() /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:62 +0x1a5 github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000033140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/leafnodes/runner.go:113 +0xa3 github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000033140, 0xc000331496, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/leafnodes/runner.go:64 +0xd7 github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00000e310, 0x17687e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/leafnodes/setup_nodes.go:15 +0x67 github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00019d848, 0xc0002962d0, 0x17687e0, 0xc00001e980) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/spec/spec.go:180 +0x36e github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0002962d0, 0x0, 0x17687e0, 0xc00001e980) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/spec/spec.go:218 +0x749 github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0002962d0, 0x17687e0, 0xc00001e980) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/spec/spec.go:138 +0xf2 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0001e5540, 0xc0002962d0, 0x0) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:200 +0x111 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0001e5540, 0x1) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:170 +0x127 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0001e5540, 0xc000027cc8) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:66 +0x117 github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000076280, 0xbb3bf00, 0xc000001b00, 0x16b61b7, 0x10, 0xc0001c4ad0, 0x1, 0x1, 0x17783a0, 0xc00001e980, ...) 
/Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/suite/suite.go:62 +0x426 github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x1768f20, 0xc000001b00, 0x16b61b7, 0x10, 0xc000058f18, 0x1, 0x1, 0x100e058) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/ginkgo_dsl.go:226 +0x238 github.com/onsi/ginkgo.RunSpecs(0x1768f20, 0xc000001b00, 0x16b61b7, 0x10, 0x5f9dea46) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/ginkgo_dsl.go:207 +0x168 github.com/operator-framework/operator-sdk/test/e2e-ansible_test.TestE2EAnsible(0xc000001b00) /Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_suite_test.go:35 +0xc9 testing.tRunner(0xc000001b00, 0x16e62a8) /usr/local/go/src/testing/testing.go:1127 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1178 +0x386 goroutine 8 [chan receive]: github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc0001e5540, 0xc000030de0) /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:223 +0xce created by github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run /Users/camilamacedo/go/pkg/mod/github.com/onsi/ginkgo@v1.12.1/internal/specrunner/spec_runner.go:60 +0x86 goroutine 10 [syscall]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:144 +0x9d os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:23 +0x25 created by os/signal.Notify.func1.1 /usr/local/go/src/os/signal/signal.go:150 +0x45 goroutine 113 [IO wait]: internal/poll.runtime_pollWait(0xbb3e668, 0x72, 0x17699e0) /usr/local/go/src/runtime/netpoll.go:220 +0x55 internal/poll.(*pollDesc).wait(0xc000428078, 0x72, 0xc0000a1201, 0x5c7, 0x5c7) /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45 internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:92 internal/poll.(*FD).Read(0xc000428060, 0xc0000a1239, 0x5c7, 0x5c7, 0x0, 0x0, 0x0) /usr/local/go/src/internal/poll/fd_unix.go:159 +0x1b1 os.(*File).read(...) /usr/local/go/src/os/file_posix.go:31 os.(*File).Read(0xc000514010, 0xc0000a1239, 0x5c7, 0x5c7, 0x39, 0x0, 0x0) /usr/local/go/src/os/file.go:116 +0x71 bytes.(*Buffer).ReadFrom(0xc0000a6090, 0x1768ce0, 0xc000514010, 0xbb51028, 0xc0000a6090, 0x1) /usr/local/go/src/bytes/buffer.go:204 +0xb1 io.copyBuffer(0x1767de0, 0xc0000a6090, 0x1768ce0, 0xc000514010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:395 +0x2ff io.Copy(...) /usr/local/go/src/io/io.go:368 os/exec.(*Cmd).writerDescriptor.func1(0x0, 0x0) /usr/local/go/src/os/exec/exec.go:311 +0x65 os/exec.(*Cmd).Start.func1(0xc000238000, 0xc0002020a0) /usr/local/go/src/os/exec/exec.go:441 +0x27 created by os/exec.(*Cmd).Start /usr/local/go/src/os/exec/exec.go:440 +0x629 FAIL github.com/operator-framework/operator-sdk/test/e2e-ansible 600.210s FAIL make: *** [Makefile:143: test-e2e-ansible] Error 1
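For context on the long run of repeated running: kubectl ... logs lines above: the e2e suite keeps polling the manager container's logs until an expected marker shows up, and when it never does, the whole test binary eventually hits its deadline (the panic: test timed out after 10m0s above). A rough shell equivalent of that poll loop, with the pod lookup taken from the commands above and the marker string, 60-iteration window, and one-second interval as assumptions:

    # look up the controller-manager pod, then poll its logs for the expected marker
    POD=$(kubectl -n e2e-hdva-system get pods -l control-plane=controller-manager -o jsonpath='{.items[0].metadata.name}')
    for i in $(seq 1 60); do
      # EXPECTED_MARKER stands in for whatever string the test assertion greps for
      kubectl -n e2e-hdva-system logs "$POD" -c manager | grep -q "$EXPECTED_MARKER" && break
      sleep 1
    done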

Environment

Kubernetes cluster type:

Kind

$ operator-sdk version

operator-sdk version: "v1.1.0-7-gcc409fb3", commit: "cc409fb3b0b8785a794e503e2d2238fd950652ce", kubernetes version: "v1.18.8", go version: "go1.15.2 darwin/amd64", GOOS: "darwin", GOARCH: "amd64"

$ go version

go1.15

Additional context

The issue started to occur after the changes made to the Makefile on master.

It might be related to the fact that the targets now always try to do locally the same setup that is done in Travis CI, which was not done before. See:

tools/scripts/fetch kind 0.9.0
tools/scripts/fetch envtest 0.6.3
tools/scripts/fetch kubectl 1.18.8 # Install kubectl AFTER envtest because envtest includes its own kubectl binary
[[ "tools/bin/kind get clusters" =~ "operator-sdk-e2e" ]] || tools/bin/kind create cluster --image="kindest/node:v1.18.8" --name operator-sdk-e2e
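If the e2e targets now create (or reuse) a local kind cluster like this, a stale operator-sdk-e2e cluster left over from a previous run could also explain the install-plan and reconcile timeouts. A quick way to check and reset it before re-running the targets (cluster name taken from the snippet above):

    tools/bin/kind get clusters                             # is operator-sdk-e2e already there?
    tools/bin/kind delete cluster --name operator-sdk-e2e   # drop any stale cluster, then re-run the make target

Bumping the packagemanifests timeout (for example --timeout 8m instead of 4m) could also help rule out slow image pulls on a freshly created cluster.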

camilamacedo86 commented 4 years ago

c/c @estroz @joelanford

jberkhahn commented 4 years ago

So here's how my tests are failing. I just run make test-all. It runs for a bit, then starts to fail here, which looks to me like part of the sanity tests:

INFO[0048] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
go mod tidy
go fmt ./...
git diff --exit-code # fast-fail if generate or fix produced changes
diff --git a/testdata/ansible/memcached-operator/bundle.Dockerfile b/testdata/ansible/memcached-operator/bundle.Dockerfile
index 4f7ff821..a139efb1 100644
--- a/testdata/ansible/memcached-operator/bundle.Dockerfile
+++ b/testdata/ansible/memcached-operator/bundle.Dockerfile
@@ -9,4 +9,3 @@ LABEL operators.operatorframework.io.test.config.v1=tests/scorecard/
 LABEL operators.operatorframework.io.test.mediatype.v1=scorecard+v1
 COPY bundle/manifests /manifests/
 COPY bundle/metadata /metadata/
-COPY bundle/tests/scorecard /tests/scorecard/
diff --git a/testdata/ansible/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml b/testdata/ansible/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
deleted file mode 100644
index a24f806d..00000000
--- a/testdata/ansible/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-apiVersion: apiextensions.k8s.io/v1
-kind: CustomResourceDefinition
-metadata:
-  creationTimestamp: null
-  name: memcacheds.cache.example.com
-spec:
-  group: cache.example.com
-  names:
-    kind: Memcached
-    listKind: MemcachedList
-    plural: memcacheds
-    singular: memcached
-  scope: Namespaced
-  versions:
-  - name: v1alpha1
-    schema:
-      openAPIV3Schema:
-        description: Memcached is the Schema for the memcacheds API
-        properties:
-          apiVersion:
-            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
-            type: string
-          kind:
-            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
-            type: string
-          metadata:
-            type: object
-          spec:
-            description: Spec defines the desired state of Memcached
-            type: object
-            x-kubernetes-preserve-unknown-fields: true
-          status:
-            description: Status defines the observed state of Memcached
-            type: object
-            x-kubernetes-preserve-unknown-fields: true
-        type: object
-    served: true
-    storage: true
-    subresources:
-      status: {}
-status:
-  acceptedNames:
-    kind: ""
-    plural: ""
-  conditions: null
-  storedVersions: null

and a bunch more similar errors after that. I suspect this isn't actually an error with the testing, but rather an error with how my system is configured, but I can't figure it out.
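For what it's worth, the failure above looks like the sanity target's fast-fail check doing its job: it regenerates the sample projects with the locally built binaries and then fails if anything differs from what is committed, which is why a local environment difference can surface exactly like this. A minimal sketch of that check, using the commands visible in the log:

    go run ./hack/generate/samples/generate_testdata.go   # regenerate testdata with the locally built operator-sdk
    go mod tidy
    go fmt ./...
    git diff --exit-code   # non-zero exit (and the diff above) if regeneration changed any committed file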

estroz commented 4 years ago

@jberkhahn can you paste your full logs in a details box so I can see what else is happening:

<details><summary>Output</summary><br><pre>

your output

</pre></details>
jberkhahn commented 4 years ago
Output
jberkhahn@Purgatory> make test-all
go build -gcflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.2.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.2.0-4-gf20ea9e9' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=f20ea9e9496be3dab34a3652ec7cc7016ef1ccf2' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' -X 'github.com/operator-framework/operator-sdk/internal/version.ImageVersion=v1.2.0' "  -o build ./cmd/{operator-sdk,ansible-operator,helm-operator}
go run ./hack/generate/cncf-maintainers/main.go
go run ./hack/generate/cli-doc/gen-cli-doc.go
go run ./hack/generate/samples/generate_testdata.go
INFO[0000] using the path: (/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata) 
INFO[0000] creating Helm Memcached Sample               
INFO[0000] destroying directory for memcached helm samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0000] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/helm/memcached-operator
INFO[0000] setting domain and GVK                       
INFO[0000] creating the project                         
running: operator-sdk init --plugins helm --domain example.com
INFO[0001] handling work path to get helm chart mock data 
INFO[0001] using the helm chart in: (/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm/testdata/memcached-0.0.1.tgz) 
running: operator-sdk create api --group cache --version v1alpha1 --kind Memcached --helm-chart /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm/testdata/memcached-0.0.1.tgz
INFO[0001] customizing the sample                       
INFO[0001] enabling prometheus metrics                  
INFO[0001] adding customized roles                      
INFO[0001] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
INFO[0002] creating Ansible Memcached Sample            
INFO[0002] destroying directory for memcached Ansible samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0002] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/ansible/memcached-operator
INFO[0002] setting domain and GVK                       
INFO[0002] creating the project                         
running: operator-sdk init --plugins ansible --group cache --version v1alpha1 --kind Memcached --domain example.com --generate-role --generate-playbook
INFO[0002] customizing the sample                       
INFO[0002] adding Ansible task and variable             
INFO[0002] adding molecule test for Ansible task        
INFO[0002] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
INFO[0003] creating Go Memcached Sample with Webhooks   
INFO[0003] starting to generate Go memcached sample with webhooks 
INFO[0003] destroying directory for Memcached with Webhooks Go samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0003] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/go/memcached-operator
INFO[0003] setting domain and GVK                       
INFO[0003] creating the project                         
running: operator-sdk init --repo github.com/example/memcached-operator --domain example.com
running: operator-sdk create api --group cache --version v1alpha1 --kind Memcached --controller true --resource true
INFO[0017] implementing the API                         
INFO[0017] implementing MemcachedStatus                 
INFO[0017] implementing the Controller                  
INFO[0017] scaffolding webhook                          
running: operator-sdk create webhook --group cache --version v1alpha1 --kind Memcached --defaulting --defaulting
INFO[0017] implementing webhooks                        
INFO[0017] uncomment kustomization.yaml to enable webhook and ca injection 
INFO[0017] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
go mod tidy
go fmt ./...
git diff --exit-code # fast-fail if generate or fix produced changes
diff --git a/testdata/ansible/memcached-operator/bundle.Dockerfile b/testdata/ansible/memcached-operator/bundle.Dockerfile
index 4f7ff821..a139efb1 100644
--- a/testdata/ansible/memcached-operator/bundle.Dockerfile
+++ b/testdata/ansible/memcached-operator/bundle.Dockerfile
@@ -9,4 +9,3 @@ LABEL operators.operatorframework.io.test.config.v1=tests/scorecard/
 LABEL operators.operatorframework.io.test.mediatype.v1=scorecard+v1
 COPY bundle/manifests /manifests/
 COPY bundle/metadata /metadata/
-COPY bundle/tests/scorecard /tests/scorecard/
diff --git a/testdata/ansible/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml b/testdata/ansible/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
deleted file mode 100644
index a24f806d..00000000
--- a/testdata/ansible/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-apiVersion: apiextensions.k8s.io/v1
-kind: CustomResourceDefinition
-metadata:
-  creationTimestamp: null
-  name: memcacheds.cache.example.com
-spec:
-  group: cache.example.com
-  names:
-    kind: Memcached
-    listKind: MemcachedList
-    plural: memcacheds
-    singular: memcached
-  scope: Namespaced
-  versions:
-  - name: v1alpha1
-    schema:
-      openAPIV3Schema:
-        description: Memcached is the Schema for the memcacheds API
-        properties:
-          apiVersion:
-            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
-            type: string
-          kind:
-            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
-            type: string
-          metadata:
-            type: object
-          spec:
-            description: Spec defines the desired state of Memcached
-            type: object
-            x-kubernetes-preserve-unknown-fields: true
-          status:
-            description: Status defines the observed state of Memcached
-            type: object
-            x-kubernetes-preserve-unknown-fields: true
-        type: object
-    served: true
-    storage: true
-    subresources:
-      status: {}
-status:
-  acceptedNames:
-    kind: ""
-    plural: ""
-  conditions: null
-  storedVersions: null
diff --git a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml b/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
deleted file mode 100644
index f9e131b0..00000000
--- a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  labels:
-    control-plane: controller-manager
-  name: memcached-operator-controller-manager-metrics-monitor
-spec:
-  endpoints:
-  - path: /metrics
-    port: https
-  selector:
-    matchLabels:
-      control-plane: controller-manager
diff --git a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml b/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml
deleted file mode 100644
index 157a0cef..00000000
--- a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  creationTimestamp: null
-  labels:
-    control-plane: controller-manager
-  name: memcached-operator-controller-manager-metrics-service
-spec:
-  ports:
-  - name: https
-    port: 8443
-    targetPort: https
-  selector:
-    control-plane: controller-manager
-status:
-  loadBalancer: {}
diff --git a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
deleted file mode 100644
index 42a2ae6a..00000000
--- a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  creationTimestamp: null
-  name: memcached-operator-metrics-reader
-rules:
-- nonResourceURLs:
-  - /metrics
-  verbs:
-  - get
diff --git a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml b/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
index 43f07252..675e2c80 100644
--- a/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
+++ b/testdata/ansible/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
@@ -2,29 +2,13 @@ apiVersion: operators.coreos.com/v1alpha1
 kind: ClusterServiceVersion
 metadata:
   annotations:
-    alm-examples: |-
-      [
-        {
-          "apiVersion": "cache.example.com/v1alpha1",
-          "kind": "Memcached",
-          "metadata": {
-            "name": "memcached-sample"
-          },
-          "spec": {
-            "size": 1
-          }
-        }
-      ]
+    alm-examples: '[]'
     capabilities: Basic Install
   name: memcached-operator.v0.0.1
   namespace: placeholder
 spec:
   apiservicedefinitions: {}
-  customresourcedefinitions:
-    owned:
-    - kind: Memcached
-      name: memcacheds.cache.example.com
-      version: v1alpha1
+  customresourcedefinitions: {}
   description: Memcached Operator description. TODO.
   displayName: Memcached Operator
   icon:
@@ -32,123 +16,7 @@ spec:
     mediatype: ""
   install:
     spec:
-      clusterPermissions:
-      - rules:
-        - apiGroups:
-          - ""
-          resources:
-          - secrets
-          - pods
-          - pods/exec
-          - pods/log
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - apps
-          resources:
-          - deployments
-          - daemonsets
-          - replicasets
-          - statefulsets
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - cache.example.com
-          resources:
-          - memcacheds
-          - memcacheds/status
-          - memcacheds/finalizers
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - authentication.k8s.io
-          resources:
-          - tokenreviews
-          verbs:
-          - create
-        - apiGroups:
-          - authorization.k8s.io
-          resources:
-          - subjectaccessreviews
-          verbs:
-          - create
-        serviceAccountName: default
-      deployments:
-      - name: memcached-operator-controller-manager
-        spec:
-          replicas: 1
-          selector:
-            matchLabels:
-              control-plane: controller-manager
-          strategy: {}
-          template:
-            metadata:
-              labels:
-                control-plane: controller-manager
-            spec:
-              containers:
-              - args:
-                - --secure-listen-address=0.0.0.0:8443
-                - --upstream=http://127.0.0.1:8080/
-                - --logtostderr=true
-                - --v=10
-                image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
-                name: kube-rbac-proxy
-                ports:
-                - containerPort: 8443
-                  name: https
-                resources: {}
-              - args:
-                - --metrics-addr=127.0.0.1:8080
-                - --enable-leader-election
-                - --leader-election-id=memcached-operator
-                env:
-                - name: ANSIBLE_GATHERING
-                  value: explicit
-                image: quay.io/example/memcached-operator:v0.0.1
-                name: manager
-                resources: {}
-              terminationGracePeriodSeconds: 10
-      permissions:
-      - rules:
-        - apiGroups:
-          - ""
-          resources:
-          - configmaps
-          verbs:
-          - get
-          - list
-          - watch
-          - create
-          - update
-          - patch
-          - delete
-        - apiGroups:
-          - ""
-          resources:
-          - events
-          verbs:
-          - create
-          - patch
-        serviceAccountName: default
+      deployments: []
     strategy: deployment
   installModes:
   - supported: false
diff --git a/testdata/ansible/memcached-operator/bundle/tests/scorecard/config.yaml b/testdata/ansible/memcached-operator/bundle/tests/scorecard/config.yaml
deleted file mode 100644
index e39a5d88..00000000
--- a/testdata/ansible/memcached-operator/bundle/tests/scorecard/config.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-apiVersion: scorecard.operatorframework.io/v1alpha3
-kind: Configuration
-metadata:
-  name: config
-stages:
-- parallel: true
-  tests:
-  - entrypoint:
-    - scorecard-test
-    - basic-check-spec
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: basic
-      test: basic-check-spec-test
-  - entrypoint:
-    - scorecard-test
-    - olm-bundle-validation
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-bundle-validation-test
-  - entrypoint:
-    - scorecard-test
-    - olm-crds-have-validation
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-crds-have-validation-test
-  - entrypoint:
-    - scorecard-test
-    - olm-crds-have-resources
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-crds-have-resources-test
-  - entrypoint:
-    - scorecard-test
-    - olm-spec-descriptors
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-spec-descriptors-test
-  - entrypoint:
-    - scorecard-test
-    - olm-status-descriptors
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-status-descriptors-test
diff --git a/testdata/go/memcached-operator/bundle.Dockerfile b/testdata/go/memcached-operator/bundle.Dockerfile
index 4f7ff821..a139efb1 100644
--- a/testdata/go/memcached-operator/bundle.Dockerfile
+++ b/testdata/go/memcached-operator/bundle.Dockerfile
@@ -9,4 +9,3 @@ LABEL operators.operatorframework.io.test.config.v1=tests/scorecard/
 LABEL operators.operatorframework.io.test.mediatype.v1=scorecard+v1
 COPY bundle/manifests /manifests/
 COPY bundle/metadata /metadata/
-COPY bundle/tests/scorecard /tests/scorecard/
diff --git a/testdata/go/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml b/testdata/go/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
deleted file mode 100644
index 1c31e731..00000000
--- a/testdata/go/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
+++ /dev/null
@@ -1,61 +0,0 @@
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
-  annotations:
-    controller-gen.kubebuilder.io/version: v0.3.0
-  creationTimestamp: null
-  name: memcacheds.cache.example.com
-spec:
-  group: cache.example.com
-  names:
-    kind: Memcached
-    listKind: MemcachedList
-    plural: memcacheds
-    singular: memcached
-  scope: Namespaced
-  subresources:
-    status: {}
-  validation:
-    openAPIV3Schema:
-      description: Memcached is the Schema for the memcacheds API
-      properties:
-        apiVersion:
-          description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
-          type: string
-        kind:
-          description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
-          type: string
-        metadata:
-          type: object
-        spec:
-          description: MemcachedSpec defines the desired state of Memcached
-          properties:
-            foo:
-              description: Foo is an example field of Memcached. Edit Memcached_types.go to remove/update
-              type: string
-            size:
-              description: Size defines the number of Memcached instances
-              format: int32
-              type: integer
-          type: object
-        status:
-          description: MemcachedStatus defines the observed state of Memcached
-          properties:
-            nodes:
-              description: Nodes store the name of the pods which are running Memcached instances
-              items:
-                type: string
-              type: array
-          type: object
-      type: object
-  version: v1alpha1
-  versions:
-  - name: v1alpha1
-    served: true
-    storage: true
-status:
-  acceptedNames:
-    kind: ""
-    plural: ""
-  conditions: []
-  storedVersions: []
diff --git a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml b/testdata/go/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
deleted file mode 100644
index f9e131b0..00000000
--- a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  labels:
-    control-plane: controller-manager
-  name: memcached-operator-controller-manager-metrics-monitor
-spec:
-  endpoints:
-  - path: /metrics
-    port: https
-  selector:
-    matchLabels:
-      control-plane: controller-manager
diff --git a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml b/testdata/go/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml
deleted file mode 100644
index 157a0cef..00000000
--- a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  creationTimestamp: null
-  labels:
-    control-plane: controller-manager
-  name: memcached-operator-controller-manager-metrics-service
-spec:
-  ports:
-  - name: https
-    port: 8443
-    targetPort: https
-  selector:
-    control-plane: controller-manager
-status:
-  loadBalancer: {}
diff --git a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/testdata/go/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
deleted file mode 100644
index 42a2ae6a..00000000
--- a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  creationTimestamp: null
-  name: memcached-operator-metrics-reader
-rules:
-- nonResourceURLs:
-  - /metrics
-  verbs:
-  - get
diff --git a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-webhook-service_v1_service.yaml b/testdata/go/memcached-operator/bundle/manifests/memcached-operator-webhook-service_v1_service.yaml
deleted file mode 100644
index 4c9ef443..00000000
--- a/testdata/go/memcached-operator/bundle/manifests/memcached-operator-webhook-service_v1_service.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  creationTimestamp: null
-  name: memcached-operator-webhook-service
-spec:
-  ports:
-  - port: 443
-    targetPort: 9443
-  selector:
-    control-plane: controller-manager
-status:
-  loadBalancer: {}
diff --git a/testdata/go/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml b/testdata/go/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
index 3f59a770..675e2c80 100644
--- a/testdata/go/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
+++ b/testdata/go/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
@@ -2,31 +2,13 @@ apiVersion: operators.coreos.com/v1alpha1
 kind: ClusterServiceVersion
 metadata:
   annotations:
-    alm-examples: |-
-      [
-        {
-          "apiVersion": "cache.example.com/v1alpha1",
-          "kind": "Memcached",
-          "metadata": {
-            "name": "memcached-sample"
-          },
-          "spec": {
-            "foo": "bar"
-          }
-        }
-      ]
+    alm-examples: '[]'
     capabilities: Basic Install
   name: memcached-operator.v0.0.1
   namespace: placeholder
 spec:
   apiservicedefinitions: {}
-  customresourcedefinitions:
-    owned:
-    - description: Memcached is the Schema for the memcacheds API
-      displayName: Memcached
-      kind: Memcached
-      name: memcacheds.cache.example.com
-      version: v1alpha1
+  customresourcedefinitions: {}
   description: Memcached Operator description. TODO.
   displayName: Memcached Operator
   icon:
@@ -34,149 +16,7 @@ spec:
     mediatype: ""
   install:
     spec:
-      clusterPermissions:
-      - rules:
-        - apiGroups:
-          - apps
-          resources:
-          - deployments
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - cache.example.com
-          resources:
-          - memcacheds
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - cache.example.com
-          resources:
-          - memcacheds/finalizers
-          verbs:
-          - update
-        - apiGroups:
-          - cache.example.com
-          resources:
-          - memcacheds/status
-          verbs:
-          - get
-          - patch
-          - update
-        - apiGroups:
-          - ""
-          resources:
-          - pods
-          verbs:
-          - get
-          - list
-        - apiGroups:
-          - authentication.k8s.io
-          resources:
-          - tokenreviews
-          verbs:
-          - create
-        - apiGroups:
-          - authorization.k8s.io
-          resources:
-          - subjectaccessreviews
-          verbs:
-          - create
-        serviceAccountName: default
-      deployments:
-      - name: memcached-operator-controller-manager
-        spec:
-          replicas: 1
-          selector:
-            matchLabels:
-              control-plane: controller-manager
-          strategy: {}
-          template:
-            metadata:
-              labels:
-                control-plane: controller-manager
-            spec:
-              containers:
-              - args:
-                - --secure-listen-address=0.0.0.0:8443
-                - --upstream=http://127.0.0.1:8080/
-                - --logtostderr=true
-                - --v=10
-                image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
-                name: kube-rbac-proxy
-                ports:
-                - containerPort: 8443
-                  name: https
-                resources: {}
-              - args:
-                - --metrics-addr=127.0.0.1:8080
-                - --enable-leader-election
-                command:
-                - /manager
-                image: quay.io/example/memcached-operator:v0.0.1
-                name: manager
-                ports:
-                - containerPort: 9443
-                  name: webhook-server
-                  protocol: TCP
-                resources:
-                  limits:
-                    cpu: 100m
-                    memory: 30Mi
-                  requests:
-                    cpu: 100m
-                    memory: 20Mi
-                volumeMounts:
-                - mountPath: /tmp/k8s-webhook-server/serving-certs
-                  name: cert
-                  readOnly: true
-              terminationGracePeriodSeconds: 10
-              volumes:
-              - name: cert
-                secret:
-                  defaultMode: 420
-                  secretName: webhook-server-cert
-      permissions:
-      - rules:
-        - apiGroups:
-          - ""
-          resources:
-          - configmaps
-          verbs:
-          - get
-          - list
-          - watch
-          - create
-          - update
-          - patch
-          - delete
-        - apiGroups:
-          - ""
-          resources:
-          - configmaps/status
-          verbs:
-          - get
-          - update
-          - patch
-        - apiGroups:
-          - ""
-          resources:
-          - events
-          verbs:
-          - create
-          - patch
-        serviceAccountName: default
+      deployments: []
     strategy: deployment
   installModes:
   - supported: false
@@ -200,44 +40,3 @@ spec:
     name: Provider Name
     url: https://your.domain
   version: 0.0.1
-  webhookdefinitions:
-  - admissionReviewVersions:
-    - v1beta1
-    containerPort: 443
-    deploymentName: memcached-operator-controller-manager
-    failurePolicy: Fail
-    generateName: vmemcached.kb.io
-    rules:
-    - apiGroups:
-      - cache.example.com
-      apiVersions:
-      - v1alpha1
-      operations:
-      - CREATE
-      - UPDATE
-      resources:
-      - memcacheds
-    sideEffects: None
-    targetPort: 9443
-    type: ValidatingAdmissionWebhook
-    webhookPath: /validate-cache-example-com-v1alpha1-memcached
-  - admissionReviewVersions:
-    - v1beta1
-    containerPort: 443
-    deploymentName: memcached-operator-controller-manager
-    failurePolicy: Fail
-    generateName: mmemcached.kb.io
-    rules:
-    - apiGroups:
-      - cache.example.com
-      apiVersions:
-      - v1alpha1
-      operations:
-      - CREATE
-      - UPDATE
-      resources:
-      - memcacheds
-    sideEffects: None
-    targetPort: 9443
-    type: MutatingAdmissionWebhook
-    webhookPath: /mutate-cache-example-com-v1alpha1-memcached
diff --git a/testdata/go/memcached-operator/bundle/tests/scorecard/config.yaml b/testdata/go/memcached-operator/bundle/tests/scorecard/config.yaml
deleted file mode 100644
index e39a5d88..00000000
--- a/testdata/go/memcached-operator/bundle/tests/scorecard/config.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-apiVersion: scorecard.operatorframework.io/v1alpha3
-kind: Configuration
-metadata:
-  name: config
-stages:
-- parallel: true
-  tests:
-  - entrypoint:
-    - scorecard-test
-    - basic-check-spec
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: basic
-      test: basic-check-spec-test
-  - entrypoint:
-    - scorecard-test
-    - olm-bundle-validation
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-bundle-validation-test
-  - entrypoint:
-    - scorecard-test
-    - olm-crds-have-validation
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-crds-have-validation-test
-  - entrypoint:
-    - scorecard-test
-    - olm-crds-have-resources
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-crds-have-resources-test
-  - entrypoint:
-    - scorecard-test
-    - olm-spec-descriptors
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-spec-descriptors-test
-  - entrypoint:
-    - scorecard-test
-    - olm-status-descriptors
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-status-descriptors-test
diff --git a/testdata/helm/memcached-operator/bundle.Dockerfile b/testdata/helm/memcached-operator/bundle.Dockerfile
index 4f7ff821..a139efb1 100644
--- a/testdata/helm/memcached-operator/bundle.Dockerfile
+++ b/testdata/helm/memcached-operator/bundle.Dockerfile
@@ -9,4 +9,3 @@ LABEL operators.operatorframework.io.test.config.v1=tests/scorecard/
 LABEL operators.operatorframework.io.test.mediatype.v1=scorecard+v1
 COPY bundle/manifests /manifests/
 COPY bundle/metadata /metadata/
-COPY bundle/tests/scorecard /tests/scorecard/
diff --git a/testdata/helm/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml b/testdata/helm/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
deleted file mode 100644
index a24f806d..00000000
--- a/testdata/helm/memcached-operator/bundle/manifests/cache.example.com_memcacheds.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-apiVersion: apiextensions.k8s.io/v1
-kind: CustomResourceDefinition
-metadata:
-  creationTimestamp: null
-  name: memcacheds.cache.example.com
-spec:
-  group: cache.example.com
-  names:
-    kind: Memcached
-    listKind: MemcachedList
-    plural: memcacheds
-    singular: memcached
-  scope: Namespaced
-  versions:
-  - name: v1alpha1
-    schema:
-      openAPIV3Schema:
-        description: Memcached is the Schema for the memcacheds API
-        properties:
-          apiVersion:
-            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
-            type: string
-          kind:
-            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
-            type: string
-          metadata:
-            type: object
-          spec:
-            description: Spec defines the desired state of Memcached
-            type: object
-            x-kubernetes-preserve-unknown-fields: true
-          status:
-            description: Status defines the observed state of Memcached
-            type: object
-            x-kubernetes-preserve-unknown-fields: true
-        type: object
-    served: true
-    storage: true
-    subresources:
-      status: {}
-status:
-  acceptedNames:
-    kind: ""
-    plural: ""
-  conditions: null
-  storedVersions: null
diff --git a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml b/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
deleted file mode 100644
index f9e131b0..00000000
--- a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  labels:
-    control-plane: controller-manager
-  name: memcached-operator-controller-manager-metrics-monitor
-spec:
-  endpoints:
-  - path: /metrics
-    port: https
-  selector:
-    matchLabels:
-      control-plane: controller-manager
diff --git a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml b/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml
deleted file mode 100644
index 157a0cef..00000000
--- a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-controller-manager-metrics-service_v1_service.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  creationTimestamp: null
-  labels:
-    control-plane: controller-manager
-  name: memcached-operator-controller-manager-metrics-service
-spec:
-  ports:
-  - name: https
-    port: 8443
-    targetPort: https
-  selector:
-    control-plane: controller-manager
-status:
-  loadBalancer: {}
diff --git a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
deleted file mode 100644
index 42a2ae6a..00000000
--- a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  creationTimestamp: null
-  name: memcached-operator-metrics-reader
-rules:
-- nonResourceURLs:
-  - /metrics
-  verbs:
-  - get
diff --git a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml b/testdata/helm/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
index eba15872..675e2c80 100644
--- a/testdata/helm/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
+++ b/testdata/helm/memcached-operator/bundle/manifests/memcached-operator.clusterserviceversion.yaml
@@ -2,69 +2,13 @@ apiVersion: operators.coreos.com/v1alpha1
 kind: ClusterServiceVersion
 metadata:
   annotations:
-    alm-examples: |-
-      [
-        {
-          "apiVersion": "cache.example.com/v1alpha1",
-          "kind": "Memcached",
-          "metadata": {
-            "name": "memcached-sample"
-          },
-          "spec": {
-            "AntiAffinity": "soft",
-            "affinity": {},
-            "extraContainers": "",
-            "extraVolumes": "",
-            "image": "memcached:1.5.20",
-            "kind": "StatefulSet",
-            "memcached": {
-              "extendedOptions": "modern",
-              "extraArgs": [],
-              "maxItemMemory": 64,
-              "verbosity": "v"
-            },
-            "metrics": {
-              "enabled": false,
-              "image": "quay.io/prometheus/memcached-exporter:v0.6.0",
-              "resources": {},
-              "serviceMonitor": {
-                "enabled": false,
-                "interval": "15s"
-              }
-            },
-            "nodeSelector": {},
-            "pdbMinAvailable": 2,
-            "podAnnotations": {},
-            "replicaCount": 3,
-            "resources": {
-              "requests": {
-                "cpu": "50m",
-                "memory": "64Mi"
-              }
-            },
-            "securityContext": {
-              "enabled": true,
-              "fsGroup": 1001,
-              "runAsUser": 1001
-            },
-            "serviceAnnotations": {},
-            "tolerations": {},
-            "updateStrategy": {
-              "type": "RollingUpdate"
-            }
-          }
-        }
-      ]
+    alm-examples: '[]'
     capabilities: Basic Install
   name: memcached-operator.v0.0.1
   namespace: placeholder
 spec:
   apiservicedefinitions: {}
-  customresourcedefinitions:
-    owned:
-    - kind: Memcached
-      name: memcacheds.cache.example.com
-      version: v1alpha1
+  customresourcedefinitions: {}
   description: Memcached Operator description. TODO.
   displayName: Memcached Operator
   icon:
@@ -72,174 +16,7 @@ spec:
     mediatype: ""
   install:
     spec:
-      clusterPermissions:
-      - rules:
-        - apiGroups:
-          - ""
-          resources:
-          - namespaces
-          verbs:
-          - get
-        - apiGroups:
-          - ""
-          resources:
-          - secrets
-          verbs:
-          - '*'
-        - apiGroups:
-          - ""
-          resources:
-          - events
-          verbs:
-          - create
-        - apiGroups:
-          - cache.example.com
-          resources:
-          - memcacheds
-          - memcacheds/status
-          - memcacheds/finalizers
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - ""
-          resources:
-          - pods
-          - services
-          - services/finalizers
-          - endpoints
-          - persistentvolumeclaims
-          - events
-          - configmaps
-          - secrets
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - apps
-          resources:
-          - deployments
-          - daemonsets
-          - replicasets
-          - statefulsets
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - policy
-          resources:
-          - events
-          - poddisruptionbudgets
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - ""
-          resources:
-          - serviceaccounts
-          - services
-          verbs:
-          - create
-          - delete
-          - get
-          - list
-          - patch
-          - update
-          - watch
-        - apiGroups:
-          - authentication.k8s.io
-          resources:
-          - tokenreviews
-          verbs:
-          - create
-        - apiGroups:
-          - authorization.k8s.io
-          resources:
-          - subjectaccessreviews
-          verbs:
-          - create
-        serviceAccountName: default
-      deployments:
-      - name: memcached-operator-controller-manager
-        spec:
-          replicas: 1
-          selector:
-            matchLabels:
-              control-plane: controller-manager
-          strategy: {}
-          template:
-            metadata:
-              labels:
-                control-plane: controller-manager
-            spec:
-              containers:
-              - args:
-                - --secure-listen-address=0.0.0.0:8443
-                - --upstream=http://127.0.0.1:8080/
-                - --logtostderr=true
-                - --v=10
-                image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
-                name: kube-rbac-proxy
-                ports:
-                - containerPort: 8443
-                  name: https
-                resources: {}
-              - args:
-                - --metrics-addr=127.0.0.1:8080
-                - --enable-leader-election
-                - --leader-election-id=memcached-operator
-                image: quay.io/example/memcached-operator:v0.0.1
-                name: manager
-                resources:
-                  limits:
-                    cpu: 100m
-                    memory: 90Mi
-                  requests:
-                    cpu: 100m
-                    memory: 60Mi
-              terminationGracePeriodSeconds: 10
-      permissions:
-      - rules:
-        - apiGroups:
-          - ""
-          resources:
-          - configmaps
-          verbs:
-          - get
-          - list
-          - watch
-          - create
-          - update
-          - patch
-          - delete
-        - apiGroups:
-          - ""
-          resources:
-          - events
-          verbs:
-          - create
-          - patch
-        serviceAccountName: default
+      deployments: []
     strategy: deployment
   installModes:
   - supported: false
diff --git a/testdata/helm/memcached-operator/bundle/tests/scorecard/config.yaml b/testdata/helm/memcached-operator/bundle/tests/scorecard/config.yaml
deleted file mode 100644
index e39a5d88..00000000
--- a/testdata/helm/memcached-operator/bundle/tests/scorecard/config.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-apiVersion: scorecard.operatorframework.io/v1alpha3
-kind: Configuration
-metadata:
-  name: config
-stages:
-- parallel: true
-  tests:
-  - entrypoint:
-    - scorecard-test
-    - basic-check-spec
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: basic
-      test: basic-check-spec-test
-  - entrypoint:
-    - scorecard-test
-    - olm-bundle-validation
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-bundle-validation-test
-  - entrypoint:
-    - scorecard-test
-    - olm-crds-have-validation
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-crds-have-validation-test
-  - entrypoint:
-    - scorecard-test
-    - olm-crds-have-resources
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-crds-have-resources-test
-  - entrypoint:
-    - scorecard-test
-    - olm-spec-descriptors
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-spec-descriptors-test
-  - entrypoint:
-    - scorecard-test
-    - olm-status-descriptors
-    image: quay.io/operator-framework/scorecard-test:v1.2.0
-    labels:
-      suite: olm
-      test: olm-status-descriptors-test
make: *** [test-sanity] Error 1
jberkhahn commented 4 years ago

Eric helped me troubleshoot a bit (I had a super weird version of kustomize installed), and this is now where I'm failing:

Output

jberkhahn@Purgatory> make test-all
go build -gcflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.2.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.2.0-4-gf20ea9e9' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=f20ea9e9496be3dab34a3652ec7cc7016ef1ccf2' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' -X 'github.com/operator-framework/operator-sdk/internal/version.ImageVersion=v1.2.0' "  -o build ./cmd/{operator-sdk,ansible-operator,helm-operator}
go run ./hack/generate/cncf-maintainers/main.go
go run ./hack/generate/cli-doc/gen-cli-doc.go
go run ./hack/generate/samples/generate_testdata.go
INFO[0000] using the path: (/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata) 
INFO[0000] creating Helm Memcached Sample               
INFO[0000] destroying directory for memcached helm samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0000] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/helm/memcached-operator
INFO[0000] setting domain and GVK                       
INFO[0000] creating the project                         
running: operator-sdk init --plugins helm --domain example.com
INFO[0001] handling work path to get helm chart mock data 
INFO[0001] using the helm chart in: (/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm/testdata/memcached-0.0.1.tgz) 
running: operator-sdk create api --group cache --version v1alpha1 --kind Memcached --helm-chart /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm/testdata/memcached-0.0.1.tgz
INFO[0001] customizing the sample                       
INFO[0001] enabling prometheus metrics                  
INFO[0001] adding customized roles                      
INFO[0001] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
INFO[0001] creating Ansible Memcached Sample            
INFO[0001] destroying directory for memcached Ansible samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0002] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/ansible/memcached-operator
INFO[0002] setting domain and GVK                       
INFO[0002] creating the project                         
running: operator-sdk init --plugins ansible --group cache --version v1alpha1 --kind Memcached --domain example.com --generate-role --generate-playbook
INFO[0002] customizing the sample                       
INFO[0002] adding Ansible task and variable             
INFO[0002] adding molecule test for Ansible task        
INFO[0002] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
INFO[0003] creating Go Memcached Sample with Webhooks   
INFO[0003] starting to generate Go memcached sample with webhooks 
INFO[0003] destroying directory for Memcached with Webhooks Go samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0003] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/go/memcached-operator
INFO[0003] setting domain and GVK                       
INFO[0003] creating the project                         
running: operator-sdk init --repo github.com/example/memcached-operator --domain example.com
running: operator-sdk create api --group cache --version v1alpha1 --kind Memcached --controller true --resource true
INFO[0015] implementing the API                         
INFO[0015] implementing MemcachedStatus                 
INFO[0015] implementing the Controller                  
INFO[0015] scaffolding webhook                          
running: operator-sdk create webhook --group cache --version v1alpha1 --kind Memcached --defaulting --defaulting
INFO[0015] implementing webhooks                        
INFO[0015] uncomment kustomization.yaml to enable webhook and ca injection 
INFO[0015] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
go mod tidy
go fmt ./...
git diff --exit-code # fast-fail if generate or fix produced changes
./hack/check-license.sh
Checking for license header...
./hack/check-error-log-msg-format.sh
Checking format of error and log messages...
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
    [-e pattern] [-f file] [--binary-files=value] [--color=when]
    [--context[=num]] [--directories=action] [--label] [--line-buffered]
    [--null] [pattern] [file ...]
go run ./release/changelog/gen-changelog.go -validate-only
WARN[0000] no entries found                             
go vet ./...
tools/scripts/fetch golangci-lint 1.31.0 && tools/bin/golangci-lint run
git diff --exit-code # diff again to ensure other checks don't change repo
go test -coverprofile=coverage.out -covermode=count -short github.com/operator-framework/operator-sdk/cmd/ansible-operator github.com/operator-framework/operator-sdk/cmd/helm-operator github.com/operator-framework/operator-sdk/cmd/operator-sdk github.com/operator-framework/operator-sdk/hack/generate/cli-doc github.com/operator-framework/operator-sdk/hack/generate/cncf-maintainers github.com/operator-framework/operator-sdk/hack/generate/samples github.com/operator-framework/operator-sdk/hack/generate/samples/internal/ansible github.com/operator-framework/operator-sdk/hack/generate/samples/internal/go github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm github.com/operator-framework/operator-sdk/hack/generate/samples/internal/pkg github.com/operator-framework/operator-sdk/hack/generate/samples/molecule github.com/operator-framework/operator-sdk/images/custom-scorecard-tests github.com/operator-framework/operator-sdk/images/scorecard-test github.com/operator-framework/operator-sdk/images/scorecard-test-kuttl github.com/operator-framework/operator-sdk/internal/annotations/metrics github.com/operator-framework/operator-sdk/internal/annotations/scorecard github.com/operator-framework/operator-sdk/internal/ansible/controller github.com/operator-framework/operator-sdk/internal/ansible/controller/status github.com/operator-framework/operator-sdk/internal/ansible/events github.com/operator-framework/operator-sdk/internal/ansible/flags github.com/operator-framework/operator-sdk/internal/ansible/metrics github.com/operator-framework/operator-sdk/internal/ansible/paramconv github.com/operator-framework/operator-sdk/internal/ansible/predicate github.com/operator-framework/operator-sdk/internal/ansible/proxy github.com/operator-framework/operator-sdk/internal/ansible/proxy/controllermap github.com/operator-framework/operator-sdk/internal/ansible/proxy/kubeconfig github.com/operator-framework/operator-sdk/internal/ansible/proxy/requestfactory github.com/operator-framework/operator-sdk/internal/ansible/runner github.com/operator-framework/operator-sdk/internal/ansible/runner/eventapi github.com/operator-framework/operator-sdk/internal/ansible/runner/fake github.com/operator-framework/operator-sdk/internal/ansible/runner/internal/inputdir github.com/operator-framework/operator-sdk/internal/ansible/watches github.com/operator-framework/operator-sdk/internal/bindata/olm github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/run github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/version github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/version github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate/internal github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cleanup github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cli github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/completion github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/bundle github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/internal github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/kustomize 
github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/packagemanifests github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/olm github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/bundle github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/packagemanifests github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/scorecard github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/version github.com/operator-framework/operator-sdk/internal/flags github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases/definitions github.com/operator-framework/operator-sdk/internal/generate/collector github.com/operator-framework/operator-sdk/internal/generate/internal github.com/operator-framework/operator-sdk/internal/generate/packagemanifest github.com/operator-framework/operator-sdk/internal/generate/packagemanifest/bases github.com/operator-framework/operator-sdk/internal/helm/client github.com/operator-framework/operator-sdk/internal/helm/controller github.com/operator-framework/operator-sdk/internal/helm/flags github.com/operator-framework/operator-sdk/internal/helm/internal/diff github.com/operator-framework/operator-sdk/internal/helm/internal/types github.com/operator-framework/operator-sdk/internal/helm/release github.com/operator-framework/operator-sdk/internal/helm/watches github.com/operator-framework/operator-sdk/internal/kubebuilder/cmdutil github.com/operator-framework/operator-sdk/internal/kubebuilder/filesystem github.com/operator-framework/operator-sdk/internal/kubebuilder/machinery github.com/operator-framework/operator-sdk/internal/markers github.com/operator-framework/operator-sdk/internal/olm/client github.com/operator-framework/operator-sdk/internal/olm/installer github.com/operator-framework/operator-sdk/internal/olm/operator github.com/operator-framework/operator-sdk/internal/olm/operator/bundle github.com/operator-framework/operator-sdk/internal/olm/operator/packagemanifests github.com/operator-framework/operator-sdk/internal/olm/operator/registry github.com/operator-framework/operator-sdk/internal/olm/operator/registry/configmap github.com/operator-framework/operator-sdk/internal/olm/operator/registry/index github.com/operator-framework/operator-sdk/internal/plugins github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1 github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/constants github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/crd github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/kdefault github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/manager github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/prometheus github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/rbac 
github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/samples github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing/pullpolicy github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mdefault github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mkind github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/playbooks github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/roles github.com/operator-framework/operator-sdk/internal/plugins/envtest github.com/operator-framework/operator-sdk/internal/plugins/golang/v2 github.com/operator-framework/operator-sdk/internal/plugins/helm/v1 github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/chartutil github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/crd github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/kdefault github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/manager github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/prometheus github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/rbac github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/samples github.com/operator-framework/operator-sdk/internal/plugins/manifests github.com/operator-framework/operator-sdk/internal/plugins/scorecard github.com/operator-framework/operator-sdk/internal/plugins/util/kustomize github.com/operator-framework/operator-sdk/internal/registry github.com/operator-framework/operator-sdk/internal/scorecard github.com/operator-framework/operator-sdk/internal/scorecard/tests github.com/operator-framework/operator-sdk/internal/testutils github.com/operator-framework/operator-sdk/internal/util/k8sutil github.com/operator-framework/operator-sdk/internal/util/projutil github.com/operator-framework/operator-sdk/internal/version github.com/operator-framework/operator-sdk/release/changelog github.com/operator-framework/operator-sdk/release/changelog/internal
?       github.com/operator-framework/operator-sdk/cmd/ansible-operator [no test files]
?       github.com/operator-framework/operator-sdk/cmd/helm-operator    [no test files]
?       github.com/operator-framework/operator-sdk/cmd/operator-sdk [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/cli-doc    [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/cncf-maintainers   [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples    [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/ansible   [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/go    [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm  [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/pkg   [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/molecule   [no test files]
?       github.com/operator-framework/operator-sdk/images/custom-scorecard-tests    [no test files]
?       github.com/operator-framework/operator-sdk/images/scorecard-test    [no test files]
?       github.com/operator-framework/operator-sdk/images/scorecard-test-kuttl  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/annotations/metrics 0.695s  coverage: 0.0% of statements [no tests to run]
?       github.com/operator-framework/operator-sdk/internal/annotations/scorecard   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/controller  1.122s  coverage: 58.4% of statements
ok      github.com/operator-framework/operator-sdk/internal/ansible/controller/status   0.852s  coverage: 22.2% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/events  [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/flags   [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/metrics [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/paramconv   0.205s  coverage: 85.7% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/predicate   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/proxy   0.938s  coverage: 0.0% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/controllermap [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/kubeconfig    [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/requestfactory    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/runner  1.039s  coverage: 44.4% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/runner/eventapi [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/runner/fake [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/runner/internal/inputdir    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/watches 0.833s  coverage: 92.0% of statements
?       github.com/operator-framework/operator-sdk/internal/bindata/olm [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/run    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/version    0.840s  coverage: 85.7% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/version   1.271s  coverage: 85.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle 1.034s  coverage: 100.0% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate    1.739s  coverage: 24.1% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate/internal   1.468s  coverage: 83.3% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cleanup    [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cli    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/completion 1.106s  coverage: 50.0% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate   [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/bundle    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/internal  2.407s  coverage: 19.8% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/kustomize [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/packagemanifests  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/olm    1.904s  coverage: 69.0% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run    1.380s  coverage: 100.0% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/bundle [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/packagemanifests   2.198s  coverage: 47.1% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/scorecard  2.864s  coverage: 19.2% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/version    3.007s  coverage: 75.0% of statements
?       github.com/operator-framework/operator-sdk/internal/flags   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion  16.595s coverage: 86.6% of statements
ok      github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases    4.440s  coverage: 50.6% of statements
ok      github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases/definitions    12.472s coverage: 82.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/generate/collector  1.975s  coverage: 28.7% of statements
?       github.com/operator-framework/operator-sdk/internal/generate/internal   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/generate/packagemanifest    2.115s  coverage: 83.6% of statements
?       github.com/operator-framework/operator-sdk/internal/generate/packagemanifest/bases  [no test files]
?       github.com/operator-framework/operator-sdk/internal/helm/client [no test files]
ok      github.com/operator-framework/operator-sdk/internal/helm/controller 1.561s  coverage: 4.1% of statements
?       github.com/operator-framework/operator-sdk/internal/helm/flags  [no test files]
?       github.com/operator-framework/operator-sdk/internal/helm/internal/diff  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/helm/internal/types 1.246s  coverage: 73.3% of statements
ok      github.com/operator-framework/operator-sdk/internal/helm/release    2.119s  coverage: 12.8% of statements
ok      github.com/operator-framework/operator-sdk/internal/helm/watches    1.409s  coverage: 84.4% of statements
?       github.com/operator-framework/operator-sdk/internal/kubebuilder/cmdutil [no test files]
ok      github.com/operator-framework/operator-sdk/internal/kubebuilder/filesystem  0.781s  coverage: 72.1% of statements
ok      github.com/operator-framework/operator-sdk/internal/kubebuilder/machinery   0.939s  coverage: 93.6% of statements
?       github.com/operator-framework/operator-sdk/internal/markers [no test files]
ok      github.com/operator-framework/operator-sdk/internal/olm/client  2.584s  coverage: 17.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/installer   1.350s  coverage: 2.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/operator    1.637s  coverage: 2.9% of statements
?       github.com/operator-framework/operator-sdk/internal/olm/operator/bundle [no test files]
?       github.com/operator-framework/operator-sdk/internal/olm/operator/packagemanifests   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/olm/operator/registry   1.430s  coverage: 50.6% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/operator/registry/configmap 1.116s  coverage: 45.9% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/operator/registry/index 2.012s  coverage: 70.3% of statements
?       github.com/operator-framework/operator-sdk/internal/plugins [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/constants    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/crd  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/kdefault [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/manager  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/prometheus   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/rbac [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/samples  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing/pullpolicy   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mdefault   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mkind  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/playbooks   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/roles   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/envtest [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/golang/v2   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1 [no test files]
ok      github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/chartutil   2.793s  coverage: 85.5% of statements
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/crd [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/kdefault    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/manager [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/prometheus  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/rbac    3.889s  coverage: 47.2% of statements
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/samples [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/manifests   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/scorecard   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/util/kustomize  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/registry    3.057s  coverage: 16.8% of statements
ok      github.com/operator-framework/operator-sdk/internal/scorecard   1.482s  coverage: 43.9% of statements
ok      github.com/operator-framework/operator-sdk/internal/scorecard/tests 0.505s  coverage: 80.6% of statements
?       github.com/operator-framework/operator-sdk/internal/testutils   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/util/k8sutil    0.385s  coverage: 27.4% of statements
ok      github.com/operator-framework/operator-sdk/internal/util/projutil   0.660s  coverage: 45.0% of statements
?       github.com/operator-framework/operator-sdk/internal/version [no test files]
?       github.com/operator-framework/operator-sdk/release/changelog    [no test files]
ok      github.com/operator-framework/operator-sdk/release/changelog/internal   0.722s  coverage: 90.5% of statements
git submodule update --init --recursive website/
./hack/check-links.sh
Building the website
sdk-html
Building sites … 
  Replace Autoprefixer browsers option to Browserslist config.
  Use browserslist key in package.json or .browserslistrc file.

  Using browsers option can cause errors. Browserslist config 
  can be used for Babel, Autoprefixer, postcss-normalize and other tools.

  If you really need to use option, rename it to overrideBrowserslist.

  Learn more at:
  https://github.com/browserslist/browserslist#readme
  https://twitter.com/browserslist

WARN 2020/11/11 01:00:00 Page.URL is deprecated and will be removed in a future release. Use .Permalink or .RelPermalink. If what you want is the front matter URL value, use .Params.url

                   | EN   
-------------------+------
  Pages            | 125  
  Paginator pages  |   0  
  Non-page files   |  21  
  Static files     |  43  
  Processed images |   0  
  Aliases          |   0  
  Sitemaps         |   1  
  Cleaned          |   0  

Total in 8753 ms
Checking links
Running ["ImageCheck", "LinkCheck", "ScriptCheck"] on ["/target"] on *.html... 

Checking 412 external links...
Ran on 106 files!

HTML-Proofer finished successfully.
sdk-html
exiting @ Tue Nov 10 16:59:51 PST 2020
tools/scripts/fetch kind 0.9.0
kind missing or not version '0.9.0', downloading...
curl: (22) The requested URL returned error: 429 too many requests
make: *** [test-e2e-setup] Error 22

I have kind 0.9.0 installed, though. I even stuck it in go/bin, per Eric's comment that kubebuilder may not be smart enough to look for binaries installed locally instead of in the go tree.

joelanford commented 4 years ago

@jberkhahn the ./tools/scripts/fetch script downloads binaries into ./tools/bin and expects to find them there, to ensure everything is self-contained. Copy kind to ./tools/bin and see if that gets you past this.
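
Something like this should do it (a rough sketch, assuming kind 0.9.0 is already somewhere on your PATH and you're at the root of your operator-sdk checkout):

# Copy the locally installed kind binary into the repo-local tool
# directory that tools/scripts/fetch checks before downloading.
mkdir -p ./tools/bin
cp "$(command -v kind)" ./tools/bin/kind
./tools/bin/kind version   # should report 0.9.0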

jberkhahn commented 4 years ago

That fixed it.

The next speed bump I ran into was not having something called 'molecule' installed. I assume that's this, so I installed whatever version 'pip install molecule' grabbed. It would be nice for that to be written down somewhere, or for the script to just install it.
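
For reference, roughly what I ran (nothing pins a version, so pip just grabs the latest):

# Install molecule into the active Python environment; the ansible
# e2e tests apparently expect a molecule binary on PATH.
pip install molecule
molecule --version   # sanity check that it now resolves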

I'm now seeing an error similar to Camila's, i.e. timeouts in the Ansible e2e tests, although it doesn't look deterministic: the failures looked different each of the three times I ran it. Output from the last run:

Output

jberkhahn@Purgatory> make test-all
go build -gcflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.2.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.2.0-4-gf20ea9e9' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=f20ea9e9496be3dab34a3652ec7cc7016ef1ccf2' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' -X 'github.com/operator-framework/operator-sdk/internal/version.ImageVersion=v1.2.0' "  -o build ./cmd/{operator-sdk,ansible-operator,helm-operator}
go run ./hack/generate/cncf-maintainers/main.go
go run ./hack/generate/cli-doc/gen-cli-doc.go
go run ./hack/generate/samples/generate_testdata.go
INFO[0000] using the path: (/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata) 
INFO[0000] creating Helm Memcached Sample               
INFO[0000] destroying directory for memcached helm samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0000] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/helm/memcached-operator
INFO[0000] setting domain and GVK                       
INFO[0000] creating the project                         
running: operator-sdk init --plugins helm --domain example.com
INFO[0001] handling work path to get helm chart mock data 
INFO[0001] using the helm chart in: (/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm/testdata/memcached-0.0.1.tgz) 
running: operator-sdk create api --group cache --version v1alpha1 --kind Memcached --helm-chart /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm/testdata/memcached-0.0.1.tgz
INFO[0001] customizing the sample                       
INFO[0001] enabling prometheus metrics                  
INFO[0001] adding customized roles                      
INFO[0001] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
INFO[0002] creating Ansible Memcached Sample            
INFO[0002] destroying directory for memcached Ansible samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0002] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/ansible/memcached-operator
INFO[0002] setting domain and GVK                       
INFO[0002] creating the project                         
running: operator-sdk init --plugins ansible --group cache --version v1alpha1 --kind Memcached --domain example.com --generate-role --generate-playbook
INFO[0002] customizing the sample                       
INFO[0002] adding Ansible task and variable             
INFO[0002] adding molecule test for Ansible task        
INFO[0002] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
INFO[0003] creating Go Memcached Sample with Webhooks   
INFO[0003] starting to generate Go memcached sample with webhooks 
INFO[0003] destroying directory for Memcached with Webhooks Go samples 
running: docker rmi -f quay.io/example/memcached-operator:v0.0.1
INFO[0003] creating directory                           
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/testdata/go/memcached-operator
INFO[0003] setting domain and GVK                       
INFO[0003] creating the project                         
running: operator-sdk init --repo github.com/example/memcached-operator --domain example.com
running: operator-sdk create api --group cache --version v1alpha1 --kind Memcached --controller true --resource true
INFO[0017] implementing the API                         
INFO[0017] implementing MemcachedStatus                 
INFO[0017] implementing the Controller                  
INFO[0017] scaffolding webhook                          
running: operator-sdk create webhook --group cache --version v1alpha1 --kind Memcached --defaulting --defaulting
INFO[0017] implementing webhooks                        
INFO[0017] uncomment kustomization.yaml to enable webhook and ca injection 
INFO[0017] integrating project with OLM                 
running: make bundle IMG=quay.io/example/memcached-operator:v0.0.1
running: make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
go mod tidy
go fmt ./...
git diff --exit-code # fast-fail if generate or fix produced changes
./hack/check-license.sh
Checking for license header...
./hack/check-error-log-msg-format.sh
Checking format of error and log messages...
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
    [-e pattern] [-f file] [--binary-files=value] [--color=when]
    [--context[=num]] [--directories=action] [--label] [--line-buffered]
    [--null] [pattern] [file ...]
go run ./release/changelog/gen-changelog.go -validate-only
WARN[0000] no entries found                             
go vet ./...
tools/scripts/fetch golangci-lint 1.31.0 && tools/bin/golangci-lint run
git diff --exit-code # diff again to ensure other checks don't change repo
go test -coverprofile=coverage.out -covermode=count -short github.com/operator-framework/operator-sdk/cmd/ansible-operator github.com/operator-framework/operator-sdk/cmd/helm-operator github.com/operator-framework/operator-sdk/cmd/operator-sdk github.com/operator-framework/operator-sdk/hack/generate/cli-doc github.com/operator-framework/operator-sdk/hack/generate/cncf-maintainers github.com/operator-framework/operator-sdk/hack/generate/samples github.com/operator-framework/operator-sdk/hack/generate/samples/internal/ansible github.com/operator-framework/operator-sdk/hack/generate/samples/internal/go github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm github.com/operator-framework/operator-sdk/hack/generate/samples/internal/pkg github.com/operator-framework/operator-sdk/hack/generate/samples/molecule github.com/operator-framework/operator-sdk/images/custom-scorecard-tests github.com/operator-framework/operator-sdk/images/scorecard-test github.com/operator-framework/operator-sdk/images/scorecard-test-kuttl github.com/operator-framework/operator-sdk/internal/annotations/metrics github.com/operator-framework/operator-sdk/internal/annotations/scorecard github.com/operator-framework/operator-sdk/internal/ansible/controller github.com/operator-framework/operator-sdk/internal/ansible/controller/status github.com/operator-framework/operator-sdk/internal/ansible/events github.com/operator-framework/operator-sdk/internal/ansible/flags github.com/operator-framework/operator-sdk/internal/ansible/metrics github.com/operator-framework/operator-sdk/internal/ansible/paramconv github.com/operator-framework/operator-sdk/internal/ansible/predicate github.com/operator-framework/operator-sdk/internal/ansible/proxy github.com/operator-framework/operator-sdk/internal/ansible/proxy/controllermap github.com/operator-framework/operator-sdk/internal/ansible/proxy/kubeconfig github.com/operator-framework/operator-sdk/internal/ansible/proxy/requestfactory github.com/operator-framework/operator-sdk/internal/ansible/runner github.com/operator-framework/operator-sdk/internal/ansible/runner/eventapi github.com/operator-framework/operator-sdk/internal/ansible/runner/fake github.com/operator-framework/operator-sdk/internal/ansible/runner/internal/inputdir github.com/operator-framework/operator-sdk/internal/ansible/watches github.com/operator-framework/operator-sdk/internal/bindata/olm github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/run github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/version github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/version github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate/internal github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cleanup github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cli github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/completion github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/bundle github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/internal github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/kustomize 
github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/packagemanifests github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/olm github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/bundle github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/packagemanifests github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/scorecard github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/version github.com/operator-framework/operator-sdk/internal/flags github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases/definitions github.com/operator-framework/operator-sdk/internal/generate/collector github.com/operator-framework/operator-sdk/internal/generate/internal github.com/operator-framework/operator-sdk/internal/generate/packagemanifest github.com/operator-framework/operator-sdk/internal/generate/packagemanifest/bases github.com/operator-framework/operator-sdk/internal/helm/client github.com/operator-framework/operator-sdk/internal/helm/controller github.com/operator-framework/operator-sdk/internal/helm/flags github.com/operator-framework/operator-sdk/internal/helm/internal/diff github.com/operator-framework/operator-sdk/internal/helm/internal/types github.com/operator-framework/operator-sdk/internal/helm/release github.com/operator-framework/operator-sdk/internal/helm/watches github.com/operator-framework/operator-sdk/internal/kubebuilder/cmdutil github.com/operator-framework/operator-sdk/internal/kubebuilder/filesystem github.com/operator-framework/operator-sdk/internal/kubebuilder/machinery github.com/operator-framework/operator-sdk/internal/markers github.com/operator-framework/operator-sdk/internal/olm/client github.com/operator-framework/operator-sdk/internal/olm/installer github.com/operator-framework/operator-sdk/internal/olm/operator github.com/operator-framework/operator-sdk/internal/olm/operator/bundle github.com/operator-framework/operator-sdk/internal/olm/operator/packagemanifests github.com/operator-framework/operator-sdk/internal/olm/operator/registry github.com/operator-framework/operator-sdk/internal/olm/operator/registry/configmap github.com/operator-framework/operator-sdk/internal/olm/operator/registry/index github.com/operator-framework/operator-sdk/internal/plugins github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1 github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/constants github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/crd github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/kdefault github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/manager github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/prometheus github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/rbac 
github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/samples github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing/pullpolicy github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mdefault github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mkind github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/playbooks github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/roles github.com/operator-framework/operator-sdk/internal/plugins/envtest github.com/operator-framework/operator-sdk/internal/plugins/golang/v2 github.com/operator-framework/operator-sdk/internal/plugins/helm/v1 github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/chartutil github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/crd github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/kdefault github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/manager github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/prometheus github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/rbac github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/samples github.com/operator-framework/operator-sdk/internal/plugins/manifests github.com/operator-framework/operator-sdk/internal/plugins/scorecard github.com/operator-framework/operator-sdk/internal/plugins/util/kustomize github.com/operator-framework/operator-sdk/internal/registry github.com/operator-framework/operator-sdk/internal/scorecard github.com/operator-framework/operator-sdk/internal/scorecard/tests github.com/operator-framework/operator-sdk/internal/testutils github.com/operator-framework/operator-sdk/internal/util/k8sutil github.com/operator-framework/operator-sdk/internal/util/projutil github.com/operator-framework/operator-sdk/internal/version github.com/operator-framework/operator-sdk/release/changelog github.com/operator-framework/operator-sdk/release/changelog/internal
?       github.com/operator-framework/operator-sdk/cmd/ansible-operator [no test files]
?       github.com/operator-framework/operator-sdk/cmd/helm-operator    [no test files]
?       github.com/operator-framework/operator-sdk/cmd/operator-sdk [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/cli-doc    [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/cncf-maintainers   [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples    [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/ansible   [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/go    [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/helm  [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/internal/pkg   [no test files]
?       github.com/operator-framework/operator-sdk/hack/generate/samples/molecule   [no test files]
?       github.com/operator-framework/operator-sdk/images/custom-scorecard-tests    [no test files]
?       github.com/operator-framework/operator-sdk/images/scorecard-test    [no test files]
?       github.com/operator-framework/operator-sdk/images/scorecard-test-kuttl  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/annotations/metrics 0.219s  coverage: 0.0% of statements [no tests to run]
?       github.com/operator-framework/operator-sdk/internal/annotations/scorecard   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/controller  0.940s  coverage: 58.4% of statements
ok      github.com/operator-framework/operator-sdk/internal/ansible/controller/status   0.425s  coverage: 22.2% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/events  [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/flags   [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/metrics [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/paramconv   0.231s  coverage: 85.7% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/predicate   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/proxy   1.027s  coverage: 0.0% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/controllermap [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/kubeconfig    [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/requestfactory    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/runner  0.775s  coverage: 44.4% of statements
?       github.com/operator-framework/operator-sdk/internal/ansible/runner/eventapi [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/runner/fake [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/runner/internal/inputdir    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/ansible/watches 0.521s  coverage: 92.0% of statements
?       github.com/operator-framework/operator-sdk/internal/bindata/olm [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/run    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/ansible-operator/version    0.277s  coverage: 85.7% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/version   0.285s  coverage: 85.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle 1.343s  coverage: 100.0% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate    2.014s  coverage: 24.1% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/bundle/validate/internal   1.440s  coverage: 83.3% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cleanup    [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/cli    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/completion 0.365s  coverage: 50.0% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate   [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/bundle    [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/internal  2.038s  coverage: 19.8% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/kustomize [no test files]
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/generate/packagemanifests  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/olm    1.672s  coverage: 69.0% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run    2.373s  coverage: 100.0% of statements
?       github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/bundle [no test files]
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/run/packagemanifests   1.166s  coverage: 47.1% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/scorecard  1.609s  coverage: 19.2% of statements
ok      github.com/operator-framework/operator-sdk/internal/cmd/operator-sdk/version    0.727s  coverage: 75.0% of statements
?       github.com/operator-framework/operator-sdk/internal/flags   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion  14.784s coverage: 86.6% of statements
ok      github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases    1.258s  coverage: 50.6% of statements
ok      github.com/operator-framework/operator-sdk/internal/generate/clusterserviceversion/bases/definitions    7.777s  coverage: 82.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/generate/collector  1.816s  coverage: 28.7% of statements
?       github.com/operator-framework/operator-sdk/internal/generate/internal   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/generate/packagemanifest    1.713s  coverage: 83.6% of statements
?       github.com/operator-framework/operator-sdk/internal/generate/packagemanifest/bases  [no test files]
?       github.com/operator-framework/operator-sdk/internal/helm/client [no test files]
ok      github.com/operator-framework/operator-sdk/internal/helm/controller 3.104s  coverage: 4.1% of statements
?       github.com/operator-framework/operator-sdk/internal/helm/flags  [no test files]
?       github.com/operator-framework/operator-sdk/internal/helm/internal/diff  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/helm/internal/types 3.219s  coverage: 73.3% of statements
ok      github.com/operator-framework/operator-sdk/internal/helm/release    1.547s  coverage: 12.8% of statements
ok      github.com/operator-framework/operator-sdk/internal/helm/watches    1.218s  coverage: 84.4% of statements
?       github.com/operator-framework/operator-sdk/internal/kubebuilder/cmdutil [no test files]
ok      github.com/operator-framework/operator-sdk/internal/kubebuilder/filesystem  1.873s  coverage: 72.1% of statements
ok      github.com/operator-framework/operator-sdk/internal/kubebuilder/machinery   1.391s  coverage: 93.6% of statements
?       github.com/operator-framework/operator-sdk/internal/markers [no test files]
ok      github.com/operator-framework/operator-sdk/internal/olm/client  1.612s  coverage: 17.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/installer   2.525s  coverage: 2.7% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/operator    1.598s  coverage: 2.9% of statements
?       github.com/operator-framework/operator-sdk/internal/olm/operator/bundle [no test files]
?       github.com/operator-framework/operator-sdk/internal/olm/operator/packagemanifests   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/olm/operator/registry   2.300s  coverage: 50.6% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/operator/registry/configmap 2.021s  coverage: 45.9% of statements
ok      github.com/operator-framework/operator-sdk/internal/olm/operator/registry/index 2.336s  coverage: 70.3% of statements
?       github.com/operator-framework/operator-sdk/internal/plugins [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/constants    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/crd  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/kdefault [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/manager  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/prometheus   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/rbac [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/samples  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/config/testing/pullpolicy   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mdefault   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/molecule/mkind  [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/playbooks   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/ansible/v1/scaffolds/internal/templates/roles   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/envtest [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/golang/v2   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1 [no test files]
ok      github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/chartutil   1.498s  coverage: 85.5% of statements
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/crd [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/kdefault    [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/manager [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/prometheus  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/rbac    1.196s  coverage: 47.2% of statements
?       github.com/operator-framework/operator-sdk/internal/plugins/helm/v1/scaffolds/internal/templates/config/samples [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/manifests   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/scorecard   [no test files]
?       github.com/operator-framework/operator-sdk/internal/plugins/util/kustomize  [no test files]
ok      github.com/operator-framework/operator-sdk/internal/registry    1.126s  coverage: 16.8% of statements
ok      github.com/operator-framework/operator-sdk/internal/scorecard   1.073s  coverage: 43.9% of statements
ok      github.com/operator-framework/operator-sdk/internal/scorecard/tests 1.073s  coverage: 80.6% of statements
?       github.com/operator-framework/operator-sdk/internal/testutils   [no test files]
ok      github.com/operator-framework/operator-sdk/internal/util/k8sutil    0.908s  coverage: 27.4% of statements
ok      github.com/operator-framework/operator-sdk/internal/util/projutil   0.952s  coverage: 45.0% of statements
?       github.com/operator-framework/operator-sdk/internal/version [no test files]
?       github.com/operator-framework/operator-sdk/release/changelog    [no test files]
ok      github.com/operator-framework/operator-sdk/release/changelog/internal   1.306s  coverage: 90.5% of statements
git submodule update --init --recursive website/
./hack/check-links.sh
Building the website
sdk-html
Building sites … 
  Replace Autoprefixer browsers option to Browserslist config.
  Use browserslist key in package.json or .browserslistrc file.

  Using browsers option can cause errors. Browserslist config 
  can be used for Babel, Autoprefixer, postcss-normalize and other tools.

  If you really need to use option, rename it to overrideBrowserslist.

  Learn more at:
  https://github.com/browserslist/browserslist#readme
  https://twitter.com/browserslist

WARN 2020/11/11 03:11:18 Page.URL is deprecated and will be removed in a future release. Use .Permalink or .RelPermalink. If what you want is the front matter URL value, use .Params.url

                   | EN   
-------------------+------
  Pages            | 125  
  Paginator pages  |   0  
  Non-page files   |  21  
  Static files     |  43  
  Processed images |   0  
  Aliases          |   0  
  Sitemaps         |   1  
  Cleaned          |   0  

Total in 10278 ms
Checking links
Running ["ImageCheck", "LinkCheck", "ScriptCheck"] on ["/target"] on *.html... 

Checking 412 external links...
Ran on 106 files!

HTML-Proofer finished successfully.
sdk-html
exiting @ Tue Nov 10 19:11:07 PST 2020
tools/scripts/fetch kind 0.9.0
tools/scripts/fetch envtest 0.6.3
tools/scripts/fetch kubectl 1.18.8 # Install kubectl AFTER envtest because envtest includes its own kubectl binary
kubectl missing or not version '1.18.8', downloading...
[[ "`tools/bin/kind get clusters`" =~ "operator-sdk-e2e" ]] || tools/bin/kind create cluster --image="kindest/node:v1.18.8" --name operator-sdk-e2e
go build -gcflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -o build/_image/scorecard-test ./images/scorecard-test
mkdir -p ./images/scorecard-test/bin && mv build/_image/scorecard-test ./images/scorecard-test/bin
docker build -t quay.io/operator-framework/scorecard-test:dev -f ./images/scorecard-test/Dockerfile ./images/scorecard-test
Sending build context to Docker daemon  45.92MB
Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
 ---> d17cc1f9d041
Step 2/8 : ENV HOME=/opt/scorecard-test     USER_NAME=scorecard-test     USER_UID=1001
 ---> Using cache
 ---> 2a638249b1bb
Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd
 ---> Using cache
 ---> 85cf2d00bfd2
Step 4/8 : WORKDIR ${HOME}
 ---> Using cache
 ---> f0d134ee942a
Step 5/8 : ARG BIN=bin/scorecard-test
 ---> Using cache
 ---> 1abbf3e6d4ee
Step 6/8 : COPY $BIN /usr/local/bin/scorecard-test
 ---> Using cache
 ---> 6b2c16c87b0d
Step 7/8 : ENTRYPOINT ["/usr/local/bin/scorecard-test"]
 ---> Using cache
 ---> c0a4df14efff
Step 8/8 : USER ${USER_UID}
 ---> Using cache
 ---> 943de07a0f93
Successfully built 943de07a0f93
Successfully tagged quay.io/operator-framework/scorecard-test:dev
rm -rf build/_image
go build -gcflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -o build/_image/custom-scorecard-tests ./images/custom-scorecard-tests
mkdir -p ./images/custom-scorecard-tests/bin && mv build/_image/custom-scorecard-tests ./images/custom-scorecard-tests/bin
docker build -t quay.io/operator-framework/custom-scorecard-tests:dev -f ./images/custom-scorecard-tests/Dockerfile ./images/custom-scorecard-tests
Sending build context to Docker daemon  26.64MB
Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
 ---> d17cc1f9d041
Step 2/8 : ENV HOME=/opt/custom-scorecard-tests     USER_NAME=custom-scorecard-tests     USER_UID=1001
 ---> Using cache
 ---> aab777c5a47c
Step 3/8 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd
 ---> Using cache
 ---> d19045d5d60f
Step 4/8 : WORKDIR ${HOME}
 ---> Using cache
 ---> 39be6ff4d841
Step 5/8 : ARG BIN=bin/custom-scorecard-tests
 ---> Using cache
 ---> 405ecee7130d
Step 6/8 : COPY $BIN /usr/local/bin/custom-scorecard-tests
 ---> Using cache
 ---> cae2927d7ea7
Step 7/8 : ENTRYPOINT ["/usr/local/bin/custom-scorecard-tests"]
 ---> Using cache
 ---> 758ee03e6a04
Step 8/8 : USER ${USER_UID}
 ---> Using cache
 ---> eeae101fffc6
Successfully built eeae101fffc6
Successfully tagged quay.io/operator-framework/custom-scorecard-tests:dev
rm -rf build/_image
go test ./test/e2e-go -v -ginkgo.v
=== RUN   TestE2EGo
Running Suite: E2EGo Suite
==========================
Random Seed: 1605064322
Will run 4 of 4 specs

STEP: creating a new test context
STEP: creating a new directory
preparing testing directory: /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e-wtao
STEP: fetching the current-context
running: kubectl config current-context
STEP: preparing the prerequisites on cluster
STEP: checking API resources applied on Cluster
running: kubectl api-resources
STEP: installing OLM
running: operator-sdk olm install --version 0.15.1 --timeout 4m
STEP: initializing a project
running: operator-sdk init --project-version 3-alpha --repo github.com/example/e2e-wtao --domain example.comwtao --fetch-deps=false
STEP: by adding scorecard custom patch file
STEP: using dev image for scorecard-test
STEP: creating an API definition
running: operator-sdk create api --group barwtao --version v1alpha1 --kind Foowtao --namespaced --resource --controller --make=false
STEP: implementing the API
STEP: enabling Prometheus via the kustomization.yaml
STEP: turning off interactive prompts for all generation tasks.
STEP: checking the kustomize setup
running: make kustomize
STEP: building the project image
running: make docker-build IMG=quay.io/example/e2e-wtao:v0.0.1
STEP: loading the required images into Kind cluster
running: kind load docker-image quay.io/example/e2e-wtao:v0.0.1 --name operator-sdk-e2e
running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/scorecard-test:dev
running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/custom-scorecard-tests:dev
STEP: generating the operator bundle
running: make bundle IMG=quay.io/example/e2e-wtao:v0.0.1
Running Go projects built with operator-sdk 
  should run correctly locally
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_local_test.go:39
STEP: installing CRD's
running: make install
STEP: running the project
STEP: killing the project
STEP: uninstalling CRD's
running: make uninstall
•
------------------------------
Testing Go Projects with Scorecard with operator-sdk 
  should work successfully with scorecard
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_scorecard_test.go:37
STEP: running basic scorecard tests
running: operator-sdk scorecard bundle --selector=suite=basic --output=json --wait-time=60s
STEP: running custom scorecard tests
running: operator-sdk scorecard bundle --selector=suite=custom --output=json --wait-time=60s
STEP: running olm scorecard tests
running: operator-sdk scorecard bundle --selector=suite=olm --output=json --wait-time=60s

• [SLOW TEST:11.756 seconds]
Testing Go Projects with Scorecard
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_scorecard_test.go:27
  with operator-sdk
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_scorecard_test.go:28
    should work successfully with scorecard
    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_scorecard_test.go:37
------------------------------
operator-sdk built with operator-sdk 
  should run correctly in a cluster
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_cluster_test.go:73
STEP: enabling Prometheus via the kustomization.yaml
STEP: deploying project on the cluster
running: make deploy IMG=quay.io/example/e2e-wtao:v0.0.1
STEP: checking if the Operator project Pod is running
STEP: getting the controller-manager pod name
running: kubectl -n e2e-wtao-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
STEP: ensuring the created controller-manager Pod
STEP: checking the controller-manager Pod is running
running: kubectl -n e2e-wtao-system get pods e2e-wtao-controller-manager-5d6b58ff95-s24vr -o jsonpath={.status.phase}
STEP: getting the controller-manager pod name
running: kubectl -n e2e-wtao-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
STEP: ensuring the created controller-manager Pod
STEP: checking the controller-manager Pod is running
running: kubectl -n e2e-wtao-system get pods e2e-wtao-controller-manager-5d6b58ff95-s24vr -o jsonpath={.status.phase}
STEP: getting the controller-manager pod name
running: kubectl -n e2e-wtao-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
STEP: ensuring the created controller-manager Pod
STEP: checking the controller-manager Pod is running
running: kubectl -n e2e-wtao-system get pods e2e-wtao-controller-manager-5d6b58ff95-s24vr -o jsonpath={.status.phase}
STEP: ensuring the created ServiceMonitor for the manager
running: kubectl -n e2e-wtao-system get ServiceMonitor e2e-wtao-controller-manager-metrics-monitor
STEP: ensuring the created metrics Service for the manager
running: kubectl -n e2e-wtao-system get Service e2e-wtao-controller-manager-metrics-service
STEP: creating an instance of CR
running: kubectl -n e2e-wtao-system apply -f config/samples/barwtao_v1alpha1_foowtao.yaml
STEP: ensuring the created resource object gets reconciled in controller
running: kubectl -n e2e-wtao-system logs e2e-wtao-controller-manager-5d6b58ff95-s24vr -c manager
STEP: granting permissions to access the metrics and read the token
running: kubectl create clusterrolebinding metrics-wtao --clusterrole=e2e-wtao-metrics-reader --serviceaccount=e2e-wtao-system:default
STEP: getting the token
running: kubectl -n e2e-wtao-system get secrets -o=jsonpath={.items[0].data.token}
STEP: creating a pod with curl image
running: kubectl -n e2e-wtao-system run --generator=run-pod/v1 curl --image=curlimages/curl:7.68.0 --restart=OnFailure -- curl -v -k -H Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ii03eFg1R1ZkakVvQ3J0SkRXVnNMbXJQMjhhcnY2aVZZbW1hVC1MZm1PWnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtd3Rhby1zeXN0ZW0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1maDJxcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZDc2ZjhmYmMtM2YwOS00YTQ2LTlkZDYtNDhhMGE3MjhmY2M5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS13dGFvLXN5c3RlbTpkZWZhdWx0In0.AchYIApRHOjGnnPWxyY4jIzDTJrGCaLXZZOtcqChStBU6m7YQcjzd4JDqEcviY10iqOUSXhYo7L21jb3P1adyZbyioCA2sxFigDU3VC3GlFFV6C_-SuESbZgQL0mCP6uHjUvxkFfdJ5MEWcEDNrDsmvTwDUYSh1RT0002B4ga3rvid6QfE_zJqGcqWv4QrWl5zaFwfNHprJM_IVFOzJ4TU0LkvrK2ssywzP_LrKtDxqOUavQcvThbCwdMsmMB5UXm1ojpZYGBjWakE0GntSIhzMbwirMtHfvg40WywjLG05LfCYAjG4NGBIXCIByzm8xpg9Y7X7rXnNbeSl5GKTQgg https://e2e-wtao-controller-manager-metrics-service.e2e-wtao-system.svc:8443/metrics
STEP: validating the curl pod running as expected
running: kubectl -n e2e-wtao-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-wtao-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-wtao-system get pods curl -o jsonpath={.status.phase}
STEP: checking metrics endpoint serving as expected
running: kubectl -n e2e-wtao-system logs curl
STEP: cleaning up the operator and resources
running: kustomize build config/default
running: kubectl delete -f -
STEP: deleting Curl Pod created
running: kubectl -n e2e-wtao-system delete pod curl
STEP: cleaning up permissions
running: kubectl delete clusterrolebinding metrics-wtao
STEP: undeploy project
running: make undeploy
STEP: ensuring that the namespace was deleted
running: kubectl get namespace e2e-wtao-system

• [SLOW TEST:13.894 seconds]
operator-sdk
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_cluster_test.go:31
  built with operator-sdk
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_cluster_test.go:34
    should run correctly in a cluster
    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_cluster_test.go:73
------------------------------
Integrating Go Projects with OLM with operator-sdk 
  should generate and run a valid OLM bundle and packagemanifests
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_olm_test.go:28
STEP: turning off interactive prompts for all generation tasks.
STEP: building the bundle
running: make bundle IMG=quay.io/example/e2e-wtao:v0.0.1
STEP: building the operator bundle image
running: make bundle-build BUNDLE_IMG=quay.io/example/e2e-wtao-bundle:v0.0.1
STEP: loading the bundle image into Kind cluster
running: kind load docker-image --name operator-sdk-e2e quay.io/example/e2e-wtao-bundle:v0.0.1
STEP: adding the 'packagemanifests' rule to the Makefile
STEP: generating the operator package manifests
running: make packagemanifests IMG=quay.io/example/e2e-wtao:v0.0.1
STEP: running the package manifests-formatted operator
running: operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m
STEP: destroying the deployed package manifests-formatted operator
running: operator-sdk cleanup e2e-wtao --timeout 4m

• [SLOW TEST:28.763 seconds]
Integrating Go Projects with OLM
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_olm_test.go:24
  with operator-sdk
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_olm_test.go:25
    should generate and run a valid OLM bundle and packagemanifests
    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_olm_test.go:28
------------------------------
STEP: uninstalling prerequisites
STEP: uninstalling Prometheus
running: kubectl delete -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml
error when running kubectl delete during cleaning up prometheus bundle: kubectl delete -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml failed with error: Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": clusterrolebindings.rbac.authorization.k8s.io "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": clusterroles.rbac.authorization.k8s.io "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": deployments.apps "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": serviceaccounts "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": services "prometheus-operator" not found

STEP: uninstalling OLM
running: operator-sdk olm uninstall
STEP: destroying container image and work dir
running: docker rmi -f quay.io/example/e2e-wtao:v0.0.1

Ran 4 of 4 Specs in 270.708 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestE2EGo (270.71s)
PASS
ok      github.com/operator-framework/operator-sdk/test/e2e-go  273.877s
go build -gcflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -asmflags "all=-trimpath=/Users/jberkhahn/workspace/go/src/github.com/operator-framework" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.2.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.2.0-4-gf20ea9e9' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=f20ea9e9496be3dab34a3652ec7cc7016ef1ccf2' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.18.8' -X 'github.com/operator-framework/operator-sdk/internal/version.ImageVersion=v1.2.0' "  -o build/_image/ansible-operator ./cmd/ansible-operator
mkdir -p ./images/ansible-operator/bin && mv build/_image/ansible-operator ./images/ansible-operator/bin
docker build -t quay.io/operator-framework/ansible-operator:dev -f ./images/ansible-operator/Dockerfile ./images/ansible-operator
Sending build context to Docker daemon  47.62MB
Step 1/11 : FROM registry.access.redhat.com/ubi8/ubi:latest
 ---> a1f8c9699786
Step 2/11 : RUN mkdir -p /etc/ansible   && echo "localhost ansible_connection=local" > /etc/ansible/hosts   && echo '[defaults]' > /etc/ansible/ansible.cfg   && echo 'roles_path = /opt/ansible/roles' >> /etc/ansible/ansible.cfg   && echo 'library = /usr/share/ansible/openshift' >> /etc/ansible/ansible.cfg
 ---> Using cache
 ---> 78838084efba
Step 3/11 : ENV HOME=/opt/ansible     USER_NAME=ansible     USER_UID=1001
 ---> Using cache
 ---> 280b65cc3a32
Step 4/11 : RUN yum clean all && rm -rf /var/cache/yum/*   && yum -y update   && yum install -y libffi-devel openssl-devel python36-devel gcc python3-pip python3-setuptools   && pip3 install --no-cache-dir     ipaddress     ansible-runner==1.3.4     ansible-runner-http==1.0.0     openshift~=0.10.0     ansible~=2.9     jmespath   && yum remove -y gcc libffi-devel openssl-devel python36-devel   && yum clean all   && rm -rf /var/cache/yum
 ---> Using cache
 ---> c79c08ecc717
Step 5/11 : RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd   && mkdir -p ${HOME}/.ansible/tmp   && chown -R ${USER_UID}:0 ${HOME}   && chmod -R ug+rwx ${HOME}
 ---> Using cache
 ---> 6dbbf9b4fcb1
Step 6/11 : RUN TINIARCH=$(case $(arch) in x86_64) echo -n amd64 ;; ppc64le) echo -n ppc64el ;; aarch64) echo -n arm64 ;; *) echo -n $(arch) ;; esac)   && curl -L -o /tini https://github.com/krallin/tini/releases/latest/download/tini-$TINIARCH   && chmod +x /tini
 ---> Using cache
 ---> 3ff47f83ab9e
Step 7/11 : WORKDIR ${HOME}
 ---> Using cache
 ---> dda0972ac929
Step 8/11 : USER ${USER_UID}
 ---> Using cache
 ---> 30633512868f
Step 9/11 : ARG BIN=bin/ansible-operator
 ---> Using cache
 ---> ebf398d464a6
Step 10/11 : COPY $BIN /usr/local/bin/ansible-operator
 ---> Using cache
 ---> 2ca1f657cbcd
Step 11/11 : ENTRYPOINT ["/tini", "--", "/usr/local/bin/ansible-operator", "run", "--watches-file=./watches.yaml"]
 ---> Using cache
 ---> d4d4d72e7beb
Successfully built d4d4d72e7beb
Successfully tagged quay.io/operator-framework/ansible-operator:dev
rm -rf build/_image
go test -count=1 ./internal/ansible/proxy/...
ok      github.com/operator-framework/operator-sdk/internal/ansible/proxy   1.999s
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/controllermap [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/kubeconfig    [no test files]
?       github.com/operator-framework/operator-sdk/internal/ansible/proxy/requestfactory    [no test files]
go test ./test/e2e-ansible -v -ginkgo.v
=== RUN   TestE2EAnsible
Running Suite: E2EAnsible Suite
===============================
Random Seed: 1605064611
Will run 4 of 4 specs

STEP: creating a new test context
STEP: copying sample to a temporary e2e directory
STEP: fetching the current-context
running: kubectl config current-context
STEP: preparing the prerequisites on cluster
STEP: checking API resources applied on Cluster
running: kubectl api-resources
STEP: installing OLM
running: operator-sdk olm install --version 0.15.1 --timeout 4m
STEP: using dev image for scorecard-test
STEP: replacing project Dockerfile to use ansible base image with the dev tag
STEP: adding Memcached mock task to the role
STEP: creating an API definition to add a task to delete the config map
running: operator-sdk create api --group cache --version v1alpha1 --kind Memfin --generate-role
STEP: adding task to delete config map
STEP: adding to watches finalizer and blacklist
STEP: create API to test watching multiple GVKs
running: operator-sdk create api --group cache --version v1alpha1 --kind Foo --generate-role
STEP: adding RBAC permissions for the Memcached Kind
STEP: building the project image
running: make docker-build IMG=quay.io/example/e2e-xeuc:v0.0.1
STEP: loading the required images into Kind cluster
running: kind load docker-image quay.io/example/e2e-xeuc:v0.0.1 --name operator-sdk-e2e
running: kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/scorecard-test:dev
STEP: building the bundle
running: make bundle IMG=quay.io/example/e2e-xeuc:v0.0.1
Running ansible projects built with operator-sdk 
  should run correctly locally
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_local_test.go:39
STEP: installing CRD's
running: make install
STEP: running the project
STEP: killing the project
STEP: uninstalling CRD's
running: make uninstall
•
------------------------------
Integrating ansible Projects with OLM with operator-sdk 
  should generate and run a valid OLM bundle and packagemanifests
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:28
STEP: building the bundle
running: make bundle IMG=quay.io/example/e2e-xeuc:v0.0.1
STEP: building the operator bundle image
running: make bundle-build BUNDLE_IMG=quay.io/example/e2e-xeuc-bundle:v0.0.1
STEP: loading the bundle image into Kind cluster
running: kind load docker-image --name operator-sdk-e2e quay.io/example/e2e-xeuc-bundle:v0.0.1
STEP: adding the 'packagemanifests' rule to the Makefile
STEP: generating the operator package manifests
running: make packagemanifests IMG=quay.io/example/e2e-xeuc:v0.0.1
STEP: running the package
running: operator-sdk run packagemanifests --install-mode AllNamespaces --version 0.0.1 --timeout 4m
STEP: destroying the deployed package manifests-formatted operator
running: operator-sdk cleanup memcached-operator --timeout 4m

• [SLOW TEST:18.160 seconds]
Integrating ansible Projects with OLM
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:24
  with operator-sdk
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:25
    should generate and run a valid OLM bundle and packagemanifests
    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_olm_test.go:28
------------------------------
Running ansible projects built with operator-sdk 
  should run correctly in a cluster
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:82
STEP: checking samples
STEP: deploying project on the cluster
running: make deploy IMG=quay.io/example/e2e-xeuc:v0.0.1
STEP: checking if the Operator project Pod is running
STEP: getting the controller-manager pod name
running: kubectl -n memcached-operator-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
STEP: ensuring the created controller-manager Pod
STEP: checking the controller-manager Pod is running
running: kubectl -n memcached-operator-system get pods memcached-operator-controller-manager-77f6b46687-7htnq -o jsonpath={.status.phase}
STEP: getting the controller-manager pod name
running: kubectl -n memcached-operator-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
STEP: ensuring the created controller-manager Pod
STEP: checking the controller-manager Pod is running
running: kubectl -n memcached-operator-system get pods memcached-operator-controller-manager-77f6b46687-7htnq -o jsonpath={.status.phase}
STEP: getting the controller-manager pod name
running: kubectl -n memcached-operator-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
STEP: ensuring the created controller-manager Pod
STEP: checking the controller-manager Pod is running
running: kubectl -n memcached-operator-system get pods memcached-operator-controller-manager-77f6b46687-7htnq -o jsonpath={.status.phase}
STEP: ensuring the created ServiceMonitor for the manager
running: kubectl -n memcached-operator-system get ServiceMonitor memcached-operator-controller-manager-metrics-monitor
STEP: ensuring the created metrics Service for the manager
running: kubectl -n memcached-operator-system get Service memcached-operator-controller-manager-metrics-service
STEP: create custom resource (Memcached CR)
running: kubectl apply -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_memcached.yaml
STEP: create custom resource (Foo CR)
running: kubectl apply -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_foo.yaml
STEP: create custom resource (Memfin CR)
running: kubectl apply -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_memfin.yaml
STEP: ensuring the CR gets reconciled
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
STEP: ensuring no liveness probe fail events
STEP: getting the controller-manager events
running: kubectl -n memcached-operator-system get events --field-selector involvedObject.name=memcached-operator-controller-manager-77f6b46687-7htnq
STEP: getting memcached deploy by labels
running: kubectl get deployment -l app=memcached -o jsonpath={..metadata.name}
STEP: checking the Memcached CR deployment status
running: kubectl rollout status deployment memcached-sample-memcached
STEP: ensuring the created Service for the Memcached CR
running: kubectl get Service -l app=memcached
STEP: Verifying that a config map owned by the CR has been created
running: kubectl get configmap test-blacklist-watches
STEP: Ensuring that config map requests skip the cache.
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
STEP: scaling deployment replicas to 2
running: kubectl scale deployment memcached-sample-memcached --replicas 2
STEP: verifying the deployment automatically scales back down to 1
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
STEP: updating size to 2 in the CR manifest
STEP: applying CR manifest with size: 2
running: kubectl apply -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_memcached.yaml
STEP: ensuring the CR gets reconciled after patching it
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
STEP: checking Deployment replicas spec is equals 2
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
running: kubectl get deployment memcached-sample-memcached -o jsonpath={..spec.replicas}
STEP: granting permissions to access the metrics and read the token
running: kubectl create clusterrolebinding memcached-operator-metrics-reader --clusterrole=memcached-operator-metrics-reader --serviceaccount=memcached-operator-system:default
STEP: getting the token
running: kubectl -n memcached-operator-system get secrets -o=jsonpath={.items[0].data.token}
STEP: creating a pod with curl image
running: kubectl -n memcached-operator-system run --generator=run-pod/v1 curl --image=curlimages/curl:7.68.0 --restart=OnFailure -- curl -v -k -H Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ii03eFg1R1ZkakVvQ3J0SkRXVnNMbXJQMjhhcnY2aVZZbW1hVC1MZm1PWnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtZW1jYWNoZWQtb3BlcmF0b3Itc3lzdGVtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tamZ3d2ciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI0MmE0OWE2LTc1ZDgtNGFiNy1iNGI0LWRiOGRhNDAyYTU2MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptZW1jYWNoZWQtb3BlcmF0b3Itc3lzdGVtOmRlZmF1bHQifQ.o0uAa0OtHQQAQ_X0Q028I3JiidegmSxiSFgER2gk5I8mOtnxfl_ztR3GWCmmnnMmOeep2z-ikMvsTmJfYKL987k5FP5fH-PGpYHc8QwetVmHCb_tHPibIlIsfeimOsjqwLWeAmsgSJdM58hNiq5Tc1CNRZo8yOx449BFQ2pxgODFrltEKrbnfYXSMp-6kUPUBg1BoRLDSHD7ZNZN4uYOvoIvZFji3JwmmBv-uku09oADMOz5ZtZlucHFSj6X0X9-ITwOnNKo1KbK42Aai7ynuOMZI0unGgOPerE4B9I7g632ca1IWNfKuBzON8p8SqS-6bYJvvIvaCHe1GxyULNcjw https://memcached-operator-controller-manager-metrics-service.memcached-operator-system.svc:8443/metrics
STEP: validating the curl pod running as expected
running: kubectl -n memcached-operator-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n memcached-operator-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n memcached-operator-system get pods curl -o jsonpath={.status.phase}
STEP: checking metrics endpoint serving as expected
running: kubectl -n memcached-operator-system logs curl
STEP: getting the CR namespace token
running: kubectl get Memcached memcached-sample -o=jsonpath={..metadata.namespace}
STEP: ensuring the operator metrics contains a `resource_created_at` metric for the Memcached CR
running: kubectl -n memcached-operator-system logs curl
STEP: ensuring the operator metrics contains a `resource_created_at` metric for the Foo CR
running: kubectl -n memcached-operator-system logs curl
STEP: ensuring the operator metrics contains a `resource_created_at` metric for the Memfin CR
running: kubectl -n memcached-operator-system logs curl
STEP: creating a configmap that the finalizer should remove
running: kubectl create configmap deleteme
STEP: deleting Memcached CR manifest
running: kubectl delete -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_memcached.yaml
STEP: ensuring the CR gets reconciled successfully
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
running: kubectl -n memcached-operator-system logs memcached-operator-controller-manager-77f6b46687-7htnq -c manager
STEP: ensuring that Memchaced Deployment was removed
running: kubectl get deployment memcached-sample-memcached
[... the command above repeats ~110 more times while the test polls for the Deployment to be removed ...]
STEP: deleting Curl Pod created
running: kubectl delete pod curl
STEP: deleting CR instances created
running: kubectl delete -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_memcached.yaml
running: kubectl delete -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_foo.yaml
running: kubectl delete -f /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e-xeuc/config/samples/cache_v1alpha1_memfin.yaml
STEP: cleaning up permissions
running: kubectl delete clusterrolebinding memcached-operator-metrics-reader
STEP: undeploy project
running: make undeploy
STEP: ensuring that the namespace was deleted
running: kubectl get namespace memcached-operator-system

• Failure [165.374 seconds]
Running ansible projects
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:31
  built with operator-sdk
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:39
    should run correctly in a cluster [It]
    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:82

    Timed out after 120.001s.
    Expected failure, but got no error.

    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:387
------------------------------
Testing Ansible Projects with Scorecard with operator-sdk 
  should work successfully with scorecard
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:37
STEP: running basic scorecard tests
running: operator-sdk scorecard bundle --selector=suite=basic --output=json --wait-time=60s
STEP: running olm scorecard tests
running: operator-sdk scorecard bundle --selector=suite=olm --output=json --wait-time=60s
    - Name:  olm-spec-descriptors
      Expected:  fail
      Output:  fail
    - Name:  olm-bundle-validation
      Expected:  pass
      Output:  pass
    - Name:  olm-crds-have-validation
      Expected:  fail
      Output:  fail
    - Name:  olm-status-descriptors
      Expected:  fail
      Output:  fail
    - Name:  olm-crds-have-resources
      Expected:  fail
      Output:  fail

• [SLOW TEST:7.447 seconds]
Testing Ansible Projects with Scorecard
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:27
  with operator-sdk
  /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:28
    should work successfully with scorecard
    /Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_scorecard_test.go:37
------------------------------
STEP: uninstalling prerequisites
STEP: uninstalling Prometheus
running: kubectl delete -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml
error when running kubectl delete during cleaning up prometheus bundle: kubectl delete -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml failed with error: Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": clusterrolebindings.rbac.authorization.k8s.io "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": clusterroles.rbac.authorization.k8s.io "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": deployments.apps "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": serviceaccounts "prometheus-operator" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.33/bundle.yaml": services "prometheus-operator" not found

STEP: uninstalling OLM
running: operator-sdk olm uninstall
STEP: destroying container image and work dir
running: docker rmi -f quay.io/example/e2e-xeuc:v0.0.1

Summarizing 1 Failure:

[Fail] Running ansible projects built with operator-sdk [It] should run correctly in a cluster 
/Users/jberkhahn/workspace/go/src/github.com/operator-framework/operator-sdk/test/e2e-ansible/e2e_ansible_cluster_test.go:387

Ran 4 of 4 Specs in 353.762 seconds
FAIL! -- 3 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestE2EAnsible (353.76s)
FAIL
FAIL    github.com/operator-framework/operator-sdk/test/e2e-ansible 354.007s
FAIL
make: *** [test-e2e-ansible] Error 1

joelanford commented 4 years ago

@jberkhahn Can you try this: make test-e2e-teardown test-e2e-ansible?

That will cause the e2e kind instance to be deleted and re-created, ensuring a totally fresh cluster for the test.

Typically, I would expect make test-e2e-ansible to work multiple times without restarting the kind cluster, so if my above suggestion fixes the problem and the problem is easily reproducible, we probably need to figure out what's causing the test not to clean up after itself.
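For reference, a minimal by-hand equivalent of the teardown (cluster name and node image taken from the Makefile output earlier in this issue):

    # Delete the e2e kind cluster and re-create it, then re-run the ansible suite
    # against a guaranteed-fresh cluster.
    tools/bin/kind delete cluster --name operator-sdk-e2e
    tools/bin/kind create cluster --image="kindest/node:v1.18.8" --name operator-sdk-e2e
    make test-e2e-ansible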

camilamacedo86 commented 4 years ago

Hi @jberkhahn.

Please do the test with test-e2e-helm instead. The molecule tests that are now called by the makefile target test-e2e-ansible have prerequisites that are not set up locally. Let's keep them apart for now. Btw, I was never able to run the molecule tests locally. So first, let's ensure that we are able to run test/e2e locally, as was possible before.

PS.: I want to be able to run the tests via the IDE as I did before, to troubleshoot and debug them, which means only building the images and loading them into kind. I already have kind installed locally. I do not want the makefile targets to install kind, because that breaks my local env setup.
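For example (commands copied from the e2e runs above; the image tag is whatever the run generated), the only cluster prep needed on a machine that already has kind is:

    # Build the operator image and load it, plus the dev scorecard image,
    # into an existing kind cluster -- no cluster creation, no tool fetching.
    make docker-build IMG=quay.io/example/e2e-wtao:v0.0.1
    kind load docker-image quay.io/example/e2e-wtao:v0.0.1 --name operator-sdk-e2e
    kind load docker-image --name operator-sdk-e2e quay.io/operator-framework/scorecard-test:dev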

c/c @joelanford

jberkhahn commented 4 years ago

TODO: add a canary test that checks if molecule is installed/correct version and fails quickly before actually trying to do ansible stuff
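A rough sketch of what that canary could look like (the version check here is deliberately loose; any real minimum version would need to be pinned):

    # Fail fast if molecule is missing, before any ansible e2e work starts.
    if ! command -v molecule >/dev/null 2>&1; then
        echo "molecule is not installed; it is required for test-e2e-ansible" >&2
        exit 1
    fi
    molecule --version   # surface the installed version so mismatches are obvious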

jberkhahn commented 3 years ago

@joelanford make test-e2e-teardown test-e2e-ansible works fine. Upon running things individually, it's definitely the molecule tests that are causing the big problems.

The next thing I encountered is in the integration tests: in hack/tests/integration.sh we use the mktemp command with the -p flag, which the macOS version doesn't appear to support.
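One portable workaround (a sketch; TARGET_DIR stands in for whatever directory the script currently passes to -p) is to put the directory in the template itself, which both GNU and BSD/macOS mktemp accept:

    # GNU-only form used today:  mktemp -p "$TARGET_DIR" tmp.XXXXXX
    # Portable form: embed the directory in the template argument instead.
    tmpfile=$(mktemp "${TARGET_DIR}/tmp.XXXXXX")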

camilamacedo86 commented 3 years ago

By running make test-e2e-teardown test-e2e-go I hit a timeout while installing OLM. It does not work:

running: operator-sdk olm install --version 0.15.1 --timeout 4m
Failure [26.639 seconds]
[BeforeSuite] BeforeSuite 
/Users/camilamacedo/go/src/github.com/operator-framework/operator-sdk/test/e2e-go/e2e_go_suite_test.go:42

  Expected success, but got an error:
      <*errors.errorString | 0xc000216240>: {
          s: "operator-sdk olm install --version 0.15.1 --timeout 4m failed with error: time=\"2020-11-13T13:10:37-03:00\" level=info msg=\"Fetching CRDs for version \\\"0.15.1\\\"\"\ntime=\"2020-11-13T13:10:37-03:00\" level=info msg=\"Using locally stored resource manifests\"\nI1113 13:10:38.538900    6843 request.go:621] Throttling request took 1.039847166s, request: GET:https://127.0.0.1:56716/apis/autoscaling/v1?timeout=32s\ntime=\"2020-11-13T13:10:43-03:00\" level=info msg=\"Creating CRDs and resources\"\ntime=\"2020-11-13T13:10:43-03:00\" level=info msg=\"  Creating CustomResourceDefinition \\\"catalogsources.operators.coreos.com\\\"\"\ntime=\"2020-11-13T13:10:44-03:00\" level=info msg=\"  Creating CustomResourceDefinition \\\"clusterserviceversions.operators.coreos.com\\\"\"\ntime=\"2020-11-13T13:10:52-03:00\" level=info msg=\"  Creating CustomResourceDefinition \\\"installplans.operators.coreos.com\\\"\"\ntime=\"2020-11-13T13:10:53-03:00\" level=info msg=\"  Creating CustomResourceDefinition \\\"operatorgroups.operators.coreos.com\\\"\"\ntime=\"2020-11-13T13:10:53-03:00\" level=info msg=\"  Creating CustomResourceDefinition \\\"subscriptions.operators.coreos.com\\\"\"\ntime=\"2020-11-13T13:10:55-03:00\" level=info msg=\"  Creating Namespace \\\"olm\\\"\"\ntime=\"2020-11-13T13:10:55-03:00\" level=info msg=\"  Creating Namespace \\\"operators\\\"\"\ntime=\"2020-11-13T13:10:55-03:00\" level=info msg=\"  Creating ServiceAccount \\\"olm/olm-operator-serviceaccount\\\"\"\ntime=\"2020-11-13T13:10:55-03:00\" level=info msg=\"  Creating ClusterRole \\\"system:controller:operator-lifecycle-manager\\\"\"\ntime=\"2020-11-13T13:10:55-03:00\" level=info msg=\"  Creating ClusterRoleBinding \\\"olm-operator-binding-olm\\\"\"\ntime=\"2020-11-13T13:10:55-03:00\" level=info msg=\"  Creating Deployment \\\"olm/olm-operator\\\"\"\ntime=\"2020-11-13T13:10:56-03:00\" level=info msg=\"  Creating Deployment \\\"olm/catalog-operator\\\"\"\ntime=\"2020-11-13T13:10:56-03:00\" level=info msg=\"  Creating ClusterRole \\\"aggregate-olm-edit\\\"\"\ntime=\"2020-11-13T13:10:56-03:00\" level=info msg=\"  Creating ClusterRole \\\"aggregate-olm-view\\\"\"\ntime=\"2020-11-13T13:10:56-03:00\" level=info msg=\"  Creating OperatorGroup \\\"operators/global-operators\\\"\"\nI1113 13:10:58.045022    6843 request.go:621] Throttling request took 1.039887335s, request: GET:https://127.0.0.1:56716/apis/admissionregistration.k8s.io/v1?timeout=32s\ntime=\"2020-11-13T13:10:58-03:00\" level=fatal msg=\"Failed to install OLM version \\\"0.15.1\\\": failed to create CRDs and resources: no matches for kind \\\"OperatorGroup\\\" in version \\\"operators.coreos.com/v1\\\"\"\n",
      }
      operator-sdk olm install --version 0.15.1 --timeout 4m failed with error: time="2020-11-13T13:10:37-03:00" level=info msg="Fetching CRDs for version \"0.15.1\""
      time="2020-11-13T13:10:37-03:00" level=info msg="Using locally stored resource manifests"
      I1113 13:10:38.538900    6843 request.go:621] Throttling request took 1.039847166s, request: GET:https://127.0.0.1:56716/apis/autoscaling/v1?timeout=32s
      time="2020-11-13T13:10:43-03:00" level=info msg="Creating CRDs and resources"
      time="2020-11-13T13:10:43-03:00" level=info msg="  Creating CustomResourceDefinition \"catalogsources.operators.coreos.com\""
      time="2020-11-13T13:10:44-03:00" level=info msg="  Creating CustomResourceDefinition \"clusterserviceversions.operators.coreos.com\""
      time="2020-11-13T13:10:52-03:00" level=info msg="  Creating CustomResourceDefinition \"installplans.operators.coreos.com\""
      time="2020-11-13T13:10:53-03:00" level=info msg="  Creating CustomResourceDefinition \"operatorgroups.operators.coreos.com\""
      time="2020-11-13T13:10:53-03:00" level=info msg="  Creating CustomResourceDefinition \"subscriptions.operators.coreos.com\""
      time="2020-11-13T13:10:55-03:00" level=info msg="  Creating Namespace \"olm\""
      time="2020-11-13T13:10:55-03:00" level=info msg="  Creating Namespace \"operators\""
      time="2020-11-13T13:10:55-03:00" level=info msg="  Creating ServiceAccount \"olm/olm-operator-serviceaccount\""
      time="2020-11-13T13:10:55-03:00" level=info msg="  Creating ClusterRole \"system:controller:operator-lifecycle-manager\""
      time="2020-11-13T13:10:55-03:00" level=info msg="  Creating ClusterRoleBinding \"olm-operator-binding-olm\""
      time="2020-11-13T13:10:55-03:00" level=info msg="  Creating Deployment \"olm/olm-operator\""
      time="2020-11-13T13:10:56-03:00" level=info msg="  Creating Deployment \"olm/catalog-operator\""
      time="2020-11-13T13:10:56-03:00" level=info msg="  Creating ClusterRole \"aggregate-olm-edit\""
      time="2020-11-13T13:10:56-03:00" level=info msg="  Creating ClusterRole \"aggregate-olm-view\""
      time="2020-11-13T13:10:56-03:00" level=info msg="  Creating OperatorGroup \"operators/global-operators\""
      I1113 13:10:58.045022    6843 request.go:621] Throttling request took 1.039887335s, request: GET:https://127.0.0.1:56716/apis/admissionregistration.k8s.io/v1?timeout=32s
      time="2020-11-13T13:10:58-03:00" level=fatal msg="Failed to install OLM version \"0.15.1\": failed to create CRDs and resources: no matches for kind \"OperatorGroup\" in version \"operators.coreos.com/v1\""

I do not think this is related to the OS, and I think that after the changes we have more than one scenario to fix, which may require opening more than one issue.
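
For what it's worth, that fatal message reads like a discovery race rather than a timeout: the OperatorGroup is created immediately after its CRD, before the API server starts serving the new kind. One way to check that theory by hand — a sketch, not part of any Makefile target:

    # Wait until the freshly created CRD is actually being served...
    kubectl wait --for=condition=Established \
      crd/operatorgroups.operators.coreos.com --timeout=60s

    # ...then confirm discovery now resolves the kind the installer failed to find.
    kubectl api-resources --api-group=operators.coreos.com | grep -i operatorgroup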

camilamacedo86 commented 3 years ago

Running make test-e2e-teardown test-e2e-ansible still does not work in my local env. Also, it breaks my local setup. So, just to clarify my expectations and motivations:

So, IMO, it means that we need a target that builds the images and loads them into kind, with the SDK binary built as before containing the bindata.

c/c @estroz @joelanford @jberkhahn, @jmrodri @asmacdo, @varshaprasad96

estroz commented 3 years ago

it breaks my local setup

How did e2e tests break your local setup?

camilamacedo86 commented 3 years ago

How did e2e tests break your local setup?

The problems appear to be related to the kind control-plane 1.19.4 (Makefile line) that gets installed:

[screenshot: Screen Shot 2020-12-16 at 17 28 14]

It has been causing side effects for me locally, such as: a panic starts to occur when starting kind with kind create cluster, and make generate gets stuck cleaning up while running docker rmi -f quay.io/example/memcached-operator:v0.0.1:

[screenshot: Screen Shot 2020-12-16 at 17 46 43]

Usually, I am able to solve the problem by removing the 1.19.4 kind cluster via kind delete clusters operator-sdk-e2e. If not, the solution is to fully stop and remove everything and restart Docker, which is not nice. Also, note that the latest kind node version currently installed on Mac via the brew formula is 1.19.1.
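
If that node-version skew is indeed the culprit, recreating the cluster by hand with a pinned node image keeps everything on a version the brew-installed kind knows about — a sketch; pin whatever kindest/node tag matches your local kind:

    # Remove the broken cluster, then recreate it with a pinned node image
    # instead of whatever default the Makefile would pick.
    kind delete clusters operator-sdk-e2e
    kind create cluster --name operator-sdk-e2e --image kindest/node:v1.19.1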

Then, note that I already have kind and the envtest binaries set up locally. To run/debug the tests, I would only need a target that provides an SDK binary with the OLM bindata injected in a way that works (as is done for the release, and as was done before the Makefile customizations, when the previous make install target was executed), and that then builds the images and loads them into my kind cluster. A rough sketch of what I mean is below.
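
This is a sketch under assumptions: the script path is hypothetical, the image name is taken from the docker rmi log above, the docker build context is a guess, and it assumes make build still produces the binaries with the bindata as the release does:

    #!/usr/bin/env bash
    # hack/e2e-local-setup.sh (hypothetical): build the SDK binaries and the
    # test image, then load the image into an existing kind cluster, without
    # ever creating or deleting the cluster itself.
    set -euo pipefail

    CLUSTER_NAME="${CLUSTER_NAME:-operator-sdk-e2e}"          # assumed cluster name
    IMG="${IMG:-quay.io/example/memcached-operator:v0.0.1}"   # image name from the logs above

    # Build the SDK binaries as the release does, so the OLM bindata is included.
    make build

    # Build the test image and load it into the pre-existing kind cluster.
    docker build -t "${IMG}" .
    kind load docker-image "${IMG}" --name "${CLUSTER_NAME}"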

openshift-bot commented 3 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 3 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-bot commented 3 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci[bot] commented 3 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/operator-framework/operator-sdk/issues/4151#issuecomment-886207147):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`.
> Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
> Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.