kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Add support for running E2E tests on WSL #10103

Closed Jont828 closed 5 months ago

Jont828 commented 7 months ago

What steps did you take and what happened?

When using GetWorkloadCluster as follows with the Docker provider on WSL, we are unable to reach the API server.

kubesystem := "kube-system"
ns := &corev1.Namespace{}
clusterProxy := input.ClusterProxy.GetWorkloadCluster(ctx, input.Namespace, input.ClusterName)
g.Expect(clusterProxy.GetClient().Get(ctx, client.ObjectKey{Name: kubesystem}, ns)).To(Succeed(), "Failed to get kube-system namespace")
<*fmt.wrapError | 0xc001c74040>:
failed to get API group resources: unable to retrieve the complete list of server APIs: v1: Get "https://172.18.0.4:6443/api/v1": dial tcp 172.18.0.4:6443: i/o timeout
{
    msg: "failed to get API group resources: unable to retrieve the complete list of server APIs: v1: Get \"https://172.18.0.4:6443/api/v1\": dial tcp 172.18.0.4:6443: i/o timeout",
    err: <*apiutil.ErrResourceDiscoveryFailed | 0xc00008e110>{
        {Group: "", Version: "v1"}: <*url.Error | 0xc001ebc000>{
            Op: "Get",
            URL: "https://172.18.0.4:6443/api/v1",
            Err: <*net.OpError | 0xc001a1c050>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
                Addr: <*net.TCPAddr | 0xc0007d3dd0>{
                    IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 172, 18, 0, 4],
                    Port: 6443,
                    Zone: "",
                },
                Err: <*net.timeoutError | 0x5764c60>{},
            },
        },
    },
}

The API server address shown is the one we need to fix up in the kubeconfig. It seems like the check that fixes the kubeconfig only applies to darwin, but we might need it by default.
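For context, the fix-up in question is roughly of the following shape. This is a minimal sketch, assuming the framework loads the workload cluster kubeconfig and patches the server URL only on macOS; loadWorkloadKubeconfig is an illustrative name, and the isDockerCluster/fixConfig parameters stand in for the framework's own helpers, so the exact code may differ:

import (
	goruntime "runtime"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// loadWorkloadKubeconfig loads the workload cluster kubeconfig and, only on
// macOS, patches it so the server URL points at the load balancer port
// published on the host instead of the container IP (e.g.
// https://172.18.0.4:6443), which is not reachable from the host.
func loadWorkloadKubeconfig(kubeconfigBytes []byte, isDockerCluster bool, fixConfig func(*api.Config)) (*api.Config, error) {
	config, err := clientcmd.Load(kubeconfigBytes)
	if err != nil {
		return nil, err
	}
	if goruntime.GOOS == "darwin" && isDockerCluster {
		fixConfig(config)
	}
	return config, nil
}

On WSL the kubeconfig keeps the container IP, which is why the dial to 172.18.0.4:6443 times out.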

What did you expect to happen?

To be able to query the workload cluster using the client we got from the Cluster proxy.

Cluster API version

v1.6.0

Kubernetes version

No response

Anything else you would like to add?

No response

Label(s) to be applied

/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

fabriziopandini commented 7 months ago

AFAIK no one has ever worked on getting the E2E tests to work on WSL; what you are describing seems to me more a missing feature than a bug.

/remove kind/bug
/kind feature
/triage accepted
/help

k8s-ci-robot commented 7 months ago

@fabriziopandini: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/10103):

> AFAIK no one has ever worked on getting the E2E tests to work on WSL; what you are describing seems to me more a missing feature than a bug.
>
> /remove kind/bug
> /kind feature
> /triage accepted
> /help

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

Jont828 commented 7 months ago

Thanks for the clarification, I'd be willing to take a crack at it if it's possible. Do you think it's something we can try to implement, or is it just a platform limitation altogether?

fabriziopandini commented 7 months ago

Do you think it's something we can try to implement, or is it just a platform limitation altogether?

I don't know, I never took time to look properly into WSL 😅

sbueringer commented 7 months ago

I think that is to be figured out by whoever would like to work on this issue :)

Jont828 commented 7 months ago

Sounds good. I'm willing to look into it but can't guarantee I can figure it out.

nasusoba commented 5 months ago

Could we simply remove the goruntime.GOOS == "darwin" constraint and always run fixConfig when the infrastructure is Docker? I think the code in fixConfig works for the Linux Docker engine, Darwin, and WSL.
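In other words, the proposal amounts to dropping the GOOS condition so the kubeconfig is patched whenever the infrastructure provider is Docker, regardless of the host OS. A minimal sketch, reusing the illustrative names from the earlier example (not the exact framework code):

// Current behavior, as described in the issue: patch only on macOS.
if goruntime.GOOS == "darwin" && isDockerCluster {
	fixConfig(config)
}

// Proposed: always patch the kubeconfig for Docker-provider clusters,
// whether the host is darwin, plain Linux, or WSL.
if isDockerCluster {
	fixConfig(config)
}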

sbueringer commented 5 months ago

Was someone able to successfully run it with this change on WSL?

nasusoba commented 5 months ago

This works for WSL, but I do not have an environment with a plain Linux Docker engine; could someone kindly test it there? Here is a log I collected for reference (I selected the autoscaler test since it uses the workload cluster proxy):

$ make GINKGO_FOCUS="When using the autoscaler with Cluster API using ClusterClass" test-e2e
....

When using the autoscaler with Cluster API using ClusterClass [ClusterClass] Should create a workload cluster
/home/runyuzheng/cluster-api/test/e2e/autoscaler.go:99
  STEP: Creating a namespace for hosting the "autoscaler" test spec @ 04/09/24 14:47:48.987
  INFO: Creating namespace autoscaler-udvnsm
  INFO: Creating event watcher for namespace "autoscaler-udvnsm"
  STEP: Creating a workload cluster @ 04/09/24 14:47:48.999
  INFO: Creating the workload cluster with name "autoscaler-6iuv5f" using the "topology-autoscaler" template (Kubernetes v1.29.2, 1 control-plane machines, 0 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster autoscaler-6iuv5f --infrastructure docker --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 0 --flavor topology-autoscaler
  INFO: Creating the workload cluster with name "autoscaler-6iuv5f" from the provided yaml
  INFO: Applying the cluster template yaml of cluster autoscaler-udvnsm/autoscaler-6iuv5f
Running kubectl apply --kubeconfig /tmp/e2e-kind4227624936 -f -
stderr:

stdout:
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
dockermachinepooltemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinepooltemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-md-default-worker-bootstraptemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-mp-default-worker-bootstraptemplate created
configmap/cni-autoscaler-6iuv5f-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/autoscaler-6iuv5f-crs-0 created
cluster.cluster.x-k8s.io/autoscaler-6iuv5f created

  INFO: Waiting for the cluster infrastructure of cluster autoscaler-udvnsm/autoscaler-6iuv5f to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 04/09/24 14:47:51.132
  INFO: Waiting for control plane of cluster autoscaler-udvnsm/autoscaler-6iuv5f to be initialized
  INFO: Waiting for the first control plane machine managed by autoscaler-udvnsm/autoscaler-6iuv5f-jc6kc to be provisioned
  STEP: Waiting for one control plane node to exist @ 04/09/24 14:48:01.152
  INFO: Waiting for control plane of cluster autoscaler-udvnsm/autoscaler-6iuv5f to be ready
  INFO: Waiting for control plane autoscaler-udvnsm/autoscaler-6iuv5f-jc6kc to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 04/09/24 14:48:31.173
  STEP: Checking all the control plane machines are in the expected failure domains @ 04/09/24 14:48:51.187
  INFO: Waiting for the machine deployments of cluster autoscaler-udvnsm/autoscaler-6iuv5f to be provisioned
  STEP: Waiting for the workload nodes to exist @ 04/09/24 14:48:51.198
  STEP: Checking all the machines controlled by autoscaler-6iuv5f-md-0-nj999 are in the "fd4" failure domain @ 04/09/24 14:50:01.262
  INFO: Waiting for the machine pools of cluster autoscaler-udvnsm/autoscaler-6iuv5f to be provisioned
  STEP: Installing the autoscaler on the workload cluster @ 04/09/24 14:50:01.306
  STEP: Creating the autoscaler deployment in the workload cluster @ 04/09/24 14:50:01.306
Running kubectl apply --kubeconfig /tmp/e2e-kubeconfig468585768 -f -
stderr:

stdout:
namespace/cluster-autoscaler-system created
secret/kubeconfig-management-cluster created
serviceaccount/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-workload created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-workload created
deployment.apps/cluster-autoscaler created

  STEP: Wait for the autoscaler deployment and collect logs @ 04/09/24 14:50:02.172
  STEP: Waiting for deployment cluster-autoscaler-system/cluster-autoscaler to be available @ 04/09/24 14:50:02.184
  STEP: Creating workload that forces the system to scale up @ 04/09/24 14:50:22.309
  INFO: Creating log watcher for controller cluster-autoscaler-system/cluster-autoscaler, pod cluster-autoscaler-7658975c8d-2wqrt, container cluster-autoscaler
  STEP: Create a scale up deployment with resource requests to force scale up @ 04/09/24 14:50:22.309
  STEP: Create scale up deployment @ 04/09/24 14:50:22.316
  STEP: Wait for the scale up deployment to become ready (this implies machines to be created) @ 04/09/24 14:50:22.324
  STEP: Waiting for deployment default/scale-up to be available @ 04/09/24 14:50:22.325
  STEP: Checking the MachineDeployment is scaled up @ 04/09/24 14:51:02.345
  STEP: Disabling the autoscaler @ 04/09/24 14:51:02.35
  INFO: Dropping the cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size and cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size annotations from the MachineDeployments in ClusterTopology
  INFO: Wait for the annotations to be dropped from the MachineDeployments
  STEP: Checking we can manually scale up the MachineDeployment @ 04/09/24 14:51:12.385
  INFO: Scaling machine deployment topology md-0 to 4 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: Checking enabling autoscaler will scale down the MachineDeployment to correct size @ 04/09/24 14:51:22.445
  INFO: Add the cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size and cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size annotations to the MachineDeployments in ClusterTopology
  INFO: Wait for the annotations to applied on the MachineDeployments
  STEP: Checking the MachineDeployment is scaled down @ 04/09/24 14:51:32.469
  STEP: PASSED! @ 04/09/24 14:53:02.507
  STEP: Dumping logs from the "autoscaler-6iuv5f" workload cluster @ 04/09/24 14:53:02.507
  STEP: Dumping all the Cluster API resources in the "autoscaler-udvnsm" namespace @ 04/09/24 14:53:03.464
  STEP: Dumping Pods and Nodes of Cluster autoscaler-udvnsm/autoscaler-6iuv5f @ 04/09/24 14:53:03.693
  STEP: Deleting cluster autoscaler-udvnsm/autoscaler-6iuv5f @ 04/09/24 14:53:03.774
  STEP: Deleting cluster autoscaler-udvnsm/autoscaler-6iuv5f @ 04/09/24 14:53:03.779
  INFO: Waiting for the Cluster autoscaler-udvnsm/autoscaler-6iuv5f to be deleted
  STEP: Waiting for cluster autoscaler-udvnsm/autoscaler-6iuv5f to be deleted @ 04/09/24 14:53:03.79
  INFO: Error starting logs stream for pod cluster-autoscaler-system/cluster-autoscaler-7658975c8d-2wqrt, container cluster-autoscaler: Get "https://172.18.0.12:10250/containerLogs/cluster-autoscaler-system/cluster-autoscaler-7658975c8d-2wqrt/cluster-autoscaler?follow=true": read tcp 172.18.0.11:55522->172.18.0.12:10250: read: connection reset by peer
  STEP: Deleting namespace used for hosting the "autoscaler" test spec @ 04/09/24 14:53:13.804
  INFO: Deleting namespace autoscaler-udvnsm
• [324.824 seconds]
------------------------------
SS
------------------------------
[SynchronizedAfterSuite] 
/home/runyuzheng/cluster-api/test/e2e/e2e_suite_test.go:176
  STEP: Dumping logs from the bootstrap cluster @ 04/09/24 14:53:13.812
  STEP: Tearing down the management cluster @ 04/09/24 14:53:14.06
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-7c8c77b6b7-bgbxb, container manager: context canceled
  INFO: Stopped streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-7c8c77b6b7-bgbxb, container manager: context canceled
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-75987c4554-k4v2b, container manager: context canceled
  INFO: Got error while streaming logs for pod capd-system/capd-controller-manager-6c85845db6-2fgnn, container manager: context canceled
  INFO: Stopped streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-75987c4554-k4v2b, container manager: context canceled
  INFO: Stopped streaming logs for pod capd-system/capd-controller-manager-6c85845db6-2fgnn, container manager: context canceled
  INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-55c7446fcb-vsmfv, container manager: context canceled
  INFO: Stopped streaming logs for pod capi-system/capi-controller-manager-55c7446fcb-vsmfv, container manager: context canceled
  INFO: Got error while streaming logs for pod test-extension-system/test-extension-controller-manager-7f48b64796-n5cbq, container manager: context canceled
  INFO: Stopped streaming logs for pod test-extension-system/test-extension-controller-manager-7f48b64796-n5cbq, container manager: context canceled
  INFO: Got error while streaming logs for pod capim-system/capim-controller-manager-6c5889fd87-k6262, container manager: context canceled
  INFO: Stopped streaming logs for pod capim-system/capim-controller-manager-6c5889fd87-k6262, container manager: context canceled
[SynchronizedAfterSuite] PASSED [1.638 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.002 seconds]
------------------------------

Ran 1 of 35 Specs in 450.689 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 34 Skipped
PASS

Ginkgo ran 1 suite in 7m38.6302753s
Test Suite Passed
chrischdi commented 5 months ago

I was able to successfully test it.

$ uname -a
Linux sc2-10-186-132-177 6.5.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
$ git rev-parse HEAD
0bac38eddeba3e8e1ea8b9d4c4fc728c61671f23
$ make GINKGO_FOCUS="When using the autoscaler with Cluster API using ClusterClass" test-e2e
...
When using the autoscaler with Cluster API using ClusterClass [ClusterClass] Should create a workload cluster
/home/vmware/go/src/sigs.k8s.io/cluster-api/test/e2e/autoscaler.go:99
  STEP: Creating a namespace for hosting the "autoscaler" test spec @ 04/09/24 17:17:29.567
  INFO: Creating namespace autoscaler-yfsct2
  INFO: Creating event watcher for namespace "autoscaler-yfsct2"
  STEP: Creating a workload cluster @ 04/09/24 17:17:29.576
  INFO: Creating the workload cluster with name "autoscaler-xsr3zt" using the "topology-autoscaler" template (Kubernetes v1.29.2, 1 control-plane machines, 0 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster autoscaler-xsr3zt --infrastructure docker --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 0 --flavor topology-autoscaler
  INFO: Creating the workload cluster with name "autoscaler-xsr3zt" from the provided yaml
  INFO: Applying the cluster template yaml of cluster autoscaler-yfsct2/autoscaler-xsr3zt
Running kubectl apply --kubeconfig /tmp/e2e-kind2858371662 -f -
stderr:

stdout:
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
dockermachinepooltemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinepooltemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-md-default-worker-bootstraptemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-mp-default-worker-bootstraptemplate created
configmap/cni-autoscaler-xsr3zt-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/autoscaler-xsr3zt-crs-0 created
cluster.cluster.x-k8s.io/autoscaler-xsr3zt created

  INFO: Waiting for the cluster infrastructure of cluster autoscaler-yfsct2/autoscaler-xsr3zt to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 04/09/24 17:17:31.04
  INFO: Waiting for control plane of cluster autoscaler-yfsct2/autoscaler-xsr3zt to be initialized
  INFO: Waiting for the first control plane machine managed by autoscaler-yfsct2/autoscaler-xsr3zt-hdf2t to be provisioned
  STEP: Waiting for one control plane node to exist @ 04/09/24 17:17:41.071
  INFO: Waiting for control plane of cluster autoscaler-yfsct2/autoscaler-xsr3zt to be ready
  INFO: Waiting for control plane autoscaler-yfsct2/autoscaler-xsr3zt-hdf2t to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 04/09/24 17:18:11.093
  STEP: Checking all the control plane machines are in the expected failure domains @ 04/09/24 17:18:21.108
  INFO: Waiting for the machine deployments of cluster autoscaler-yfsct2/autoscaler-xsr3zt to be provisioned
  STEP: Waiting for the workload nodes to exist @ 04/09/24 17:18:21.126
  STEP: Checking all the machines controlled by autoscaler-xsr3zt-md-0-djdfn are in the "fd4" failure domain @ 04/09/24 17:19:31.231
  INFO: Waiting for the machine pools of cluster autoscaler-yfsct2/autoscaler-xsr3zt to be provisioned
  STEP: Installing the autoscaler on the workload cluster @ 04/09/24 17:19:31.268
  STEP: Creating the autoscaler deployment in the workload cluster @ 04/09/24 17:19:31.268
Running kubectl apply --kubeconfig /tmp/e2e-kubeconfig2484823455 -f -
stderr:

stdout:
namespace/cluster-autoscaler-system created
secret/kubeconfig-management-cluster created
serviceaccount/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler-workload created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler-workload created
deployment.apps/cluster-autoscaler created

  STEP: Wait for the autoscaler deployment and collect logs @ 04/09/24 17:19:31.982
  STEP: Waiting for deployment cluster-autoscaler-system/cluster-autoscaler to be available @ 04/09/24 17:19:31.999
  STEP: Creating workload that forces the system to scale up @ 04/09/24 17:19:42.126
  INFO: Creating log watcher for controller cluster-autoscaler-system/cluster-autoscaler, pod cluster-autoscaler-c8785c699-5ggkc, container cluster-autoscaler
  STEP: Create a scale up deployment with resource requests to force scale up @ 04/09/24 17:19:42.126
  STEP: Create scale up deployment @ 04/09/24 17:19:42.134
  STEP: Wait for the scale up deployment to become ready (this implies machines to be created) @ 04/09/24 17:19:42.278
  STEP: Waiting for deployment default/scale-up to be available @ 04/09/24 17:19:42.279
  STEP: Checking the MachineDeployment is scaled up @ 04/09/24 17:20:12.3
  STEP: Disabling the autoscaler @ 04/09/24 17:20:12.308
  INFO: Dropping the cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size and cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size annotations from the MachineDeployments in ClusterTopology
  INFO: Wait for the annotations to be dropped from the MachineDeployments
  STEP: Checking we can manually scale up the MachineDeployment @ 04/09/24 17:20:22.367
  INFO: Scaling machine deployment topology md-0 to 4 replicas
  INFO: Waiting for correct number of replicas to exist
  STEP: Checking enabling autoscaler will scale down the MachineDeployment to correct size @ 04/09/24 17:20:32.446
  INFO: Add the cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size and cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size annotations to the MachineDeployments in ClusterTopology
  INFO: Wait for the annotations to applied on the MachineDeployments
  STEP: Checking the MachineDeployment is scaled down @ 04/09/24 17:20:42.486
  STEP: PASSED! @ 04/09/24 17:22:12.555
  STEP: Dumping logs from the "autoscaler-xsr3zt" workload cluster @ 04/09/24 17:22:12.555
  STEP: Dumping all the Cluster API resources in the "autoscaler-yfsct2" namespace @ 04/09/24 17:22:13.314
  STEP: Dumping Pods and Nodes of Cluster autoscaler-yfsct2/autoscaler-xsr3zt @ 04/09/24 17:22:13.887
  STEP: Deleting cluster autoscaler-yfsct2/autoscaler-xsr3zt @ 04/09/24 17:22:14.027
  STEP: Deleting cluster autoscaler-yfsct2/autoscaler-xsr3zt @ 04/09/24 17:22:14.036
  INFO: Waiting for the Cluster autoscaler-yfsct2/autoscaler-xsr3zt to be deleted
  STEP: Waiting for cluster autoscaler-yfsct2/autoscaler-xsr3zt to be deleted @ 04/09/24 17:22:14.056
  INFO: Error starting logs stream for pod cluster-autoscaler-system/cluster-autoscaler-c8785c699-5ggkc, container cluster-autoscaler: Get "https://172.18.0.6:10250/containerLogs/cluster-autoscaler-system/cluster-autoscaler-c8785c699-5ggkc/cluster-autoscaler?follow=true": read tcp 172.18.0.5:36810->172.18.0.6:10250: read: connection reset by peer
  STEP: Deleting namespace used for hosting the "autoscaler" test spec @ 04/09/24 17:22:24.067
  INFO: Deleting namespace autoscaler-yfsct2
• [294.509 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/home/vmware/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:176
  STEP: Dumping logs from the bootstrap cluster @ 04/09/24 17:22:24.076
  STEP: Tearing down the management cluster @ 04/09/24 17:22:24.257
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-7c8c77b6b7-ssbrt, container manager: context canceled
  INFO: Got error while streaming logs for pod capim-system/capim-controller-manager-6c5889fd87-5stnt, container manager: context canceled
  INFO: Stopped streaming logs for pod capim-system/capim-controller-manager-6c5889fd87-5stnt, container manager: context canceled
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-75987c4554-9vsn9, container manager: context canceled
  INFO: Stopped streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-75987c4554-9vsn9, container manager: context canceled
  INFO: Stopped streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-7c8c77b6b7-ssbrt, container manager: context canceled
  INFO: Got error while streaming logs for pod capd-system/capd-controller-manager-6c85845db6-4hzrz, container manager: context canceled
  INFO: Stopped streaming logs for pod capd-system/capd-controller-manager-6c85845db6-4hzrz, container manager: context canceled
  INFO: Got error while streaming logs for pod test-extension-system/test-extension-controller-manager-7f48b64796-qzq8g, container manager: context canceled
  INFO: Stopped streaming logs for pod test-extension-system/test-extension-controller-manager-7f48b64796-qzq8g, container manager: context canceled
  INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-55c7446fcb-zwfc5, container manager: context canceled
  INFO: Stopped streaming logs for pod capi-system/capi-controller-manager-55c7446fcb-zwfc5, container manager: context canceled
[SynchronizedAfterSuite] PASSED [1.979 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.003 seconds]
------------------------------

Ran 1 of 35 Specs in 396.267 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 34 Skipped
PASS

Ginkgo ran 1 suite in 6m41.273196618s
Test Suite Passed
fabriziopandini commented 5 months ago

/priority important-longterm