kubernetes-sigs / cluster-api-provider-ibmcloud

Cluster API Provider for IBM Cloud
https://cluster-api-ibmcloud.sigs.k8s.io
Apache License 2.0

Use cloud-provider template for PowerVS e2e by default #1861

Closed · Amulyam24 closed 5 months ago

Amulyam24 commented 5 months ago

What this PR does / why we need it: Creation of a 3 control-plane (CP) + 1 worker (W) cluster has been failing. 1 CP and 1 W machine are created successfully, but creation of the remaining 2 CPs is not triggered because a condition check fails on the first CP:

Machine amulya-test-control-plane-t94ht reports EtcdMemberHealthy condition is unknown (Failed to connect to the etcd pod on the amulya-test-control-plane-t94ht node: could not establish a connection to any etcd node: unable to create etcd client: context deadline exceeded)
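The condition can be inspected directly on the Machine object. A minimal sketch (the machine name is the one reported above):

```shell
# Print the message of the EtcdMemberHealthy condition set by the
# KubeadmControlPlane controller on the first control-plane machine
kubectl get machine amulya-test-control-plane-t94ht \
  -o jsonpath='{.status.conditions[?(@.type=="EtcdMemberHealthy")].message}'
```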

On further debugging, it was discovered that the nodes were not assigned IP addresses; this is due to the change introduced in https://github.com/kubernetes/kubernetes/pull/121028 in Kubernetes v1.29:

```
# kubectl get nodes -o wide
NAME                              STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION               CONTAINER-RUNTIME
amulya-test-control-plane-qnfmp   NotReady   control-plane   22h   v1.29.3   <none>        <none>        CentOS Stream 8   4.18.0-552.1.1.el8.ppc64le   containerd://1.7.13
amulya-test-md-0-2j477-dvzh8      NotReady   <none>          22h   v1.29.3   <none>        <none>        CentOS Stream 8   4.18.0-552.1.1.el8.ppc64le   containerd://1.7.13
```
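With an external cloud provider, node address assignment is delegated to the cloud controller manager, and the kubelet registers each node with the `node.cloudprovider.kubernetes.io/uninitialized` taint until a CCM initializes it. A quick way to confirm the state of an affected node (node name taken from the output above; a sketch, not part of this PR):

```shell
# Node addresses stay empty until a cloud controller manager populates them
kubectl get node amulya-test-control-plane-qnfmp \
  -o jsonpath='{.status.addresses}'

# Taint set by the kubelet when it runs with --cloud-provider=external
kubectl get node amulya-test-control-plane-qnfmp \
  -o jsonpath='{.spec.taints[?(@.key=="node.cloudprovider.kubernetes.io/uninitialized")]}'
```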

As a result, switch the PowerVS CI to use the cloud-provider template by default, so that the cloud provider is responsible for setting the node IPs.
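The cloud-provider flavor ships the PowerVS cloud controller manager to the workload cluster through the `crs-cloud-conf` ClusterResourceSet, together with its `ibmpowervs-cfg` ConfigMap and `ibmpowervs-credential` Secret (all visible in the apply output further down). A hedged sanity check against the test template this PR switches to (the grep pattern is illustrative):

```shell
# Expect the kubelets to be wired to the external cloud provider
grep -n -B2 -A3 "cloud-provider" \
  test/e2e/data/templates/cluster-template-powervs-md-remediation.yaml
```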

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

Special notes for your reviewer:

/area provider/ibmcloud

  1. Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.

Release note:

Use cloud-provider template for PowerVS e2e by default
netlify[bot] commented 5 months ago

Deploy Preview for kubernetes-sigs-cluster-api-ibmcloud ready!

| Name | Link |
|------|------|
| Latest commit | 681a1b89cb4acbaadb4d1e04b5f6063f001461e7 |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-cluster-api-ibmcloud/deploys/66826a2ecfcd76000832bd5f |
| Deploy Preview | https://deploy-preview-1861--kubernetes-sigs-cluster-api-ibmcloud.netlify.app |

Amulyam24 commented 5 months ago

/hold

Verified the template by testing locally:

```shell
# Generate a cluster from the cloud-provider test template and apply it
# (echo -n keeps a trailing newline out of the base64-encoded API key)
IBMPOWERVS_SSHKEY_NAME="amulya-ssh-pubkey" \
IBMPOWERVS_VIP="192.168.169.206" \
IBMPOWERVS_VIP_EXTERNAL="163.68.77.206" \
IBMPOWERVS_VIP_CIDR="29" \
IBMPOWERVS_IMAGE_NAME="capibm-powervs-centos-streams8-1-29-3" \
IBMPOWERVS_SERVICE_INSTANCE_ID="10b1000b-da8d-4e18-ad1f-6b2a56a8c130" \
IBMPOWERVS_NETWORK_NAME="karthik-capi-test" \
IBMACCOUNT_ID="c265c8cefda241ca9c107adcbbacaa84" \
IBMPOWERVS_REGION="osa" \
IBMPOWERVS_ZONE="osa21" \
BASE64_API_KEY=$(echo -n $IBMCLOUD_API_KEY | base64) \
clusterctl generate cluster capi-new --kubernetes-version v1.29.3 \
--target-namespace default \
--control-plane-machine-count=3 \
--worker-machine-count=1 \
--from=./test/e2e/data/templates/cluster-template-powervs-md-remediation.yaml | kubectl apply -f -
```

```
configmap/cloud-controller-manager-addon unchanged
configmap/ibmpowervs-cfg configured
secret/ibmpowervs-credential configured
clusterresourceset.addons.cluster.x-k8s.io/crs-cloud-conf unchanged
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-new-md-0 created
cluster.cluster.x-k8s.io/capi-new created
machinedeployment.cluster.x-k8s.io/capi-new-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-new-control-plane created
ibmpowervscluster.infrastructure.cluster.x-k8s.io/capi-new created
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capi-new-control-plane created
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capi-new-md-0 created
```

```
% kubectl get machines
NAME                           CLUSTER    NODENAME                       PROVIDERID                                                                                         PHASE     AGE     VERSION
capi-new-control-plane-5krr9   capi-new   capi-new-control-plane-5krr9   ibmpowervs://osa/osa21/10b1000b-da8d-4e18-ad1f-6b2a56a8c130/1791e16f-ef7c-45bd-bc0f-a093afef8da4   Running   8m16s   v1.29.3
capi-new-control-plane-lkrtm   capi-new   capi-new-control-plane-lkrtm   ibmpowervs://osa/osa21/10b1000b-da8d-4e18-ad1f-6b2a56a8c130/264c4a94-5388-4c36-8729-831716d773d7   Running   27m     v1.29.3
capi-new-control-plane-vqs25   capi-new   capi-new-control-plane-vqs25   ibmpowervs://osa/osa21/10b1000b-da8d-4e18-ad1f-6b2a56a8c130/b3466d2b-50d4-47ce-be32-fce86e81ca18   Running   17m     v1.29.3
capi-new-md-0-4gjv2-8n65n      capi-new   capi-new-md-0-4gjv2-8n65n      ibmpowervs://osa/osa21/10b1000b-da8d-4e18-ad1f-6b2a56a8c130/749eb04c-8684-4725-8b29-19a26466ec1b   Running   28m     v1.29.3
```
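With all machines Running, a reasonable follow-up check (a sketch, not part of this PR) is that the workload cluster's nodes now report the INTERNAL-IP/EXTERNAL-IP values populated by the cloud controller manager:

```shell
# Fetch the workload cluster kubeconfig and list nodes with their addresses
clusterctl get kubeconfig capi-new -n default > capi-new.kubeconfig
kubectl --kubeconfig capi-new.kubeconfig get nodes -o wide
```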

However, waiting for a successful run in CI (currently facing a few issues, not related to this change).

k8s-ci-robot commented 5 months ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Amulyam24, mkumatag

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/blob/main/OWNERS)~~ [mkumatag]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
Amulyam24 commented 5 months ago
```
Running Suite: capibm-e2e - /home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e
=========================================================================================================
Random Seed: 1719818456

Will run 2 of 3 specs
------------------------------
[SynchronizedBeforeSuite]
/home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e/suite_test.go:103
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 07/01/24 07:27:03.933
  STEP: Loading the e2e test configuration from "/home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e/config/ibmcloud-e2e-envsubst.yaml" @ 07/01/24 07:27:03.935
  STEP: Creating a clusterctl local repository into "/logs/artifacts" @ 07/01/24 07:27:03.937
  STEP: Reading the ClusterResourceSet manifest /home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e/data/cni/calico/calico.yaml @ 07/01/24 07:27:03.937
  STEP: Setting up the bootstrap cluster @ 07/01/24 07:27:08.252
  INFO: Creating a kind cluster with name "capibm-e2e"
Creating cluster "capibm-e2e" ...
 β€’ Ensuring node image (kindest/node:v1.30.0) πŸ–Ό  ...
 βœ“ Ensuring node image (kindest/node:v1.30.0) πŸ–Ό
 β€’ Preparing nodes πŸ“¦   ...
 βœ“ Preparing nodes πŸ“¦
 β€’ Writing configuration πŸ“œ  ...
 βœ“ Writing configuration πŸ“œ
 β€’ Starting control-plane πŸ•ΉοΈ  ...
 βœ“ Starting control-plane πŸ•ΉοΈ
 β€’ Installing CNI πŸ”Œ  ...
 βœ“ Installing CNI πŸ”Œ
 β€’ Installing StorageClass πŸ’Ύ  ...
 βœ“ Installing StorageClass πŸ’Ύ
  INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind278575388
  INFO: Loading image: "gcr.io/k8s-staging-capi-ibmcloud/cluster-api-ibmcloud-controller:e2e"
  INFO: Image gcr.io/k8s-staging-capi-ibmcloud/cluster-api-ibmcloud-controller:e2e is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 07/01/24 07:28:26.455
  INFO: clusterctl init --config /logs/artifacts/repository/clusterctl-config.yaml --kubeconfig /tmp/e2e-kind278575388 --wait-providers --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure ibmcloud
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-ibmcloud-system/capi-ibmcloud-controller-manager to be available @ 07/01/24 07:28:56.121
  INFO: Creating log watcher for controller capi-ibmcloud-system/capi-ibmcloud-controller-manager, pod capi-ibmcloud-controller-manager-6fd77b74f8-jgtjh, container manager
  INFO: Creating log watcher for controller capi-ibmcloud-system/capi-ibmcloud-controller-manager, pod capi-ibmcloud-controller-manager-6fd77b74f8-jgtjh, container kube-rbac-proxy
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 07/01/24 07:28:56.242
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-659b5fb778-r9fr7, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 07/01/24 07:28:56.254
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-588f8c45cf-z8fzp, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 07/01/24 07:28:56.264
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-7ff88d7c65-ndxbg, container manager
[SynchronizedBeforeSuite] PASSED [112.596 seconds]
------------------------------
S
------------------------------
Workload cluster creation Creating a single control-plane cluster Should create a cluster with 1 worker node and can be scaled
/home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e/e2e_test.go:91
  STEP: Creating a namespace for hosting the "create-workload-cluster" test spec @ 07/01/24 07:28:56.53
  INFO: Creating namespace create-workload-cluster-i6c4np
  INFO: Creating event watcher for namespace "create-workload-cluster-i6c4np"
  STEP: Initializing with 1 worker node @ 07/01/24 07:28:56.541
  INFO: Creating the workload cluster with name "capibm-e2e-6vwd46" using the "powervs-md-remediation" template (Kubernetes v1.29.3, 1 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster capibm-e2e-6vwd46 --infrastructure (default) --kubernetes-version v1.29.3 --control-plane-machine-count 1 --worker-machine-count 1 --flavor powervs-md-remediation
  INFO: Creating the workload cluster with name "capibm-e2e-6vwd46" from the provided yaml
  INFO: Applying the cluster template yaml of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46
Running kubectl apply --kubeconfig /tmp/e2e-kind278575388 -f -
stdout:
configmap/cloud-controller-manager-addon created
configmap/ibmpowervs-cfg created
secret/ibmpowervs-credential created
clusterresourceset.addons.cluster.x-k8s.io/crs-cloud-conf created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capibm-e2e-6vwd46-md-0 created
cluster.cluster.x-k8s.io/capibm-e2e-6vwd46 created
machinedeployment.cluster.x-k8s.io/capibm-e2e-6vwd46-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capibm-e2e-6vwd46-control-plane created
ibmpowervscluster.infrastructure.cluster.x-k8s.io/capibm-e2e-6vwd46 created
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capibm-e2e-6vwd46-control-plane created
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capibm-e2e-6vwd46-md-0 created

  INFO: Waiting for the cluster infrastructure of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 07/01/24 07:28:58.682
  INFO: Waiting for control plane of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be initialized
  INFO: Waiting for the first control plane machine managed by create-workload-cluster-i6c4np/capibm-e2e-6vwd46-control-plane to be provisioned
  STEP: Waiting for one control plane node to exist @ 07/01/24 07:29:13.183
  INFO: Installing a CNI plugin to the workload cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46
Running kubectl apply --kubeconfig /tmp/e2e-kubeconfig2530388303 -f -
stdout:
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

  INFO: Waiting for control plane of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be ready
  INFO: Waiting for control plane create-workload-cluster-i6c4np/capibm-e2e-6vwd46-control-plane to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 07/01/24 07:42:28.885
  STEP: Checking all the control plane machines are in the expected failure domains @ 07/01/24 07:44:28.965
  INFO: Waiting for the machine deployments of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be provisioned
  STEP: Waiting for the workload nodes to exist @ 07/01/24 07:44:28.977
  STEP: Checking all the machines controlled by capibm-e2e-6vwd46-md-0 are in the "<None>" failure domain @ 07/01/24 07:47:59.209
  INFO: Waiting for the machine pools of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be provisioned
  STEP: Scaling worker node to 3 @ 07/01/24 07:47:59.222
  INFO: Creating the workload cluster with name "capibm-e2e-6vwd46" using the "powervs-md-remediation" template (Kubernetes v1.29.3, 1 control-plane machines, 3 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster capibm-e2e-6vwd46 --infrastructure (default) --kubernetes-version v1.29.3 --control-plane-machine-count 1 --worker-machine-count 3 --flavor powervs-md-remediation
  INFO: Creating the workload cluster with name "capibm-e2e-6vwd46" from the provided yaml
  INFO: Applying the cluster template yaml of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46
Running kubectl apply --kubeconfig /tmp/e2e-kind278575388 -f -
stdout:
configmap/cloud-controller-manager-addon unchanged
configmap/ibmpowervs-cfg unchanged
secret/ibmpowervs-credential configured
clusterresourceset.addons.cluster.x-k8s.io/crs-cloud-conf unchanged
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capibm-e2e-6vwd46-md-0 configured
cluster.cluster.x-k8s.io/capibm-e2e-6vwd46 unchanged
machinedeployment.cluster.x-k8s.io/capibm-e2e-6vwd46-md-0 configured
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capibm-e2e-6vwd46-control-plane configured
ibmpowervscluster.infrastructure.cluster.x-k8s.io/capibm-e2e-6vwd46 unchanged
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capibm-e2e-6vwd46-control-plane unchanged
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capibm-e2e-6vwd46-md-0 unchanged

  INFO: Waiting for the cluster infrastructure of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 07/01/24 07:48:01.896
  INFO: Waiting for control plane of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be initialized
  INFO: Waiting for the first control plane machine managed by create-workload-cluster-i6c4np/capibm-e2e-6vwd46-control-plane to be provisioned
  STEP: Waiting for one control plane node to exist @ 07/01/24 07:48:01.915
  INFO: Installing a CNI plugin to the workload cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46
Running kubectl apply --kubeconfig /tmp/e2e-kubeconfig957715167 -f -
stdout:
poddisruptionbudget.policy/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
serviceaccount/calico-node unchanged
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
deployment.apps/calico-kube-controllers unchanged

  INFO: Waiting for control plane of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be ready
  INFO: Waiting for control plane create-workload-cluster-i6c4np/capibm-e2e-6vwd46-control-plane to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 07/01/24 07:48:14.468
  STEP: Checking all the control plane machines are in the expected failure domains @ 07/01/24 07:48:14.476
  INFO: Waiting for the machine deployments of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be provisioned
  STEP: Waiting for the workload nodes to exist @ 07/01/24 07:48:14.489
  STEP: Checking all the machines controlled by capibm-e2e-6vwd46-md-0 are in the "<None>" failure domain @ 07/01/24 07:58:15.233
  INFO: Waiting for the machine pools of cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be provisioned
  STEP: Dumping logs from the "capibm-e2e-6vwd46" workload cluster @ 07/01/24 07:58:15.248
Unable to get logs for workload Cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46: log collector is nil.
  STEP: Dumping all the Cluster API resources in the "create-workload-cluster-i6c4np" namespace @ 07/01/24 07:58:15.248
  STEP: Deleting all clusters in the create-workload-cluster-i6c4np namespace @ 07/01/24 07:58:15.513
  STEP: Deleting cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 @ 07/01/24 07:58:15.52
  INFO: Waiting for the Cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be deleted
  STEP: Waiting for cluster create-workload-cluster-i6c4np/capibm-e2e-6vwd46 to be deleted @ 07/01/24 07:58:15.532
  STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec @ 07/01/24 07:58:35.546
  INFO: Deleting namespace create-workload-cluster-i6c4np
β€’ [1779.025 seconds]
------------------------------
Workload cluster creation Creating a highly available control-plane cluster Should create a cluster with 3 control-plane nodes and 1 worker node
/home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e/e2e_test.go:137
  STEP: Creating a namespace for hosting the "create-workload-cluster" test spec @ 07/01/24 07:58:35.556
  INFO: Creating namespace create-workload-cluster-jojhjh
  INFO: Creating event watcher for namespace "create-workload-cluster-jojhjh"
  STEP: Creating a high available cluster @ 07/01/24 07:58:35.569
  INFO: Creating the workload cluster with name "capibm-e2e-jzi6ex" using the "powervs-md-remediation" template (Kubernetes v1.29.3, 3 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster capibm-e2e-jzi6ex --infrastructure (default) --kubernetes-version v1.29.3 --control-plane-machine-count 3 --worker-machine-count 1 --flavor powervs-md-remediation
  INFO: Creating the workload cluster with name "capibm-e2e-jzi6ex" from the provided yaml
  INFO: Applying the cluster template yaml of cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex
Running kubectl apply --kubeconfig /tmp/e2e-kind278575388 -f -
stdout:
configmap/cloud-controller-manager-addon created
configmap/ibmpowervs-cfg created
secret/ibmpowervs-credential created
clusterresourceset.addons.cluster.x-k8s.io/crs-cloud-conf created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capibm-e2e-jzi6ex-md-0 created
cluster.cluster.x-k8s.io/capibm-e2e-jzi6ex created
machinedeployment.cluster.x-k8s.io/capibm-e2e-jzi6ex-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capibm-e2e-jzi6ex-control-plane created
ibmpowervscluster.infrastructure.cluster.x-k8s.io/capibm-e2e-jzi6ex created
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capibm-e2e-jzi6ex-control-plane created
ibmpowervsmachinetemplate.infrastructure.cluster.x-k8s.io/capibm-e2e-jzi6ex-md-0 created

  INFO: Waiting for the cluster infrastructure of cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 07/01/24 07:58:36.358
  INFO: Waiting for control plane of cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be initialized
  INFO: Waiting for the first control plane machine managed by create-workload-cluster-jojhjh/capibm-e2e-jzi6ex-control-plane to be provisioned
  STEP: Waiting for one control plane node to exist @ 07/01/24 07:59:16.388
  INFO: Installing a CNI plugin to the workload cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex
Running kubectl apply --kubeconfig /tmp/e2e-kubeconfig2152848949 -f -
stdout:
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

  INFO: Waiting for control plane of cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be ready
  INFO: Waiting for the remaining control plane machines managed by create-workload-cluster-jojhjh/capibm-e2e-jzi6ex-control-plane to be provisioned
  STEP: Waiting for all control plane nodes to exist @ 07/01/24 08:09:19.868
  INFO: Waiting for control plane create-workload-cluster-jojhjh/capibm-e2e-jzi6ex-control-plane to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 07/01/24 08:28:20.813
  STEP: Checking all the control plane machines are in the expected failure domains @ 07/01/24 08:30:00.886
  INFO: Waiting for the machine deployments of cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be provisioned
  STEP: Waiting for the workload nodes to exist @ 07/01/24 08:30:00.902
  STEP: Checking all the machines controlled by capibm-e2e-jzi6ex-md-0 are in the "<None>" failure domain @ 07/01/24 08:30:00.913
  INFO: Waiting for the machine pools of cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be provisioned
  STEP: Dumping logs from the "capibm-e2e-jzi6ex" workload cluster @ 07/01/24 08:30:00.929
Unable to get logs for workload Cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex: log collector is nil.
  STEP: Dumping all the Cluster API resources in the "create-workload-cluster-jojhjh" namespace @ 07/01/24 08:30:00.93
  STEP: Deleting all clusters in the create-workload-cluster-jojhjh namespace @ 07/01/24 08:30:01.446
  STEP: Deleting cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex @ 07/01/24 08:30:01.456
  INFO: Waiting for the Cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be deleted
  STEP: Waiting for cluster create-workload-cluster-jojhjh/capibm-e2e-jzi6ex to be deleted @ 07/01/24 08:30:01.472
  STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec @ 07/01/24 08:30:41.502
  INFO: Deleting namespace create-workload-cluster-jojhjh
β€’ [1925.955 seconds]
------------------------------
[SynchronizedAfterSuite]
/home/prow/go/src/github.com/Amulyam24/cluster-api-provider-ibmcloud/test/e2e/suite_test.go:150
  STEP: Tearing down the management cluster @ 07/01/24 08:30:41.511
[SynchronizedAfterSuite] PASSED [1.338 seconds]
------------------------------

Ran 2 of 3 Specs in 3818.917 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 1 Skipped
PASS
```

E2E succeeded; a minor change is needed in the e2e script. Will create a separate PR for that, which will have to be backported.

/unhold