Open PatrickLaabs opened 3 months ago
Ok, the problem was version v0.3.0 of the RKE2 bootstrap and control-plane providers.
Running a fresh bootstrap process with this:
```bash
clusterctl init --infrastructure harvester:v0.1.2 --control-plane rke2:v0.2.2 --bootstrap rke2:v0.2.2
```
This sets the ProviderID on the harvestermachines.
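A quick way to double-check that from the management cluster (the `.spec.providerID` field path is an assumption about the harvestermachines CRD; adjust it if the field lives elsewhere):

```bash
# List all harvestermachines with their ProviderID (field path assumed, see note above)
kubectl get harvestermachines -A \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PROVIDERID:.spec.providerID
```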
Is your cloud provider pod coming up, or healthy at all? The ProviderID needs to be set on the Node(s), I think, for CAPHV to pick it up.
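One way to check that against the workload cluster (plain kubectl; the kubeconfig filename is just a placeholder):

```bash
# Show each Node and its spec.providerID in the workload cluster
kubectl --kubeconfig workload-cluster.kubeconfig get nodes \
  -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID
```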
Hey, I guess it was up and running and also healthy, but I'll do a double-check later.
I can definitely tell that the harvester-cloud-provider is most likely not the cause of the issue, because everything worked as expected as soon as I downgraded from v0.3.0 to v0.2.7 (RKE2 control plane and bootstrap providers).
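For anyone running into the same thing, the downgraded init looked roughly like this for me (infrastructure version taken from my earlier comment; only verified on my own setup):

```bash
# Fresh management-cluster init with the RKE2 providers pinned below v0.3.0
clusterctl init --infrastructure harvester:v0.1.2 --control-plane rke2:v0.2.7 --bootstrap rke2:v0.2.7
```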
https://github.com/rancher-sandbox/cluster-api-provider-rke2/issues/345#issuecomment-2160281114
Haven't tried this one out yet. Does someone have a Harvester cluster at hand for this testing? I am currently doing some heavy testing with mine and don't want to re-do all the deployment work again 😄
@PatrickLaabs I can provide you with some Terraform stuff to get a running Harvester cluster on Equinix!
What happened: When provisioning a cluster to Harvester, following the guide, the nodes (1x control plane, 2x worker) are created, and so is the load balancer instance.
The CAPHV controller keeps telling me:
The control plane has become ready, but the workers have not. The control plane does have a ProviderID when viewing the harvestercluster resource.
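To see which machines are stuck, the Cluster API Machine objects in the management cluster can be listed like this (these are standard CAPI fields, not Harvester-specific):

```bash
# Show each Machine's phase and ProviderID in the management cluster
kubectl get machines -A \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,PROVIDERID:.spec.providerID
```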
What did you expect to happen:
I expected a ProviderID to be set on the workers, so that my cluster becomes ready.
How to reproduce it:
Anything else you would like to add:
I also tried to restart the controller pods on my local kind cluster, but that didn't help.
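The restart itself was nothing fancy, roughly this (the controller namespace and deployment names below are assumptions; check what your install actually created first):

```bash
# Find the provider controller deployments in the kind management cluster
kubectl get deployments -A | grep -Ei 'harvester|rke2|caphv'
# Example restart (namespace and name assumed; substitute what the command above shows)
kubectl -n caphv-system rollout restart deployment caphv-controller-manager
```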
I wonder if this still comes from the tigera-operator deployment. Even though the pod is running and healthy, it seems to throw some "errors":
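To look at those messages again, the operator logs can be tailed like this (default tigera-operator namespace and deployment names from a standard Calico install assumed):

```bash
# Tail the Tigera operator logs in the workload cluster (default install names assumed)
kubectl --kubeconfig workload-cluster.kubeconfig -n tigera-operator \
  logs deployment/tigera-operator --tail=100
```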
Environment:
OS (e.g. from /etc/os-release): macOS M1