Closed: PatrickLaabs closed this issue 2 months ago
I also tried version 0.1.0 of the provider, with the RKE2 bootstrap and control plane providers at v0.2.0, but with these versions I do not even get to the point of a VM being created inside my Harvester installation 😢
Ok, after some investigation, it seems to work. I had to do some tweaking on my Harvester instance (since I am running a single-node cluster).
It just took a while for Calico to recognize the IP of the Service.
Another update on this one:
I figured out that using the latest version of the RKE2 control plane and bootstrap providers (version 0.3.0) causes issues with setting the provider IDs for the created nodes. Version 0.2.2 sets the provider IDs correctly.
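To confirm whether the provider IDs were actually set, the node spec can be inspected directly; this is a generic check, not specific to the RKE2 providers:

```shell
# List each node together with its providerID; an empty PROVIDERID column
# means the infrastructure provider never patched the node.
kubectl get nodes -o custom-columns='NAME:.metadata.name,PROVIDERID:.spec.providerID'
```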
I created a new cluster with these versions. The tigera-operator may still fail at the beginning, but I just let it do its thing for a while and then restarted the pod. Now it is working.
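Instead of deleting the pod by hand, a rollout restart does the same thing in one line (assuming the default tigera-operator namespace and Deployment name):

```shell
# Recreate the operator pod; the Deployment controller brings up a fresh one.
kubectl -n tigera-operator rollout restart deployment tigera-operator
```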
Hell, I love this :D Maybe someone can help me with this:
Now that the tigera-operator seems to work, the pod
calico-system calico-node-x7p65 ● 0/1 Init:CrashLoopBackOff
keeps crash-looping on my `test-rk-cp-machine-ljlrl` node.
On the other node, `test-rk-workers-cjtjl-mlwgr`, it is up and running:
calico-system calico-node-kxv5l ● 1/1 Running
install-cni 2024-06-07 13:04:23.740 [INFO][1] cni-installer/<nil> <nil>: /host/secondary-bin-dir is not writeable, skipping
install-cni W0607 13:04:23.740567 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
install-cni 2024-06-07 13:04:23.749 [ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.43.0.1:443/api/v1/namespaces/calico-system/serviceaccounts/calico-node/token": dial tcp 10.43.0.1:443: connect: network is unreachable
After some investigation, I can tell that it was a layer-8 problem: I simply under-sized my control plane deployment.
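For anyone hitting the same thing: the control plane VM size comes from the machine template. A rough sketch of where to bump it follows; the field names and values here are assumptions and should be verified against the CRD of the installed provider version:

```yaml
# Hypothetical excerpt of a HarvesterMachineTemplate; verify field names
# against `kubectl explain harvestermachinetemplate.spec` before applying.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: HarvesterMachineTemplate
metadata:
  name: test-rk-cp
spec:
  template:
    spec:
      cpu: 4        # assumed: more headroom for RKE2 + Calico on a control plane
      memory: 8Gi   # assumed: doubled from an under-sized default
```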
What happened: When I try to provision a Kubernetes cluster, as described in the README.md, my control plane becomes ready and I am able to connect to the new cluster with the kubeconfig.
Some pods start, but the important one, tigera-operator, keeps restarting:
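To get at the crash reason, the logs of the previous (crashed) container are usually the most useful; the pod name below is a placeholder:

```shell
# Find the operator pod, then pull the logs of the crashed container instance.
kubectl -n tigera-operator get pods
kubectl -n tigera-operator logs <tigera-operator-pod> --previous
```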
What did you expect to happen:
How to reproduce it:
Anything else you would like to add: I am currently trying to deploy a cluster with one control plane node on Harvester, but the error also occurs when I try to deploy 1-2 control planes with 1 worker node.
Environment:
- OS (e.g. from /etc/os-release): macOS