[Open] codersergg opened this issue 1 month ago
Hi @codersergg, thanks for your feedback. Could you please provide more information about
Also, this workaround may help: set `rollback` to `false` to keep the instance running even if AutoK3s fails to deploy the K3s cluster, so you can debug it.
```sh
autok3s -d create -p aws \
    ... \
    --master 1 \
    --cloud-controller-manager \
    --iam-instance-profile-control my-iam-policy-for-control-plane \
    --iam-instance-profile-worker my-iam-policy-for-node \
    --rollback false
```
Then check the logs of the `aws-cloud-controller-manager` pod on the AWS instance for messages like:

```
routecontroller.go:52] Couldn't reconcile node routes: error listing routes: unable to find route table for AWS cluster: kubernetes
```
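One way to pull those logs is via `kubectl logs` against the workload. This is a sketch: it assumes the controller runs as a DaemonSet named `aws-cloud-controller-manager` in the `kube-system` namespace, which may differ in your install.

```shell
# Stream logs from one pod of the DaemonSet (names/namespace are assumptions).
kubectl -n kube-system logs daemonset/aws-cloud-controller-manager

# Or grep just the route-reconciliation errors:
kubectl -n kube-system logs daemonset/aws-cloud-controller-manager | grep routecontroller
```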
If so, use `kubectl edit` to add the argument `--configure-cloud-routes=false` to the `aws-cloud-controller-manager` DaemonSet.

Following the instructions, I disabled `configure-cloud-routes`, and after that I was able to successfully join the worker node.
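For reference, the DaemonSet edit described above can also be applied non-interactively with `kubectl patch`. This is a sketch under assumptions: namespace `kube-system`, DaemonSet name `aws-cloud-controller-manager`, and the container at index 0 already having an `args` list.

```shell
# Append --configure-cloud-routes=false to the first container's args
# (namespace, DaemonSet name, and container index are assumptions).
kubectl -n kube-system patch daemonset aws-cloud-controller-manager \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--configure-cloud-routes=false"}]'
```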
This looks a bit odd. Why weren't cloud routes configured automatically?
I'm glad the workaround helps, and I can confirm this is a bug after upgrading aws-cloud-provider to v1.27.1.

To use cloud routes, the VPC route table needs to be tagged with `kubernetes.io/cluster/<cluster-id>`.
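Tagging the route table can be done with the AWS CLI. A sketch, where the route table ID and `<cluster-id>` are placeholders you must substitute, and the `owned` value is an assumption (cloud providers commonly accept `owned` or `shared` for this tag):

```shell
# Tag the VPC route table so the cloud controller can find it
# (rtb-... and <cluster-id> are placeholders; Value=owned is an assumption).
aws ec2 create-tags \
  --resources rtb-0123456789abcdef0 \
  --tags "Key=kubernetes.io/cluster/<cluster-id>,Value=owned"
```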
> This looks a bit odd. Why weren't cloud routes configured automatically?
Yes, cloud routes should be configured automatically. We will fix this in the next version.
I successfully deployed the cluster using the UI. I also successfully deployed it using the generated CLI command. However, when I add the following settings to the command:

After configuring the policies, I receive the error: `Job for k3s-agent.service failed because the control process exited with error code.` Setting the flag `--disable=servicelb` does not resolve the issue.