bogd closed this issue 2 months ago
Hmm... it seems the following error is the root cause:
can not mix '--config' with arguments [allow-experimental-upgrades certificate-renewal etcd-upgrade force yes]
I think we need to fix kubeadm-upgrade.yml.
I think we need to upgrade the kubeadm configuration from v1beta3 to v1beta4, and use UpgradeApplyConfiguration instead of command-line arguments. https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/
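For reference, the idea would be to move the flags currently passed on the command line into an UpgradeConfiguration document, roughly like this (a sketch based on the linked v1beta4 reference; the field names are my reading of those docs and should be verified against them):

```yaml
# Sketch only: field names taken from the kubeadm v1beta4 API reference
# linked above; double-check them before relying on this.
apiVersion: kubeadm.k8s.io/v1beta4
kind: UpgradeConfiguration
apply:
  allowExperimentalUpgrades: true
  certificateRenewal: true
  etcdUpgrade: true
  forceUpgrade: true
```

The flags listed in the error message (allow-experimental-upgrades, certificate-renewal, etcd-upgrade, force, yes) would then no longer need to be passed on the command line.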
But it seems that v1beta4 is not supported yet.
I asked a question at https://github.com/kubernetes/kubeadm/issues/3084#issuecomment-2209123104
I got an answer: https://github.com/kubernetes/kubeadm/issues/3084#issuecomment-2209300846
I think we need to remove the --config option from kubeadm upgrade. Do you all have any concerns about removing the option?
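Concretely, the task in kubeadm-upgrade.yml would keep only the flags and drop --config, roughly like this (a hypothetical sketch, not the actual Kubespray task; the task name and variable names are illustrative):

```yaml
# Hypothetical sketch of the task shape after removing --config;
# task and variable names are illustrative, not Kubespray's actual ones.
# The flags match those listed in the error message above.
- name: Kubeadm | Upgrade first control plane node
  command: >-
    {{ bin_dir }}/kubeadm upgrade apply -y v{{ kube_version }}
    --certificate-renewal=true
    --allow-experimental-upgrades
    --etcd-upgrade=true
    --force
```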
I opened a PR to fix this.
Kubespray master is broken because of this issue.
I confirm the same issue with master at commit dd51ef6f.
The fix from @tmurakam worked fine for me.
I can also confirm that the PR mentioned above works. I created a new cluster this morning and then upgraded it afterwards.
The referenced PR has the side effect that variables modified in the kubeadm-config file are no longer reflected in the manifests. Example: modify the kube_scheduler_bind variable in the playbook. The variable is correctly set in the kubeadm-config.yaml file, but the corresponding kube-scheduler.yaml manifest is not modified, so the configuration is not applied.
This is, in my opinion, a regression.
@ArnCo I think we can no longer change the configuration during an upgrade, because kubeadm does not accept a kubeadm-config.yaml file when upgrading. If we want to change the configuration, I think we need to run Kubespray with the new configuration first, without upgrading, and then upgrade the cluster without configuration changes. Please let me know if there is a better way.
@tmurakam Well, I'm fiddling with our cluster right now. It seems that the kubeadm upgrade command was never meant to reconfigure the cluster; my bad.
To apply the changes to our cluster, I backed up the /etc/kubernetes folder and ran
kubeadm init phase control-plane scheduler --config /etc/kubernetes/kubeadm-config.yaml
This had the effect of regenerating the manifests and, consequently, applying my changes. As far as I can tell, Kubespray does not execute kubeadm init if the manifest files already exist.
What happened?
Attempted to upgrade a cluster from v1.29.3 to v1.30.2. The upgrade playbook fails on kubeadm upgrade apply, with error can not mix '--config' with arguments [allow-experimental-upgrades certificate-renewal etcd-upgrade force yes], in this task.
What did you expect to happen?
Successful upgrade of the cluster
How can we reproduce it (as minimally and precisely as possible)?
Attempt to upgrade a cluster from v1.29 to v1.30
OS
Version of Ansible
Version of Python
Version of Kubespray (commit)
474b259cf
Network plugin used
calico
Full inventory with variables
[ Removed, since it was huge and was making the issue difficult to read. Will provide a gist on request, if needed ]
Command used to invoke ansible
ansible-playbook on custom playbook that imports kubespray/playbooks/upgrade_cluster.yml
Output of ansible run
Anything else we need to know
No response