Closed cameronbraid closed 3 years ago
using kubespray from master
I'm using a different old version and I have the same issue. Any workaround @cameronbraid ?
The workaround is to edit kube-apiserver.yaml by hand.
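For reference, the manual workaround amounts to adding the flags directly to the kube-apiserver static pod manifest on each control-plane node; the kubelet restarts the pod automatically when the manifest changes. A sketch (excerpt only; the ssl paths match the output shown later in this thread):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, illustrative)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-account-issuer=kubernetes.default.svc
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
```

Note that kubeadm may overwrite this file on the next upgrade, which is why fixing the kubespray variables is the preferred route.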
I added the same variables with kubespray 2.11.0 and it worked without issues when running the upgrade playbook with the following variables:
kube_kubeadm_apiserver_extra_args:
  service-account-issuer: kubernetes.default.svc
  service-account-signing-key-file: /etc/kubernetes/ssl/sa.key
The apiserver manifests differ: the change was only run in DEV, and INT still shows the setup without the variables:
$ ansible -i inventories/xyz-dev/hosts.ini kube-master -m shell -a 'grep service-account- /etc/kubernetes/manifests/kube-apiserver.yaml'
dev-node-1 | CHANGED | rc=0 >>
- --service-account-issuer=kubernetes.default.svc
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
dev-node-2 | CHANGED | rc=0 >>
- --service-account-issuer=kubernetes.default.svc
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
dev-node-3 | CHANGED | rc=0 >>
- --service-account-issuer=kubernetes.default.svc
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
$ ansible -i inventories/xyz-int/hosts.ini kube-master -m shell -a 'grep service-account- /etc/kubernetes/manifests/kube-apiserver.yaml'
int-node-1 | CHANGED | rc=0 >>
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
int-node-2 | CHANGED | rc=0 >>
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
int-node-3 | CHANGED | rc=0 >>
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
The pods were also reloaded correctly.
So this bug might have appeared in more recent versions, but I can't confirm that, as I didn't test it.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Some clarification for anyone who stumbles across this in the future. As the original submitter indicates, setting kube_kubeadm_apiserver_extra_args and then running playbook.yml appears to have no effect.
However, I noticed this response specifically says everything worked as expected when using the upgrade-cluster.yml playbook, which, based on my testing with Kubespray 2.15, does update /etc/kubernetes/manifests/kube-apiserver.yaml.
TL;DR: you have to use upgrade-cluster.yml to apply kube_kubeadm_apiserver_extra_args.
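As a sketch, the run that actually applies the extra args would look something like this (the inventory path mirrors the ones used earlier in this thread; flags are illustrative, not taken from the issue):

```shell
# upgrade-cluster.yml regenerates the kube-apiserver static pod manifest,
# so kube_kubeadm_apiserver_extra_args is picked up here (unlike cluster.yml).
ansible-playbook -i inventories/xyz-dev/hosts.ini \
  --become --become-user=root \
  upgrade-cluster.yml
```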
Thank you for your helpful comments! I'm trying to add --audit-webhook-config-file to the kube-apiserver.yaml with kubespray 2.22 and yep, it works only with the upgrade-cluster.yml playbook.
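For that case, the variable definition would look something like this (the webhook config path is a placeholder, not taken from the thread):

```yaml
# group_vars: pass the audit webhook flag through to kubeadm
kube_kubeadm_apiserver_extra_args:
  audit-webhook-config-file: /etc/kubernetes/audit-webhook.yaml  # placeholder path
```

Note that the referenced file must also exist on (and be mounted into) the apiserver pods on the control-plane nodes; the variable only passes the flag through.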
I initially deployed the cluster without any kube_kubeadm_apiserver_extra_args defined.
I then added them in group_vars/k8s-cluster/k8s-cluster.yml and re-ran the cluster.yaml playbook.
I can see the args are being set in kubeadm-config.yaml, but the kube-apiserver.yaml isn't updated.
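A quick way to confirm the mismatch described above, run on a control-plane node (the kubeadm-config.yaml path is where kubespray typically writes it, but verify on your deployment):

```shell
# The args show up in the kubeadm config...
grep service-account- /etc/kubernetes/kubeadm-config.yaml
# ...but not in the static pod manifest after running cluster.yml:
grep service-account- /etc/kubernetes/manifests/kube-apiserver.yaml
```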