kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster

Changes to kube_kubeadm_apiserver_extra_args aren't updated in kube-apiserver.yaml pod #6004

Closed cameronbraid closed 3 years ago

cameronbraid commented 4 years ago

I initially deployed the cluster without any kube_kubeadm_apiserver_extra_args defined

I then added the following in group_vars/k8s-cluster/k8s-cluster.yml:

kube_kubeadm_apiserver_extra_args: 
  service-account-issuer: kubernetes.default.svc
  service-account-signing-key-file: /etc/kubernetes/ssl/sa.key

and re-ran the cluster.yml playbook
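
For reference, re-applying the config from the kubespray repo root looks something like this; the inventory path here is only a placeholder for your own inventory:

 $ ansible-playbook -i inventory/mycluster/hosts.ini --become cluster.yml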

I can see the args are being set in kubeadm-config.yaml

...
apiServer:
  extraArgs:
    anonymous-auth: "True"
    authorization-mode: Node,RBAC
    bind-address: 0.0.0.0
    insecure-port: "0"
    apiserver-count: "3"
    endpoint-reconciler-type: lease
    service-node-port-range: 30000-32767
    kubelet-preferred-address-types: "InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP"
    profiling: "False"
    request-timeout: "1m0s"
    enable-aggregator-routing: "False"
    storage-backend: etcd3
    runtime-config:
    allow-privileged: "true"
    service-account-issuer: "kubernetes.default.svc"
    service-account-signing-key-file: "/etc/kubernetes/ssl/sa.key"
...

But the kube-apiserver.yaml isn't updated

...
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.20.103
    - --allow-privileged=true
    - --anonymous-auth=True
    - --apiserver-count=3
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-aggregator-routing=False
    - --enable-bootstrap-token-auth=true
    - --endpoint-reconciler-type=lease
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-node03.pem
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-node03-key.pem
    - --etcd-servers=https://10.0.20.101:2379,https://10.0.20.102:2379,https://10.0.20.103:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
    - --profiling=False
    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
    - --request-timeout=1m0s
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --runtime-config=
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-cluster-ip-range=10.233.0.0/18
    - --service-node-port-range=30000-32767
    - --storage-backend=etcd3
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.17.5
    imagePullPolicy: IfNotPresent
...
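
A quick way to compare the two files on a control-plane node (the paths below are the usual kubespray defaults and may differ between versions):

 $ grep -A 30 extraArgs /etc/kubernetes/kubeadm-config.yaml
 $ grep service-account- /etc/kubernetes/manifests/kube-apiserver.yaml
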
cameronbraid commented 4 years ago

using kubespray from master

jaimehrubiks commented 4 years ago

I'm using a different, older version and I have the same issue. Any workaround, @cameronbraid?

cameronbraid commented 4 years ago

The workaround is to edit kube-apiserver.yaml by hand
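
That is, append the missing flags to the static-pod manifest on each control-plane node; the kubelet watches /etc/kubernetes/manifests and recreates the pod when the file changes. A minimal sketch of the edit, reusing the values from the report above:

# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --service-account-issuer=kubernetes.default.svc
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
    ...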

linkvt commented 4 years ago

I added the same variables with kubespray 2.11.0 and it worked without issues when running the upgrade playbook with the following variables:

kube_kubeadm_apiserver_extra_args:
  service-account-issuer: kubernetes.default.svc
  service-account-signing-key-file: /etc/kubernetes/ssl/sa.key

Difference in the apiserver YAMLs; the change was only applied in DEV, while INT shows the setup without the variables:

 $ ansible -i inventories/xyz-dev/hosts.ini kube-master -m shell -a 'grep service-account- /etc/kubernetes/manifests/kube-apiserver.yaml' 
dev-node-1 | CHANGED | rc=0 >>
    - --service-account-issuer=kubernetes.default.svc
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
dev-node-2 | CHANGED | rc=0 >>
    - --service-account-issuer=kubernetes.default.svc
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
dev-node-3 | CHANGED | rc=0 >>
    - --service-account-issuer=kubernetes.default.svc
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
 $ ansible -i inventories/xyz-int/hosts.ini kube-master -m shell -a 'grep service-account- /etc/kubernetes/manifests/kube-apiserver.yaml' 
int-node-1 | CHANGED | rc=0 >>
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
int-node-2 | CHANGED | rc=0 >>
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
int-node-3 | CHANGED | rc=0 >>
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub

The pods were also reloaded correctly.

So it might be that this bug appeared in more recent versions, but I can't confirm this as I didn't test it.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/6004#issuecomment-731592636):

> Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
iamjoemccormick commented 3 years ago

> I added the same variables with kubespray 2.11.0 and it worked without issues when running the upgrade playbook with the following variables [...] The pods were also reloaded correctly. So it might be that this bug appeared in more recent versions but I can't confirm this as I didn't test it.

Some clarification for anyone who stumbles across this in the future. As the original submitter indicates, setting kube_kubeadm_apiserver_extra_args and then running the cluster.yml playbook appears to have no effect.

However, I noticed this response specifically says everything worked as expected when using the upgrade-cluster.yml playbook, which, based on my testing with Kubespray 2.15, does update /etc/kubernetes/manifests/kube-apiserver.yaml.

TL;DR - You have to use upgrade-cluster.yml to apply kube_kubeadm_apiserver_extra_args.
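
For reference, a typical invocation looks something like this (placeholder inventory path); keeping kube_version unchanged should avoid an actual version bump while still re-rendering the control-plane manifests:

 $ ansible-playbook -i inventory/mycluster/hosts.ini --become upgrade-cluster.yml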

dobarden commented 5 months ago

Thank you for your helpful comments! I'm trying to add --audit-webhook-config-file to kube-apiserver.yaml with kubespray 2.22, and yep, it works only with the upgrade-cluster.yml playbook.
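
For that use case the group_vars entry would look roughly like this; the webhook config path is only an example, and it needs to be a path the kube-apiserver container can actually see (e.g. a directory the static pod already mounts, or one added as an extra volume in the kubeadm config):

kube_kubeadm_apiserver_extra_args:
  audit-webhook-config-file: /etc/kubernetes/ssl/audit-webhook-kubeconfig.yaml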