k3s-io / k3s-ansible

Apache License 2.0

The same config_yaml is used on server and agents resulting in broken clusters #270

Closed · zen closed this 10 months ago

zen commented 10 months ago

Consider the following config_yaml:

    config_yaml: |
      node-label:
        - cluster=onprem-sin-sim
        - pve_cluster=hpc180-smha-01
      kube-apiserver-arg:
        - service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key
        - service-account-key-file="{{ oidc_pkcs_key }}"
        - service-account-signing-key-file="{{ oidc_priv_key }}"
        - api-audiences="{{ oidc_aud }}"
        - service-account-issuer="{{ oidc_sa_issuer }}"

kube-apiserver-arg is a server-only option, so when the same config_yaml is written to the agents, the k3s-agent service fails to start with the following error:

TASK [k3s_agent : Enable and check K3s service] *****************************************************************************************************************************************************************************************
fatal: [10.0.60.51]: FAILED! => {"changed": false, "msg": "Unable to restart service k3s-agent: Job for k3s-agent.service failed because the control process exited with error code.\nSee \"systemctl status k3s-agent.service\" and \"journalctl -xeu k3s-agent.service\" for details.\n"}
fatal: [10.0.60.52]: FAILED! => {"changed": false, "msg": "Unable to restart service k3s-agent: Job for k3s-agent.service failed because the control process exited with error code.\nSee \"systemctl status k3s-agent.service\" and \"journalctl -xeu k3s-agent.service\" for details.\n"}

leaving the cluster broken.

config_yaml should be split so that servers and agents can each receive only the configuration that is valid for their role.
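One possible shape for this is separate per-role variables, with server-only flags such as kube-apiserver-arg kept out of the agent config. This is only a sketch: the variable names `server_config_yaml` and `agent_config_yaml` are hypothetical and not necessarily what the role actually defines.

```yaml
# Hypothetical split of the config above into per-role variables.
# The names server_config_yaml / agent_config_yaml are illustrative only.

# Rendered to /etc/rancher/k3s/config.yaml on server nodes only:
server_config_yaml: |
  node-label:
    - cluster=onprem-sin-sim
    - pve_cluster=hpc180-smha-01
  kube-apiserver-arg:
    - service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key
    - service-account-key-file="{{ oidc_pkcs_key }}"
    - service-account-signing-key-file="{{ oidc_priv_key }}"
    - api-audiences="{{ oidc_aud }}"
    - service-account-issuer="{{ oidc_sa_issuer }}"

# Rendered on agent nodes only — no server-only flags, so
# k3s-agent.service can start cleanly:
agent_config_yaml: |
  node-label:
    - cluster=onprem-sin-sim
    - pve_cluster=hpc180-smha-01
```

Alternatively, the same effect could be achieved with the existing single variable by setting different values of config_yaml in the server and agent inventory groups (e.g. via group_vars).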