rancherfederal / rke2-aws-tf

MIT License

Setting kube-apiserver.yaml values on deployment #48

Closed kdalporto closed 1 year ago

kdalporto commented 2 years ago

I'm working through implementing IAM Roles for Service Accounts on an RKE2 deployment, which requires updating some of the arguments in the kube-apiserver.yaml file. The problem is that the file is not persistent: if the main node goes down and is replaced, the file reverts to the old configuration.

Is there a simple way to update arguments on deployment or is the kube-apiserver.yaml file configured somewhere in the repo that could be updated prior to deployment?

Essentially what needs to be configured is:

spec:
  containers:
  - command:
    - kube-apiserver
    - --service-account-issuer=<OIDC provider URL>
    - --service-account-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer-pkcs8.pub
    - --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key
    - --service-account-signing-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer.key
    volumeMounts:
    - mountPath: /var/lib/rancher/rke2/server/irsa
      name: dir3
  volumes:
  - hostPath:
      path: /var/lib/rancher/rke2/server/irsa
      type: DirectoryOrCreate
    name: dir3

The only real solution I've found that might work is updating rke2-init.sh to modify the file on the instance manually, or calling the RKE2 server CLI to inject those values.

Is there a better/supported way to do this that I'm not seeing?
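For reference, the manual fallback could be a sketch like the following in rke2-init.sh, appending the flags to the RKE2 config file before the rke2-server service starts. /etc/rancher/rke2/config.yaml is the default path RKE2 reads; the list below is abbreviated from the pod spec above, and the issuer is still a placeholder:

```shell
# Sketch of the manual approach: append kube-apiserver flags to the RKE2
# config file so they survive node replacement (runs from rke2-init.sh).
RKE2_CONFIG="${RKE2_CONFIG:-/etc/rancher/rke2/config.yaml}"
mkdir -p "$(dirname "$RKE2_CONFIG")"
cat >> "$RKE2_CONFIG" <<'EOF'
kube-apiserver-arg:
  - "service-account-issuer=<OIDC provider URL>"
  - "service-account-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer-pkcs8.pub"
  - "service-account-signing-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer.key"
EOF
```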

adamacosta commented 2 years ago

Since the kube-apiserver runs in a container, its startup arguments come from the RKE2 config file, which this Terraform module exposes via the rke2_config variable. Your Terraform configuration should contain something like this:

module "rke2" {
  ...
  rke2_config = <<-EOT
    kube-apiserver-arg:
      - "service-account-issuer=<OIDC provider URL>"
      - "service-account-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer-pkcs8.pub"
      - "service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key"
      - "service-account-signing-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer.key"
  EOT
  ...
}

The config file documentation is at https://docs.rke2.io/install/install_options/install_options/#configuration-file, and the full list of arguments that can be passed to the rke2 server is at https://docs.rke2.io/install/install_options/server_config/.
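To make the mapping concrete: RKE2 renders each "key=value" item under kube-apiserver-arg as a --key=value flag on the kube-apiserver command line in the static pod manifest it generates, which is why no volume definitions are needed in the config. A purely local illustration (the sed line is a demo, not RKE2's actual mechanism, and the issuer URL is made up):

```shell
# Write a sample kube-apiserver-arg block to a scratch file.
cat > /tmp/rke2-args.yaml <<'EOF'
kube-apiserver-arg:
  - "service-account-issuer=https://oidc.example.com"
  - "service-account-signing-key-file=/var/lib/rancher/rke2/server/irsa/sa-signer.key"
EOF
# Each quoted list item becomes a --key=value flag on the apiserver command line:
sed -n 's/^  - "\(.*\)"$/--\1/p' /tmp/rke2-args.yaml
```

On a real server node you can confirm the result by grepping the generated manifest, which by default lives at /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml.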

kdalporto commented 2 years ago

Thanks @adamacosta I will look into this!

kdalporto commented 2 years ago

@adamacosta I tried this out and it worked! It seems I didn't even need to specify the volume and mounts, which I thought would be required for the pod to access the files on the master node. The only concern I have is that it won't let you specify two service-account-key-file entries. The sa-signer-pkcs8.pub is the new key I need for IAM Roles for Service Accounts (IRSA) to work.

Do you know whether the default TLS service.key even matters here, or will the new service-account-key-file I provided be used for everything the default key was used for? I can successfully deploy RKE2 while specifying only my new key, but I'm not sure how to tell or test whether replacing it will cause issues elsewhere.
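One way to see what the cluster is actually doing is to decode the payload segment of a service account JWT and inspect its iss claim. The sketch below builds a sample token locally so the decode step is self-contained; the issuer and claims are made up for illustration:

```shell
# Build a fake JWT payload segment (base64url: '+'->'-', '/'->'_', padding stripped).
payload='{"iss":"https://oidc.example.com","sub":"system:serviceaccount:default:default"}'
b64url=$(printf '%s' "$payload" | base64 | tr -d '=\n' | tr '/+' '_-')
token="header.$b64url.signature"

# Pull out the middle segment and restore standard base64 before decoding:
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#seg} % 4 )) in 2) seg="$seg==" ;; 3) seg="$seg=" ;; esac
decoded=$(printf '%s' "$seg" | base64 -d)
printf '%s\n' "$decoded"
```

Against a real cluster, kubectl create token default (kubectl 1.24+) prints a token whose second segment can be decoded the same way, which at least confirms which issuer is in effect after the change.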

adamacosta commented 2 years ago

I suspect it will just use your key rather than the generated one, but I'm not totally certain of that. You might consider asking on the Rancher community forum or the Rancher Users Slack to see if you can get an answer from the RKE2 developers directly.