kubernetes-retired / kube-aws

[EOL] A command-line tool to declaratively manage Kubernetes clusters on AWS
Apache License 2.0

Run kube2iam on controllers #823

Closed · cknowles closed this 7 years ago

cknowles commented 7 years ago

I think we should add this to the kube2iam DaemonSet so it runs on controllers as well. I wanted to check first whether there are reasons we would not want this when kube2iam is enabled, i.e. does this need to be a switch, or can we simply enable it across controllers whenever kube2iam is switched on? @camilb @mumoshu.

tolerations:
  # Tolerate the taints kube-aws places on controller nodes so the
  # kube2iam pods can be scheduled there too.
  - key: node.alpha.kubernetes.io/ismaster
    effect: NoSchedule
  - key: node.alpha.kubernetes.io/role
    operator: Equal
    value: master
    effect: NoSchedule
cknowles commented 7 years ago

Or align with what we do for the node drainer:

tolerations:
  # Tolerate any NoSchedule/NoExecute taint so the DaemonSet runs on
  # every node, and tolerate the CriticalAddonsOnly taint explicitly.
  - operator: Exists
    effect: NoSchedule
  - operator: Exists
    effect: NoExecute
  - key: CriticalAddonsOnly
    operator: Exists
camilb commented 7 years ago

@c-knowles One reason I see is that kube-resources-autosave will fail. We are using the --auto-discover-default-role flag on kube2iam, which lets pods without a role annotation assume IAMRoleWorker, and that role has permission to write to S3. To solve this, we would have to add S3 write permissions to IAMRoleController as well.
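For reference, the relevant args on the kube2iam container look roughly like the sketch below; the host interface value is an assumption and depends on the CNI plugin in use:

args:
  # Pods without an iam.amazonaws.com/role annotation fall back to the
  # node's instance role: IAMRoleWorker on workers, IAMRoleController on
  # controllers, which is why the autosaver would break there.
  - --auto-discover-default-role
  - --host-ip=$(HOST_IP)
  - --host-interface=cni0  # assumption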

mumoshu commented 7 years ago

@c-knowles In addition to that, how about also letting users configure explicitly where the kube2iam pods can be scheduled, like we currently do for cluster-autoscaler?

What we do for CA today is as follows:
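In rough pod-spec terms it looks like the sketch below (the node label key here is hypothetical, not the literal cluster.yaml excerpt):

nodeSelector:
  # Hypothetical label marking nodes eligible to run cluster-autoscaler.
  kube-aws.coreos.com/cluster-autoscaler: "true"
tolerations:
  - key: node.alpha.kubernetes.io/role
    operator: Equal
    value: master
    effect: NoSchedule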

This way, cluster-autoscaler is scheduled only on nodes eligible to run it. One pitfall: cluster.yaml gets more verbose 😃

cknowles commented 7 years ago

@mumoshu I'm not sure we need that much flexibility. When we spoke the other day, you mentioned removing --auto-discover-default-role; if we do that, then we should probably create separate roles for the autosaver and other components when kube2iam is enabled.
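For instance, the autosaver could then get its own narrowly scoped role through the standard kube2iam pod annotation; the role and image names below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: kube-resources-autosave
  annotations:
    # kube2iam assumes this role for the pod instead of the node's default role.
    iam.amazonaws.com/role: kube-resources-autosave-s3-writer  # hypothetical
spec:
  containers:
    - name: autosave
      image: example/kube-resources-autosave:latest  # hypothetical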

mumoshu commented 7 years ago

@c-knowles Thanks for the confirmation 👍 I agree with you now. If you want to enable kube2iam, enforcing that it is deployed to every node is the way to go. Otherwise, there could be ways for a vulnerable container to exploit the IAM permissions associated with its node.
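To make the risk concrete: kube2iam only protects a node it actually runs on, because the protection is an iptables rule that redirects pod traffic bound for the EC2 metadata API to the local kube2iam agent. A sketch of the relevant DaemonSet pod spec bits (the image tag is an assumption):

spec:
  hostNetwork: true  # needed so kube2iam can install its iptables rule on the node
  containers:
    - name: kube2iam
      image: jtblin/kube2iam:0.10.0  # assumed tag
      securityContext:
        privileged: true  # required to modify iptables
      args:
        # Redirect pod traffic to 169.254.169.254 (the metadata API) to
        # kube2iam; on a node without a kube2iam pod, containers can hit
        # the metadata API directly and use the node's IAM role.
        - --iptables=true
        - --host-ip=$(HOST_IP)
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP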

Btw, back to the original topic, I have no objection to adding the tolerations you've suggested. Let's align the set of tolerations with the one for node drainer.

camilb commented 7 years ago

> Btw, back to the original topic, I have no objection to adding the tolerations you've suggested. Let's align the set of tolerations with the one for node drainer.

@mumoshu This was already fixed in #879.

mumoshu commented 7 years ago

@camilb Thanks. I now remember that I've reviewed it 😉 It seems like I was buried in issues.

mumoshu commented 7 years ago

@c-knowles Can we close this as resolved, or would #912 by any chance be a prerequisite for you to run kube2iam on controller nodes as well?

cknowles commented 7 years ago

@mumoshu I'm happy we've covered this one in https://github.com/kubernetes-incubator/kube-aws/pull/879.