kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Ability to Set CoreDNS Replicas in Cluster Spec #7726

Status: Closed. daviddyball closed this issue 4 years ago.

daviddyball commented 5 years ago

I'd love to have the ability to either: a) configure an HPA on the CoreDNS pods so that they scale with CPU usage, or b) edit my cluster spec and run N replicas, e.g.

    spec:
      kubeDNS:
        provider: CoreDNS
        config:
          replicas: 4

If this is already possible, it doesn't appear to be listed in the cluster_spec docs.
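For illustration, option (a) could look roughly like the HorizontalPodAutoscaler below. This is only a sketch under a couple of assumptions: that CoreDNS runs as a Deployment named coredns in kube-system (the name can differ between kops versions) and that its pods declare CPU requests, which the HPA needs in order to compute utilization.

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: coredns
      namespace: kube-system
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: coredns  # assumed Deployment name; check what your cluster actually runs
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

Note that if the kube-dns-autoscaler addon is already managing the CoreDNS replica count, an HPA would fight with it over spec.replicas, so only one of the two should own scaling.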

joshbranham commented 5 years ago

You are correct that this functionality is not exposed via kops today. I would suggest following the steps here and changing the kube-dns-autoscaler to point to your CoreDNS deployment. You can then modify its ConfigMap to change the ratio you would like. Does that cover your ask?

We could, in fact, expose a replicas key and map that to the CoreDNS deployment, but I'm unsure how much of this functionality we want to bubble up into the kops API spec.
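For reference, the retargeting suggested above would roughly mean pointing the autoscaler at the CoreDNS Deployment and then tuning its ConfigMap. This is a sketch, assuming the kube-dns-autoscaler addon runs the upstream cluster-proportional-autoscaler and uses a ConfigMap named kube-dns-autoscaler in kube-system; names and defaults may differ between kops versions.

    # 1. In the kube-dns-autoscaler Deployment, change the autoscaler's target, e.g.
    #      --target=Deployment/coredns   (assumed CoreDNS Deployment name)
    # 2. Tune the scaling ratio via the autoscaler's ConfigMap:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns-autoscaler  # assumed name used by the addon
      namespace: kube-system
    data:
      linear: |-
        {
          "coresPerReplica": 256,
          "nodesPerReplica": 16,
          "min": 2,
          "preventSinglePointFailure": true
        }

In linear mode the autoscaler sets replicas to roughly max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)), clamped by min/max, so lowering nodesPerReplica yields more CoreDNS replicas per node.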

daviddyball commented 5 years ago

> You are correct that this functionality is not exposed via kops today. I would suggest following the steps here and changing the kube-dns-autoscaler to point to your CoreDNS deployment. You can then modify its ConfigMap to change the ratio you would like. Does that cover your ask?

I guess the use-case is more about removing the extra step required after spinning up a new cluster. Having it baked into the kops ClusterSpec directly means I can set the replica count in my config and fire-and-forget. It's more of a convenience feature, really.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kops/issues/7726#issuecomment-593082675):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.