kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

[cabpk] Add Support for Component Config configuration files #1584

Closed · fabriziopandini closed 3 years ago

fabriziopandini commented 5 years ago

/kind feature

Describe the solution you'd like:
SIG Cluster Lifecycle (SCL) is pushing the adoption of component config objects, and kubeadm already accepts KubeletConfiguration and KubeProxyConfiguration as part of its config file. I would like to be able to use component config objects with CABPK as well.
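
For reference, the kubeadm config file is multi-document YAML, so component configs are simply additional documents appended to it. A minimal sketch (field values are illustrative only):

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.17.0
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs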

Anything else you would like to add:
The first possible approach to achieve this is to add the KubeletConfiguration and KubeProxyConfiguration types to the KubeadmConfig object, but this comes with both pros (e.g. OpenAPI validation) and cons (e.g. more dependencies and dependency management).

So I would like to also put some alternative approaches on the table, based on the assumption that CABPK only needs to pass component configs through to kubeadm (no logic based on the component config content is required). If this assumption holds, component configs can basically be treated as "special" Files, and we can consider handling them accordingly.

/cc @chuckha @detiber

detiber commented 5 years ago

I'm torn on this: if we just pass things through without any kind of validation, we risk hard-to-debug errors.

That said, even if we do add basic validation, we still wouldn't have all the validation that is done on the backend and could still end up with hard-to-debug errors...

The question then becomes: just how easy do we want to make it for users to shoot themselves in the foot?

chuckha commented 5 years ago

/priority important-longterm
/milestone Next

chuckha commented 4 years ago

I'm OK with big foot-guns for now, especially given the work to surface kubeadm init logs into the machine, which should mitigate the hard-to-debug validation concerns. But even until that lands, I don't see a problem with adding this functionality as outlined by @fabriziopandini.

I'm a little wary of the bool signaling a ComponentConfig... I think the original issue is outlining two possible approaches?

If so, I'm in favor of ComponentConfigs and having them be appended to the kubeadm config.

/milestone v0.3.0
/priority important-longterm

akutz commented 4 years ago

Related to https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/issues/131

ncdc commented 4 years ago

@fabriziopandini do you see this as a hard requirement for v1a3?

fabriziopandini commented 4 years ago

@ncdc As far as I know the current API for CABPK is enough for most use cases, so IMO this is backlog material and can be moved to the next milestone.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

vincepri commented 4 years ago

/remove-lifecycle stale

vincepri commented 4 years ago

/area ux

Arvinderpal commented 4 years ago

Is there a documented workaround for this? For example, if I want to specify a custom KubeProxyConfiguration?

sb464f commented 4 years ago

We are looking for this feature to enable ipvs mode in KubeProxyConfiguration.

sb464f commented 4 years ago

Workaround: create a /tmp/generate-kube-proxy.sh script via files and invoke it from preKubeadmCommands (see the sketch after the script):

    files:
      - path: /tmp/generate-kube-proxy.sh
        permissions: "0700"
        owner: root:root
        content: |
          #!/bin/bash
          # Append a KubeProxyConfiguration document to every kubeadm config file in /tmp.
          for i in $(ls /tmp | grep kubeadm); do
              cat <<EOF >> "/tmp/$i"
          ---
          kind: KubeProxyConfiguration
          apiVersion: kubeproxy.config.k8s.io/v1alpha1
          mode: ipvs
          EOF
          done
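
For completeness, the script then needs to be invoked before kubeadm runs, via preKubeadmCommands in the same KubeadmConfig (sketch):

    preKubeadmCommands:
      - /tmp/generate-kube-proxy.sh
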
Arvinderpal commented 4 years ago

Trying to understand how onerous it would be to add KubeletConfiguration and KubeProxyConfiguration to KubeadmConfig -- the first possible approach noted by @fabriziopandini.

Looking at the code, it appears that kubeadm itself does no validation on either KubeletConfiguration or KubeProxyConfiguration. kubeadm has the framework for component-specific validation, but it is currently a no-op: https://github.com/kubernetes/kubernetes/blob/a054010d032b301e495d1a421f53b9a37a0a0109/cmd/kubeadm/app/componentconfigs/configset.go#L244

What kubeadm does provide is default values for certain fields if none are specified. Here is the defaulting for kube-proxy: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/componentconfigs/kubeproxy.go#L86

Beyond KubeletConfiguration and KubeProxyConfiguration, I don't see any other xyzConfiguration, as all other Kubernetes core components have their fields split across the init/join/cluster configurations.

CABPK could take a similar approach to kubeadm -- treat these two configurations as pure pass-through, without validation. We would document this, of course, and possibly provide some troubleshooting steps in case there are failures (e.g. go look at the cloud-config logs on machine-x).

In any case, it would be great to prioritize this, as people building more complex clusters will definitely want to tweak kube-proxy and kubelet settings.
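
To make the pass-through idea concrete, the API could hypothetically grow fields along these lines (none of these fields exist today; this is purely an illustration of the shape, not the actual KubeadmConfig API):

    apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
    kind: KubeadmConfig
    spec:
      # hypothetical pass-through field -- not part of the real API
      kubeProxyConfiguration: |
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        kind: KubeProxyConfiguration
        mode: ipvs

CABPK would then only append each document to the generated kubeadm config file, with no logic based on its content.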

cprivitere commented 4 years ago

Here's a user story for you in case it helps motivate folks: we need to edit the kube-proxy config so that its metrics endpoint binds to 0.0.0.0 instead of 127.0.0.1, letting Prometheus scrape kube-proxy metrics on Cluster API-provisioned clusters.

cprivitere commented 4 years ago

One other workaround a co-worker came up with for editing the kube-proxy config is simply to update the kube-proxy ConfigMap:

So apply a ConfigMap named "kube-proxy" in the kube-system namespace:

    data:
      config.conf: |-
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        bindAddress: 0.0.0.0
        clientConnection:
          acceptContentTypes: ""
          burst: 0
          contentType: ""
          kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
          qps: 0
        clusterCIDR: 10.96.0.0/16
        configSyncPeriod: 0s
        conntrack:
          maxPerCore: null
          min: null
          tcpCloseWaitTimeout: null
          tcpEstablishedTimeout: null
        detectLocalMode: ""
        enableProfiling: false
        healthzBindAddress: ""
        hostnameOverride: ""
        iptables:
          masqueradeAll: false
          masqueradeBit: null
          minSyncPeriod: 0s
          syncPeriod: 0s
        ipvs:
          excludeCIDRs: null
          minSyncPeriod: 0s
          scheduler: ""
          strictARP: false
          syncPeriod: 0s
          tcpFinTimeout: 0s
          tcpTimeout: 0s
          udpTimeout: 0s
        kind: KubeProxyConfiguration
        metricsBindAddress: "0.0.0.0:10249"
        mode: ""
        nodePortAddresses: null
        oomScoreAdj: null
        portRange: ""
        showHiddenMetricsForVersion: ""
        udpIdleTimeout: 0s
        winkernel:
          enableDSR: false
          networkName: ""
          sourceVip: ""
<etc....>

Is there a major downside to just editing the ConfigMap? If nothing else, I hope this at least gives folks another option for configuring kube-proxy in the interim until this is implemented.
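
One practical note on this workaround (assuming the standard kubeadm-managed kube-proxy DaemonSet in kube-system): kube-proxy only reads its config at startup, so edits to the ConfigMap take effect after the pods are restarted, e.g.:

    kubectl -n kube-system rollout restart daemonset kube-proxy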

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

MPV commented 3 years ago

> One other workaround a co-worker came up with for editing the kube-proxy config is simply to update the kube-proxy ConfigMap:
>
> So apply a ConfigMap named "kube-proxy" in the kube-system namespace:
>
> <etc....>
>
> Is there a major downside to just editing the ConfigMap? If nothing else, I hope this at least gives folks another option for configuring kube-proxy in the interim until this is implemented.

I guess one downside of editing the ConfigMap for kube-proxy is that it duplicates the cluster's configuration (for example, things like clusterCIDR: 10.96.0.0/16).

I very much think it's still desirable to allow this to be configurable as part of the KubeadmConfig.

I'm also hitting this use case (where I want to monitor kube-proxy using Prometheus):

> Here's a user story for you in case it helps motivate folks: we need to edit the kube-proxy config so that its metrics endpoint binds to 0.0.0.0 instead of 127.0.0.1, letting Prometheus scrape kube-proxy metrics on Cluster API-provisioned clusters.

vincepri commented 3 years ago

/lifecycle frozen

vincepri commented 3 years ago

/remove-lifecycle frozen

@fabriziopandini What's the status of this issue? Could this be tackled as part of the node agent?

fabriziopandini commented 3 years ago

I have no strong opinions here. There are use cases for this, but at the same time it isn't a priority and it keeps getting pushed down the list. Fine for me to keep this on the radar, or to close it and eventually re-open if there is a stronger ask/capacity to tackle this.

neolit123 commented 3 years ago

linking to the upstream kubelet issue about migrating flags to config: https://github.com/kubernetes/kubernetes/issues/86843

fabriziopandini commented 3 years ago

/close

In favour of https://github.com/kubernetes-sigs/cluster-api/issues/4464, given that it provides a more specific use case for KubeletConfiguration at least. Eventually we can create a similar issue for kube-proxy as well, but given that the two configurations are managed in very different ways in a running cluster, let's keep the issues separated.

k8s-ci-robot commented 3 years ago

@fabriziopandini: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/1584#issuecomment-817682108):

> /close
> In favour of https://github.com/kubernetes-sigs/cluster-api/issues/4464, given that it provides a more specific use case for KubeletConfiguration at least. Eventually we can create a similar issue for kube-proxy as well, but given that the two configurations are managed in very different ways in a running cluster, let's keep the issues separated.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.