kubecost / cost-analyzer-helm-chart

Kubecost helm chart
http://kubecost.com/install
Apache License 2.0
489 stars · 419 forks

Single point of configuration for global tolerations and node selector #109

Closed: sjmiller609 closed this 3 years ago

sjmiller609 commented 5 years ago

What problem are you trying to solve?

We at Astronomer (astronomer.io) use node taints in combination with node-affinity and tolerations to organize components in node pools. In our case, this is because we want multi-tenant components on separate node pool(s) from our platform components. We hope to use Kubecost in our platform components.

When I say 'node selector', I am actually referring to nodeAffinity + nodeSelectorTerms, which is the 'new and improved' way of doing node selectors.

Describe the solution you'd like

I would like a global configuration in the top-level values.yaml. Example, node selectors:

global:
  nodeSelectors:
    "astronomer.io/multi-tenant": "false"
    "astronomer.io/another-one": "ok"

I want this to be rendered onto the pod specs as node affinity:

# in the container spec
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "astronomer.io/multi-tenant"
              operator: In
              values:
              - "false"
          - matchExpressions:
            - key: "astronomer.io/another-one"
              operator: In
              values:
              - "ok"
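A hypothetical Helm template helper could render such a global value into the affinity stanza. The snippet below is only a sketch of how a deployment template might consume the proposed `global.nodeSelectors` map; none of these helpers exist in the chart today:

```yaml
# Sketch only: assumes the proposed global.nodeSelectors map exists.
{{- if .Values.global.nodeSelectors }}
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        {{- range $key, $value := .Values.global.nodeSelectors }}
        - key: {{ $key | quote }}
          operator: In
          values:
          - {{ $value | quote }}
        {{- end }}
{{- end }}
```

Note that in Kubernetes, multiple entries under `nodeSelectorTerms` are ORed, while multiple `matchExpressions` inside a single term are ANDed. The sketch emits a single term so that all labels must match; the example above, with one term per label, would schedule onto nodes matching any one of the labels.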

Example, tolerations in values.yaml:

global:
  tolerations:
  - key: "platform"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

outcome:

# (output of kubectl describe pod on any component)
Tolerations:     platform=true:NoSchedule

Describe alternatives you've considered

I have noticed some configurations like this exist in the subcharts. I will try to do it by configuring each sub-chart appropriately by passing the values from the top-level chart to the subcharts.
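As a workaround, scheduling config can be set per subchart from the top-level values.yaml. The keys below are a sketch based on the stock Prometheus and Grafana chart conventions and should be verified against each subchart's own values file:

```yaml
# Sketch only: per-subchart scheduling config; keys assumed from the
# upstream prometheus/grafana charts, verify against each subchart.
prometheus:
  server:
    nodeSelector:
      "astronomer.io/multi-tenant": "false"
    tolerations:
    - key: "platform"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
grafana:
  nodeSelector:
    "astronomer.io/multi-tenant": "false"
  tolerations:
  - key: "platform"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```

The drawback is duplication: every subchart (and every top-level workload) needs the same block repeated, which is exactly what a single `global` setting would avoid.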

How would users interact with this feature?

Helm values.

dwbrown2 commented 5 years ago

Thanks for the request, Steven! Like you mentioned, our Prometheus and Grafana subcharts should support affinities. Let me know if their implementation looks good and we can look to support something similar.

You probably already saw but here is our initial implementation for basic node selectors: https://github.com/kubecost/cost-analyzer-helm-chart/commit/18838d8cd283ddf2380e6e6099ba662ecf705435

MattJeanes commented 3 years ago

I've just had to taint my Windows node on AKS because this chart does not respect the nodeSelector properly and was trying to schedule failing pods on it. Other major charts like nginx-ingress, cert-manager, keda, kubernetes-dashboard etc. all support this properly. Can we add this here too?
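For reference, the usual way charts avoid Windows nodes is a node selector on the built-in OS label. A hedged sketch of what the values could look like if the chart honored `nodeSelector` for every workload:

```yaml
# Sketch: pin all chart pods to Linux nodes via the well-known
# kubernetes.io/os node label, so they never land on a Windows node.
nodeSelector:
  kubernetes.io/os: linux
```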

AjayTripathy commented 3 years ago

> I've just had to taint my Windows node on AKS because this chart does not respect the nodeSelector properly and was trying to schedule failing pods on it. Other major charts like nginx-ingress, cert-manager, keda, kubernetes-dashboard etc. all support this properly. Can we add this here too?

Hi @MattJeanes, can you provide a little more detail, ideally in a separate bug? What did you provide as the value for .Values.nodeSelector? https://github.com/kubecost/cost-analyzer-helm-chart/blob/release-1.71.0/cost-analyzer/values.yaml#L203

MattJeanes commented 3 years ago

Hi @AjayTripathy, I have raised the issue here: https://github.com/kubecost/cost-analyzer-helm-chart/issues/712. Apologies for the lack of logs / screenshots, as I had already worked around the issue as above.

dwbrown2 commented 3 years ago

@MattJeanes @AjayTripathy I believe we can close this based on #712, but let me know if there is more outstanding work.

MattJeanes commented 3 years ago

We have a nodeSelector now, but does that also cover affinity? I'm happy that the nodeSelector bit is all done and working.

AjayTripathy commented 3 years ago

Tolerations/affinity are configurable from values, so I think this is good to go!