rancher / local-path-provisioner

Dynamically provisioning persistent local storage with Kubernetes

Remove duplicate labels and add ability to set helperpod resource requests/limits #394

Closed · visokoo closed this 1 month ago

visokoo commented 3 months ago

Piggybacking off this PR from @runningman84: https://github.com/rancher/local-path-provisioner/pull/393, but I retained the selector.matchLabels piece, since that field is immutable once the release has been deployed and changing it would block helm upgrades after the fact.

In addition to fixing the duplicate labels, this also adds the ability to set k8s resource requests/limits on the busybox helper pod that gets launched.
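
The chart-side change boils down to the usual Helm values wiring. A minimal sketch of the idea follows; the file layout, container name, and indentation here are assumptions for illustration, not the exact diff:

```yaml
# values.yaml (sketch): the new key defaults to empty, so nothing is
# rendered unless a user opts in.
helperPod:
  resources: {}

# Helper pod template (sketch): inject the resources block only when
# helperPod.resources is set in values.
containers:
  - name: helper-pod            # container name assumed for illustration
    image: busybox
    {{- with .Values.helperPod.resources }}
    resources:
      {{- toYaml . | nindent 6 }}
    {{- end }}
```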

I tested this change locally in my environment, and it deploys and upgrades as expected with this Flux HelmRelease manifest:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: local-path-provisioner
  namespace: local-path-provisioner
spec:
  releaseName: local-path-provisioner
  interval: 10m
  chart:
    spec:
      chart: ./deploy/chart/local-path-provisioner
      version: 0.0.25-dev
      sourceRef:
        kind: GitRepository
        name: local-path-provisioner
        namespace: local-path-provisioner
      reconcileStrategy: Revision
  values:
    storageClass:
      provisionerName: local-path
    resources:
      requests:
        memory: 150Mi
        cpu: 60m
      limits:
        memory: 300Mi
        cpu: 120m
    helperPod:
      resources:
        requests:
          memory: 100Mi
          cpu: 10m
        limits:
          memory: 200Mi
          cpu: 20m

    # Number of retries of failed volume provisioning. 0 means retry indefinitely.
    provisioningRetryCount: 5
    # Number of retries of failed volume deletion. 0 means retry indefinitely.
    deletionRetryCount: 5
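
With the helperPod.resources values above, the busybox helper pod should come up with an explicit resources section. Roughly what the relevant part of the rendered pod spec looks like (a sketch; the pod name is generated by the provisioner and shown here only as a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helper-pod-create-pvc-xxxxx   # placeholder; the real name is generated
spec:
  containers:
    - name: helper-pod
      image: busybox
      resources:
        requests:
          memory: 100Mi
          cpu: 10m
        limits:
          memory: 200Mi
          cpu: 20m
```
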
visokoo commented 2 months ago

@derekbit, anything else I need to do here or are we good to merge?

derekbit commented 1 month ago

> @derekbit, anything else I need to do here or are we good to merge?

No. I've merged it. It will be included in v0.0.27, which will be released at the end of this May. Thank you.