Open bjethwan opened 3 years ago
Hi, at the moment there is no queuing mechanism in Reloader. Reloader triggers the resource update as soon as a change event happens, so multiple change events at the same time can trigger multiple rollouts. If you want to throttle the rollouts per namespace, maybe try running Reloader in namespace scope instead of cluster scope. Otherwise, we also welcome pull requests with new features and enhancements.
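For anyone thinking about contributing this, here is a minimal sketch (not Reloader's actual code) of how rapid change events could be coalesced per workload before a single rollout is triggered; the key format and the `reload` callback are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// debouncer coalesces rapid change events per key (e.g. "namespace/deployment")
// into a single reload call after a quiet period.
type debouncer struct {
	mu     sync.Mutex
	delay  time.Duration
	timers map[string]*time.Timer
	reload func(key string) // hypothetical rollout trigger
}

func newDebouncer(delay time.Duration, reload func(string)) *debouncer {
	return &debouncer{delay: delay, timers: map[string]*time.Timer{}, reload: reload}
}

// Notify is called for every ConfigMap/Secret change event; only the last
// event within the delay window actually triggers a reload.
func (d *debouncer) Notify(key string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if t, ok := d.timers[key]; ok {
		t.Stop() // a newer event supersedes the pending one
	}
	d.timers[key] = time.AfterFunc(d.delay, func() {
		d.mu.Lock()
		delete(d.timers, key)
		d.mu.Unlock()
		d.reload(key)
	})
}

func main() {
	d := newDebouncer(2*time.Second, func(key string) {
		fmt.Println("rolling out", key) // e.g. patch the Deployment's pod template
	})
	// Three rapid changes to the same workload result in one rollout.
	d.Notify("default/my-app")
	d.Notify("default/my-app")
	d.Notify("default/my-app")
	time.Sleep(3 * time.Second)
}
```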
I'm experiencing a similar issue when deploying a full software stack using Kustomize: a single Deployment with the reloader auto annotation activated simultaneously generates two ReplicaSets, which can conflict when we only need one replica.
The reason is that Kustomize applies the whole set of manifests (ConfigMaps, Secrets, SealedSecrets, Deployments) at once, so a new ReplicaSet is triggered immediately.
I'm not sure it's a Reloader issue; however, the multiple ReplicaSets problem goes away when the auto reloader annotation is set to "false".
@faizanahmad055 maybe we could use a simple queuing task with redis-server.
Do you have any plan for this? Reloader could list the resources to reload and add a random delay within a configurable window before reloading each of them, or it could use configurable batches of resources, waiting for each batch to become healthy (or time out) before moving on to the next one.
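To make the batching idea concrete, here is a rough sketch (just an illustration, not an API proposal); `reload` and `healthy` are hypothetical callbacks standing in for the actual rollout patch and readiness check:

```go
package main

import (
	"fmt"
	"time"
)

// reloadInBatches restarts workloads in batches of batchSize, waiting for each
// batch to report healthy (or for the timeout to expire) before the next one.
func reloadInBatches(keys []string, batchSize int, timeout time.Duration,
	reload func(string), healthy func(string) bool) {
	for start := 0; start < len(keys); start += batchSize {
		end := start + batchSize
		if end > len(keys) {
			end = len(keys)
		}
		batch := keys[start:end]
		for _, k := range batch {
			reload(k) // e.g. patch the Deployment's pod template annotation
		}
		waitForBatch(batch, timeout, healthy)
	}
}

// waitForBatch polls until every workload in the batch is healthy or the
// timeout expires, whichever comes first.
func waitForBatch(batch []string, timeout time.Duration, healthy func(string) bool) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		allHealthy := true
		for _, k := range batch {
			if !healthy(k) {
				allHealthy = false
				break
			}
		}
		if allHealthy {
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("batch timed out, moving on:", batch)
}

func main() {
	keys := []string{"ns1/app-a", "ns1/app-b", "ns2/app-c", "ns2/app-d"}
	reloadInBatches(keys, 2, 30*time.Second,
		func(k string) { fmt.Println("reloading", k) },
		func(k string) bool { return true }, // stub: always healthy
	)
}
```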
I have a situation where up to 3k to 4k secrets change in an instant. I'm using Reloader together with https://github.com/appscode/kubed. A change in one secret is synced to all its copies in every k8s namespace. I don't want all the rollouts triggered at the same time. Is there a way to throttle the rolling update of deployments across the cluster?
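As a rough illustration of the kind of cluster-wide throttle I mean (not an existing Reloader feature), even a simple global rate limiter in front of the rollout trigger would help; the limiter settings and `triggerRollout` below are made up:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Allow at most 5 rollouts per second across the whole cluster,
	// with a small burst; these values would need to be configurable.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	keys := []string{"ns-a/web", "ns-b/api", "ns-c/worker"} // deployments to restart
	ctx := context.Background()

	for _, key := range keys {
		// Wait blocks until the limiter allows another rollout.
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("throttling aborted:", err)
			return
		}
		triggerRollout(key)
	}
}

// triggerRollout is a hypothetical stand-in for whatever patches the
// Deployment (e.g. updating a pod template annotation) to restart it.
func triggerRollout(key string) {
	fmt.Println(time.Now().Format(time.RFC3339), "rolling out", key)
}
```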