Open runningman84 opened 2 years ago
Not sure this will be possible without increased collaboration. We would need to know the difference between this tool providing an override and the replica count just being a normal 0.
From my point of view, this problem is independent of kube-downscaler: even in normal operation you could modify an existing deployment and set the replicas to 0, and in that case the HPA would not touch it anymore.
For some reason this behaviour is different once you use KEDA, which implements the HPA in its own way.
We could also talk to the kube-downscaler team about implementing a solution in their product that annotates the ScaledObject with some tag disabling KEDA during the planned downtimes. This would not be needed if KEDA worked like a normal HPA setup.
Is there any annotation that disables the ScaledObject until the tag is removed?
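For reference, later KEDA releases (after the 2.4.0 version this issue was filed against) added exactly such an annotation on the ScaledObject. A hedged sketch, with illustrative resource names:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject        # hypothetical name
  annotations:
    # Pins the target workload at 0 replicas and pauses autoscaling
    # until the annotation is removed (not available in KEDA 2.4.0).
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  scaleTargetRef:
    name: example-deployment        # hypothetical target
```

A tool like kube-downscaler could in principle set and remove this annotation on a schedule.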
This functionality is tangentially similar to https://github.com/kedacore/keda/issues/944. Maybe the design of the ManualScaleToZero
CRD proposed there could be modified to support not scaling up from zero?
cc/ @tomkerkhove since you were the author of #944
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
Interesting use-case. So basically you use kube-downscaler to scale down based on time? If so, what if KEDA allowed defining scale-down through the CRON scaler?
Our use case is to downscale deployments and cronjobs (the latter are less important, because a corresponding cron schedule can do the same thing) to zero during specific timeframes.
kube-downscaler is a well-known product for handling this use case in Kubernetes.
From my point of view, I would also be fine with KEDA having a similar native feature to solve this use case without kube-downscaler.
What I can tell right now is that the current bug prevents us and others from switching existing workloads to KEDA.
So what I'm taking away from this is that our triggers currently assume scale-out, and having scale-in support would be nice as well.
If we do that, you could use our cron trigger to achieve the scale-in scenario. This could also be a cron-scaler-only feature.
Thoughts @kedacore/keda-maintainers?
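As a sketch of what the proposal above could look like, based on the cron trigger's existing fields (`timezone`, `start`, `end`, `desiredReplicas`); resource names are illustrative, and whether `minReplicaCount: 0` takes effect outside the window is exactly the behaviour under discussion here:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: uptime-window             # hypothetical name
spec:
  scaleTargetRef:
    name: example-deployment      # hypothetical target
  minReplicaCount: 0              # desired: scale to zero outside the window
  triggers:
    - type: cron
      metadata:
        timezone: Europe/Berlin
        start: 0 20 * * 1-5       # Mon-Fri 20:00
        end: 0 23 * * 1-5         # Mon-Fri 23:00
        desiredReplicas: "2"      # replicas during the uptime window
```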
also interested in this feature. We want to scale all workloads in our test environment down to zero. I tried to use the KEDA cron scaler to achieve this, but it seems it can only scale up, not down. It seems I need to use kube-downscaler as well, but it would be better to have this feature in KEDA because we already use it for other stuff.
Let's do a POC changing the CRON scaler to achieve this and see what the impact is for the rest. I agree this would be a nice addition to KEDA.
There's a simple way of using both kube-downscaler and KEDA; I've tested it and it works well. You can add the `--include-resources=deployments,scaledobjects` flag to kube-downscaler and it will pause the ScaledObject and scale it to 0 according to your kube-downscaler schedule:
kube-downscaler README
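A sketch of where that flag goes in the kube-downscaler Deployment (names, labels, and image are illustrative; only the `--include-resources` argument comes from the comment above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-downscaler               # illustrative
spec:
  selector:
    matchLabels:
      app: kube-downscaler            # illustrative
  template:
    metadata:
      labels:
        app: kube-downscaler
    spec:
      containers:
        - name: kube-downscaler
          image: hjacobs/kube-downscaler   # illustrative, tag omitted
          args:
            # Let kube-downscaler manage ScaledObjects as well as
            # Deployments, so it pauses KEDA during downtime windows.
            - --include-resources=deployments,scaledobjects
```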
Report
We use kube-downscaler (https://codeberg.org/hjacobs/kube-downscaler) in order to scale down workloads at night in our dev/test environments.
This used to work fine with prometheus-adapter.
Our deployment annotations look like this: `downscaler/uptime: Mon-Fri 20:00-23:00 Europe/Berlin`
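Concretely, that annotation sits in the Deployment metadata (the deployment name here is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                  # illustrative
  annotations:
    # Keep the workload up Mon-Fri 20:00-23:00 Berlin time;
    # kube-downscaler scales it to zero outside this window.
    downscaler/uptime: Mon-Fri 20:00-23:00 Europe/Berlin
```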
Expected Behavior
Do not touch deployments that are scaled to 0.
Actual Behavior
KEDA is scaling the deployments to 2 instead of ignoring them.
Setting `minReplicas` to 0 in the ScaledObject does not change the behaviour.
Steps to Reproduce the Problem
During the downtime timeframe, pods are being created and destroyed all the time...
Logs from KEDA operator
KEDA Version
2.4.0
Kubernetes Version
1.20
Platform
Amazon Web Services
Scaler Details
Pod
Anything else?
One solution is to downscale KEDA itself during the downtime timeframes, but the current chart does not expose the KEDA deployment annotations in values.yaml.
This solution would also be all-or-nothing, because not all projects have the same downtime...