kedacore / keda

KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event-driven scaling for any container running in Kubernetes.
https://keda.sh
Apache License 2.0
8.37k stars · 1.06k forks

Feature Request: Support Array of App Names or Labels in scaleTargetRef #5717

Closed — ucguy4u closed this issue 3 months ago

ucguy4u commented 5 months ago

Proposal

KEDA currently supports scaling individual deployments using the scaleTargetRef field within a ScaledObject. This feature request proposes an enhancement to allow scaleTargetRef to accept an array of application names or a label selector. This will enable a single ScaledObject to scale multiple deployments, which is particularly useful for microservices that need to be scaled in tandem based on common metrics.
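For context, a minimal sketch of the two shapes being compared. The first manifest uses KEDA's actual `scaleTargetRef` field, which names exactly one workload; the second shows the proposed multi-target form, which is purely illustrative (the field names `names` and `selector` are hypothetical and not part of the KEDA API):

```yaml
# Current KEDA behavior: one ScaledObject scales one Deployment.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: catalog-scaler        # hypothetical name for illustration
spec:
  scaleTargetRef:
    name: catalog-service     # exactly one target workload
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "80"
---
# Proposed (NOT valid KEDA syntax): one ScaledObject targeting
# several workloads by name list or label selector.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: checkout-group-scaler
spec:
  scaleTargetRef:
    names:                    # hypothetical array form
      - catalog-service
      - cart-service
    # selector:               # or, alternatively, a label selector
    #   matchLabels:
    #     app-group: checkout
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "80"
```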

Use-Case

This feature would be beneficial in scenarios where multiple services, possibly forming a logical group, need to scale up or down together in response to shared workloads or events. For instance, in microservices architectures, different services that handle parts of the same transaction might need to be scaled simultaneously to maintain consistent performance.

An example use case is a microservices-based e-commerce application where the catalog service, the shopping cart service, and the order processing service may need to scale up during high traffic events like sales or promotions.

Is this a feature you are interested in implementing yourself?

No

Anything else?

Implementing this feature would reduce the overhead of managing multiple ScaledObject resources for operators and could improve the efficiency of using KEDA in larger-scale environments. It also aligns with the cloud-native principles of automation and scalability.

stale[bot] commented 3 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

zroubalik commented 3 months ago

Hi, thanks for the input. We have discussed this issue a few times and decided to keep the current behavior. There are some downsides (managing multiple HPAs, how to deal with fallback, etc.).
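With the current behavior retained, the supported pattern for scaling workloads in tandem is one ScaledObject per Deployment, each repeating the shared trigger configuration. A minimal sketch (service names are hypothetical):

```yaml
# One ScaledObject per Deployment; the trigger block is duplicated
# so each workload gets its own HPA with its own fallback handling.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: catalog-scaler
spec:
  scaleTargetRef:
    name: catalog-service
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "80"
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cart-scaler
spec:
  scaleTargetRef:
    name: cart-service
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "80"
```

The duplication can be generated with templating tools such as Helm or Kustomize rather than maintained by hand.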