jeffhollan opened this issue 5 years ago
Ok let's run this in kubeflow😁
On Wed, May 15, 2019 at 4:24 PM, Jeff Hollan notifications@github.com wrote:
> A very “blue sky” feature but would be amazing to have KEDA look at historic data and patterns for deployments to try to predict when events may be coming in and scale proactively
> A very “blue sky” feature but would be amazing to have KEDA look at historic data and patterns for deployments to try to predict when events may be coming in and scale proactively
Especially for the RabbitMQ scaler, that would be great. If we could also use additional variables such as "consumer utilization", "consumer ack", and "delivery" alongside the "queueLength" variable, maybe we could scale pods in a smarter, reactive way?
These days I'm playing with KEDA and testing it in our staging environment. KEDA currently scales our pods based on the "queueLength" variable, which is great. But some of our RabbitMQ consumers perform I/O-bound operations, and if KEDA keeps scaling linearly on "queueLength" alone, the other I/O services those consumers depend on start to become a bottleneck.
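To make the idea concrete, here is a minimal sketch of scaling on queue length while backing off when consumer utilization is already high. This is not anything KEDA supports today; the function name, thresholds, and weights are all made-up illustration values.

```python
# Hypothetical sketch: combine two RabbitMQ metrics into one desired
# replica count instead of scaling linearly on queueLength alone.
# All names and thresholds here are illustrative, not KEDA behavior.

def desired_replicas(queue_length: int,
                     consumer_utilisation: float,  # 0.0..1.0, as reported by RabbitMQ
                     msgs_per_pod: int = 10,
                     max_replicas: int = 20) -> int:
    """Scale on queue length, but cap growth when consumers are already
    saturated: high utilisation usually means the bottleneck is the
    downstream I/O, so adding pods will not drain the queue faster."""
    by_queue = -(-queue_length // msgs_per_pod)  # ceiling division
    if consumer_utilisation > 0.9:
        # Consumers are busy end-to-end; halve further growth instead
        # of scaling linearly with the backlog.
        by_queue = min(by_queue, 1 + by_queue // 2)
    return max(1, min(by_queue, max_replicas))
```

With a backlog of 100 messages and healthy consumers this yields 10 replicas, but the same backlog with saturated consumers yields only 6, avoiding the pile-on against the downstream I/O services.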
That's a really interesting idea. A few months ago I was researching best practices/patterns for autoscaling applications, and I stumbled on a research paper about autoscaling with a predictive model.
> KEDA look at historic data
@jeffhollan what historic data did you have in mind? Also, should it be something maintained by KEDA itself, or should this be a pluggable solution so anyone can use a custom model/algorithm?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
@jeffhollan
We were dreaming about the same thing and developed this kind of solution. It works based on our simple, but working AI model and predicts pretty well. Maybe it's not the "blue sky" you're looking for, but you should definitely take a look.
A hill-climbing algorithm like the one used for the CLR Thread Pool could be a candidate for this. It adds threads (or instances, in this case), checks whether that made a positive impact on the backlog, and if not, removes the instance again.
It's a reactive rather than predictive approach, but it may be more generally applicable because it doesn't require specifying the cyclical period to look back on (hourly batch process? daily user load? weekly jobs? one-off events?). It also doesn't require storing any historical data.
Given the way KEDA interfaces with HPAs, it would be a bit round-about (needing to manipulate reported metric values to get the desired instance count directly), but that's the interface we have to work with without rewriting a new pod auto-scaler.
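The hill-climbing idea above can be sketched in a few lines. Note this is only the shape of the technique; the real CLR Thread Pool controller is considerably more elaborate (it applies signal processing to throughput), and the step logic here is a simplified assumption.

```python
# Sketch of the hill-climbing step described above. Assumptions: a
# single noisy "backlog" signal and a step size of one instance.

def next_instance_count(current: int, backlog_delta: int,
                        last_change: int, min_i: int = 1,
                        max_i: int = 50) -> int:
    """backlog_delta < 0 means the backlog shrank since the previous
    adjustment. Keep moving in the direction that helped; reverse when
    it hurt. last_change is the previous step (+1/-1, or 0 at start)."""
    if last_change == 0:
        step = 1                  # bootstrap: probe by adding one instance
    elif backlog_delta < 0:
        step = last_change        # it helped, keep climbing
    else:
        step = -last_change       # it hurt, back off
    return max(min_i, min(max_i, current + step))
```

Each scaling interval you'd feed in the observed change in backlog and the previous action, and the controller oscillates toward the instance count where adding more no longer helps.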
I'd love to see a generic interface sitting between the metrics reported by scalers and the HPA. There we could "manipulate" the metrics however we'd like: for example, adding more logic to the evaluation of metrics from multiple triggers, or plugging in some AI/ML model. The only option we have today for "manipulating" metrics is the fallback feature, which is great but not generic enough. A generic interface would be much better.
Writing a new pod autoscaler is something I'd like to avoid 😅
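To illustrate what such a generic interface might look like, here is a hypothetical sketch of a processor chain between scaler-reported metrics and what the HPA sees. None of these names exist in KEDA today; this is only one possible shape for the idea.

```python
# Hypothetical sketch of a generic metric-processing interface: a
# chain of "processors" that can rewrite the metric map (smoothing,
# fallback, an ML model, ...) before it is handed to the HPA.
from typing import Callable, Dict

MetricProcessor = Callable[[Dict[str, float]], Dict[str, float]]

def pipeline(*processors: MetricProcessor) -> MetricProcessor:
    """Compose processors so each one sees the previous one's output."""
    def run(metrics: Dict[str, float]) -> Dict[str, float]:
        for p in processors:
            metrics = p(metrics)
        return metrics
    return run

def clamp(name: str, ceiling: float) -> MetricProcessor:
    """Example processor: cap a spiky metric at a ceiling."""
    def p(metrics: Dict[str, float]) -> Dict[str, float]:
        out = dict(metrics)
        if name in out:
            out[name] = min(out[name], ceiling)
        return out
    return p
```

The fallback feature would then just be one more processor in the chain, and a predictive model another, without touching the HPA integration at all.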