mchmarny / dapr-demos

Collection of personal Dapr demos (bindings, state, pub/sub, service-to-service invocation)
https://dapr.io

KEDA scale up doesn't work despite large queue lag #7

Closed by mchmarny 4 years ago

mchmarny commented 4 years ago

Example log:

{"level":"debug","ts":1598799261.2208235,"logger":"kafka_scaler","msg":"Group autoscaling has a lag of 147410 for topic messages and partition 0\n"}
{"level":"debug","ts":1598799261.2208657,"logger":"scalehandler","msg":"Scaler for scaledObject is active","ScaledObject.Namespace":"default","ScaledObject.Name":"queue-outoscaling-scaler","ScaledObject.ScaleType":"deployment","Scaler":{}}
{"level":"debug","ts":1598799261.2310224,"logger":"scalehandler","msg":"ScaledObject's Status was properly updated","ScaledObject.Namespace":"default","ScaledObject.Name":"queue-outoscaling-scaler","ScaledObject.ScaleType":"deployment"}

This seems to be an issue with the watch on the HPA rather than the scaler itself: the operator logs a "too old resource version" error, meaning the watched resourceVersion is no longer available on the API server and the client-go reflector has to restart its watch:

W0830 14:54:22.057920       1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *unstructured.Unstructured ended with: too old resource version: 1894736 (1894804)

A similar HPA issue has been reported here.

mchmarny commented 4 years ago

Looks like this has been fixed in the upcoming KEDA v2 release. More on that here.
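For anyone migrating off v1 because of this: in KEDA v2 the ScaledObject moves to the keda.sh/v1alpha1 API group, the deploymentName label and field are replaced by scaleTargetRef.name, and the Kafka trigger's brokerList is renamed to bootstrapServers. A sketch of the equivalent v2 object, under the same assumed values as above:

apiVersion: keda.sh/v1alpha1             # v2 API group (v1 used keda.k8s.io/v1alpha1)
kind: ScaledObject
metadata:
  name: queue-outoscaling-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: queue-outoscaling              # v2 uses name instead of deploymentName
  minReplicaCount: 1                     # assumed
  maxReplicaCount: 10                    # assumed
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka.default.svc.cluster.local:9092  # renamed from brokerList
      consumerGroup: autoscaling
      topic: messages
      lagThreshold: "5"                  # assumed, as above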