Summary
During my testing, I used the current user with all required privileges but failed to notice that, after switching to the internal `kibana_system` user, it lacked the `manage_autoscaling` privilege required for the `GET /_autoscaling/policy` API.

As a result, the `isMlAutoscalingEnabled` flag, which we rely on in the Start Deployment modal, was always set to `false`. This caused a bug in scenarios with zero active ML nodes, where falling back to deriving available processors from ML limits was not possible.

You can check the created deployment; it correctly identifies ML autoscaling:
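For reviewers, here is a minimal TypeScript sketch of the kind of check involved. The function name, the `ml` policy name, and the error handling are assumptions for illustration, not the actual Kibana implementation; the point is that a 403 from a privilege gap and a 404 from a genuinely absent policy both surfaced as `false`:

```ts
import type { ElasticsearchClient } from '@kbn/core/server';

// Hypothetical helper: derive the ML autoscaling flag from the
// autoscaling policy API. Policy name 'ml' is an assumption.
export async function getIsMlAutoscalingEnabled(
  client: ElasticsearchClient
): Promise<boolean> {
  try {
    // GET /_autoscaling/policy/ml requires the manage_autoscaling
    // cluster privilege; when kibana_system lacked it, this call
    // failed with a 403 and the flag silently degraded to false.
    await client.autoscaling.getAutoscalingPolicy({ name: 'ml' });
    return true;
  } catch (e) {
    // Only "no such policy" (404) should mean autoscaling is
    // disabled; a 403 indicates a missing privilege instead.
    return false;
  }
}
```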
Checklist
Check that the PR satisfies the following conditions.