Closed: commarla closed this issue 4 years ago
Maybe it comes from this https://github.com/jrasell/sherpa/blob/master/pkg/autoscale/autoscale.go#L76
We may never test the scale-out if there is a scale-in. I think it is better to test the "out" case before the "in" case in the switch to ensure the availability of the service.
@commarla I think you're exactly right; this has been on my mind for a few days to look into and figure out whether it was a problem. I'll get right on to this; thanks for the detailed report!
The simple solution is to put the scale-out checks ahead of the scale-in checks, which would catch this situation. I think it's important, though, for operators to understand when jobs have large differences in resource consumption, so a slightly more complex solution may be warranted.
@jrasell thanks for the quick answer. I have built my own version to fix my use case. I understand you have to think about it to cover other cases. You can close my PR if you want.
Describe the bug
I see a strange behaviour: a scale-in happens instead of a scale-out.
To reproduce
My config is the following (I use Nomad meta):
In the log I have:
With CPU usage at 120% I should get a scale-out, not a scale-in. Is it a conflict with my memory usage, which is under 30%?
Expected behavior A scale out
Environment:
Sherpa server information (retrieve with `sherpa system info`):
Sherpa CLI version (retrieve with `sherpa --version`): docker image `jrasell/sherpa:0.2.1`
Server Operating System/Architecture: Docker 19.03.2, Debian stretch 9.11, Linux sherpa 4.19.0-0.bpo.6-amd64 SMP Debian 4.19.67-2+deb10u1~bpo9+1 (2019-09-30) x86_64 Linux
Sherpa server configuration parameters:
Nomad client configuration parameters (if any): there is nothing specific in my Nomad config.
Consul client configuration parameters (if any): there is nothing specific in my Consul config.