The tool is currently "pessimistic" (like Scrooge!) and assumes there are no available resources for workload placement. So if your workload would require 2.5 servers' worth of the constrained resource, it calculates the cost of 3 servers to fit the workload.
There's no good way for the tool to know the current usage of the node pool the workload is being deployed to, so it can't really do any better automatically. We could, however, let the user add some leeway to the calculation by setting an "optimism" level between 0 (the default) and 100 (the entire workload fits on existing resources... is it free?!).
Formula: adjusted delta = constrained resource delta (total cores or memory required by the workload) * (100 - optimism) / 100
If #16 lands, use this value to go from grumpy (Bah Humbug!) to optimistic (What a lovely day, good sir!).
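A minimal sketch of how the optimism knob could plug into the server-count math. The function name, parameters, and per-server capacity input are illustrative assumptions, not the tool's actual API:

```python
import math

def servers_needed(resource_delta, per_server_capacity, optimism=0):
    """Estimate how many servers the workload needs.

    resource_delta      -- total constrained resource (cores or memory) the workload requires
    per_server_capacity -- constrained resource provided by one server
    optimism            -- 0 (assume no spare capacity) .. 100 (assume the whole workload already fits)
    """
    if not 0 <= optimism <= 100:
        raise ValueError("optimism must be between 0 and 100")
    # Scale the delta down by the optimism level: at 0 nothing changes,
    # at 100 the adjusted delta (and therefore the cost) drops to zero.
    adjusted_delta = resource_delta * (100 - optimism) / 100
    # Pessimistic rounding: a partial server still has to be paid for in full.
    return math.ceil(adjusted_delta / per_server_capacity)

# Example: workload needs 2.5 servers' worth of cores (40 cores, 16-core servers).
print(servers_needed(40, 16))               # -> 3 (current behaviour, optimism 0)
print(servers_needed(40, 16, optimism=50))  # -> 2 (half the delta assumed to fit on existing nodes)
print(servers_needed(40, 16, optimism=100)) # -> 0 ("is it free?!")
```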