Closed: jeremylvln closed this issue 1 month ago
Update: If I understood the Fleet API correctly, the scale subresource is at least declared (implemented?), so a first step toward solution 3 already exists.
The scale subresource has been implemented on Fleet for quite a while and is used in the Fleet quickstart: https://agones.dev/site/docs/getting-started/create-fleet/#3-scale-up-the-fleet

I'm not sure I quite understand your issue, though. If you use the scale subresource, that is just an indirect way of changing the replicas count in the spec of the Fleet, is it not?
Further down in the HPA documentation that you linked, it says:
When you configure autoscaling for a Deployment, you bind a HorizontalPodAutoscaler to a single Deployment. The HorizontalPodAutoscaler manages the replicas field of the Deployment.
So even if the HPA controller is changing the replicas field via the scale subresource, it still manages that field. The scale subresource is just a generic interface to the replicas field, so the HPA controller doesn't need new code for every resource type that can be scaled up and down.
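To make that concrete: the scale subresource of any scalable resource, Fleet included, is exposed through the generic autoscaling/v1 Scale shape, roughly like the sketch below (the name and counts are placeholders, not taken from this issue):

```yaml
# Illustrative only: the generic Scale object exposed by a resource's scale
# subresource. Controllers such as the HPA read and write spec.replicas here
# instead of needing to know anything about the Fleet spec itself.
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: my-fleet      # placeholder Fleet name
spec:
  replicas: 5         # desired replicas, written by the scaler
status:
  replicas: 5         # current replicas, reported by the Fleet controller
```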
Also, to your point about conflicting controllers, the HPA documentation says:
When an HPA is enabled, it is recommended that the value of spec.replicas of the Deployment and / or StatefulSet be removed from their manifest(s). If this isn't done, any time a change to that object is applied, for example via kubectl apply -f deployment.yaml, this will instruct Kubernetes to scale the current number of Pods to the value of the spec.replicas key. This may not be desired and could be troublesome when an HPA is active.
and the same is true for a Fleet. You can prevent your controller and the FleetAutoscaler from fighting over the replicas field by not including replicas in your Fleet specification and letting the FleetAutoscaler set the field on your behalf.
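A minimal sketch of that approach, assuming placeholder names, port, and image (none of them taken from this issue):

```yaml
# Hypothetical Fleet with spec.replicas intentionally omitted, so that only
# the FleetAutoscaler (via the scale subresource) manages the replica count.
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: my-fleet                # placeholder name
spec:
  # no "replicas" here: the FleetAutoscaler owns the count
  template:
    spec:
      ports:
      - name: default
        containerPort: 7654     # placeholder game server port
      template:
        spec:
          containers:
          - name: game-server
            image: example/game-server:latest   # placeholder image
```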
This issue is marked as Stale due to inactivity for more than 30 days. To avoid being marked as 'stale' please add 'awaiting-maintainer' label or add a comment. Thank you for your contributions.
This issue is marked as obsolete due to inactivity for the last 60 days. To avoid the issue getting closed in the next 30 days, please add a comment or add the 'awaiting-maintainer' label. Thank you for your contributions.
What happened:
I created a FleetAutoscaler managing a Fleet with a Buffer policy. The autoscaler's behavior is to modify the spec of the Fleet with an updated replicas count. This is wrong, because if another controller is in charge of managing the Agones resources, that controller and Agones will fight over the replicas field and keep updating it, each thinking the other's value is wrong, and cycle like that.

I consider this more a bug than a feature request, because it should really not be done this way. But this ticket has a foot in both categories.
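For reference, a minimal sketch of that kind of setup, assuming placeholder names and buffer values (not taken from this issue) and targeting the Fleet sketched earlier:

```yaml
# Hypothetical FleetAutoscaler with a Buffer policy.
apiVersion: autoscaling.agones.dev/v1
kind: FleetAutoscaler
metadata:
  name: my-fleet-autoscaler    # placeholder name
spec:
  fleetName: my-fleet          # must match the Fleet's metadata.name
  policy:
    type: Buffer
    buffer:
      bufferSize: 2            # ready GameServers to keep available
      minReplicas: 2
      maxReplicas: 10
```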
What you expected to happen:
The expected behavior is something like the Kubernetes built-in Deployment and HorizontalPodAutoscaler, where the spec is not modified and the handling is internal instead. So I think there are three ways of handling this issue (see Kubernetes's documentation about HPAs, link).
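To illustrate the Deployment/HPA pattern being referenced, here is a minimal sketch of an HPA bound to a Deployment; the names and the metric target are placeholders, not taken from this issue:

```yaml
# Illustrative HorizontalPodAutoscaler bound to a Deployment. It drives the
# replica count through the Deployment's scale subresource rather than by
# editing the Deployment manifest.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa             # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # placeholder Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```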
How to reproduce it (as minimally and precisely as possible):
Create a dummy Fleet. Create a FleetAutoscaler with dummy values.

Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): 1.27