aws / aws-sdk


Can update the desired task count independently of existing Service Auto Scaling in ECS Service #625

Closed vishnukumarkvs closed 2 months ago

vishnukumarkvs commented 8 months ago

Describe the bug

The desired task count of an ECS service can be updated regardless of the min and max capacity configured in Service Auto Scaling. For example, we are able to update the desired count to 0 even though the minimum task count in Service Auto Scaling is 1.

Expected Behavior

The desired task count of an ECS service should stay within the min and max boundaries configured in Service Auto Scaling.

Current Behavior

The desired count can be updated independently of the Service Auto Scaling min and max boundaries. The number of running tasks then matches the desired count, and the service does not scale within the boundaries defined in Service Auto Scaling.

Reproduction Steps

// Reproduces the issue: UpdateService accepts a DesiredCount outside the
// min/max boundaries registered in Service Auto Scaling.
func upscaleEcsService(client *ecs.Client, autoScalingClient *applicationautoscaling.Client, clusterName string, serviceName string) {
    input := &ecs.UpdateServiceInput{
        Service:      aws.String(serviceName),
        Cluster:      aws.String(clusterName),
        DesiredCount: aws.Int32(3),
    }

    _, err := client.UpdateService(context.TODO(), input)
    if err != nil {
        fmt.Println(err)
    }
}
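
For completeness, a minimal sketch of how the snippet above could be wired into a runnable program; the cluster and service names are hypothetical placeholders, and the import block assumes the reproduction function lives in the same file:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/applicationautoscaling"
    "github.com/aws/aws-sdk-go-v2/service/ecs"
)

func main() {
    // Load region and credentials from the default sources (environment, shared config, etc.).
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }

    ecsClient := ecs.NewFromConfig(cfg)
    autoScalingClient := applicationautoscaling.NewFromConfig(cfg)

    // "my-cluster" and "my-service" are placeholder names, not from the original report.
    upscaleEcsService(ecsClient, autoScalingClient, "my-cluster", "my-service")
}

Replacing aws.Int32(3) with aws.Int32(0) in the reproduction above also succeeds without error even when the registered MinCapacity is 1, which is exactly the behavior described in this report.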

Possible Solution

No response

Additional Information/Context

This can be fixed by using the applicationautoscaling client instead; registering the scalable target with the new boundaries automatically adjusts the desired count to the nearest boundary.

Example code below

// Workaround: update the scalable target's boundaries via Application Auto
// Scaling; the desired count is then adjusted to fall within them.
func upscaleEcsService(autoScalingClient *applicationautoscaling.Client, clusterName string, serviceName string) {
    serviceResourceID := fmt.Sprintf("service/%s/%s", clusterName, serviceName)

    newMinCapacity := int32(2)
    newMaxCapacity := int32(3)

    scalableTargetUpdate := &applicationautoscaling.RegisterScalableTargetInput{
        ServiceNamespace:  types.ServiceNamespaceEcs,
        ResourceId:        aws.String(serviceResourceID),
        ScalableDimension: types.ScalableDimensionECSServiceDesiredCount,
        MinCapacity:       aws.Int32(newMinCapacity),
        MaxCapacity:       aws.Int32(newMaxCapacity),
    }

    _, err := autoScalingClient.RegisterScalableTarget(context.TODO(), scalableTargetUpdate)
    if err != nil {
        fmt.Println("Error updating scalable target:", err)
        os.Exit(1)
    } else {
        fmt.Println("Updated autoscaling policy")
    }
}
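
To confirm the update took effect, a DescribeScalableTargets call can be added afterwards. This is a sketch only, assuming the same imports and clients as the example above:

// Print the currently registered boundaries for the service's scalable target.
func describeScalableTarget(autoScalingClient *applicationautoscaling.Client, clusterName string, serviceName string) {
    serviceResourceID := fmt.Sprintf("service/%s/%s", clusterName, serviceName)

    out, err := autoScalingClient.DescribeScalableTargets(context.TODO(), &applicationautoscaling.DescribeScalableTargetsInput{
        ServiceNamespace:  types.ServiceNamespaceEcs,
        ResourceIds:       []string{serviceResourceID},
        ScalableDimension: types.ScalableDimensionECSServiceDesiredCount,
    })
    if err != nil {
        fmt.Println("Error describing scalable target:", err)
        return
    }

    for _, target := range out.ScalableTargets {
        fmt.Printf("MinCapacity=%d MaxCapacity=%d\n", aws.ToInt32(target.MinCapacity), aws.ToInt32(target.MaxCapacity))
    }
}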

AWS Go SDK V2 Module Versions Used

module go-job

go 1.21.1

require (
    github.com/aws/aws-sdk-go v1.45.25
    github.com/aws/aws-sdk-go-v2/config v1.19.0
    github.com/aws/aws-sdk-go-v2/service/applicationautoscaling v1.22.7
    github.com/aws/aws-sdk-go-v2/service/ecs v1.30.4
)

require (
    github.com/aws/aws-sdk-go-v2 v1.21.2 // indirect
    github.com/aws/aws-sdk-go-v2/credentials v1.13.43 // indirect
    github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.13 // indirect
    github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.43 // indirect
    github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.37 // indirect
    github.com/aws/aws-sdk-go-v2/internal/ini v1.3.45 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.37 // indirect
    github.com/aws/aws-sdk-go-v2/service/sso v1.15.2 // indirect
    github.com/aws/aws-sdk-go-v2/service/ssooidc v1.17.3 // indirect
    github.com/aws/aws-sdk-go-v2/service/sts v1.23.2 // indirect
    github.com/aws/smithy-go v1.15.0 // indirect
    github.com/jmespath/go-jmespath v0.4.0 // indirect
)

Compiler and Version used

go version go1.21.1 windows/amd64

Operating System and version

Windows 11

RanVaknin commented 8 months ago

Hi @vishnukumarkvs ,

Thanks for opening the issue. The behavior you are seeing is a service-side issue, not an SDK issue.

I have two theories as to why this is happening:

  1. Updating the count through the ECS client is considered a high-level interface that is not evaluated against the scaling group's policy. In that case this behavior would be intentional. The problem is that I cannot find any documentation supporting this.

  2. This was an oversight / service-side limitation. Since ECS and Application Auto Scaling are two separate services, there might be issues with evaluating the policy across services. In that case this would be a bug.

Since this is not behavior specific to the SDK, it's not directly actionable by the SDK team. Instead, I reached out to the ECS service team to get some clarity on the matter.

Thanks, Ran~

P103299717

RanVaknin commented 2 months ago

Hi @vishnukumarkvs ,

I heard back from the service team; this is the expected behavior.

If a service's desired count is set below its minimum capacity value, and an alarm triggers a scale-out activity, Service Auto Scaling scales the desired count up to the minimum capacity value and then continues to scale out as required, based on the scaling policy associated with the alarm. However, a scale-in activity does not adjust the desired count, because it is already below the minimum capacity value.

If a service's desired count is set above its maximum capacity value, and an alarm triggers a scale-in activity, Service Auto Scaling scales the desired count down to the maximum capacity value and then continues to scale in as required, based on the scaling policy associated with the alarm. However, a scale-out activity does not adjust the desired count, because it is already above the maximum capacity value.
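
For illustration only, here is a sketch of how a caller could check whether a service's desired count currently sits outside its registered boundaries. The function name and wiring are assumptions, reusing the clients and imports from the snippets earlier in this issue:

// Compare the service's current desired count with its registered
// Service Auto Scaling boundaries and report which side it falls on.
func checkDesiredCountWithinBounds(ecsClient *ecs.Client, autoScalingClient *applicationautoscaling.Client, clusterName string, serviceName string) {
    // Read the service's current desired count.
    svcOut, err := ecsClient.DescribeServices(context.TODO(), &ecs.DescribeServicesInput{
        Cluster:  aws.String(clusterName),
        Services: []string{serviceName},
    })
    if err != nil || len(svcOut.Services) == 0 {
        fmt.Println("Could not describe service:", err)
        return
    }
    desired := svcOut.Services[0].DesiredCount

    // Read the registered Service Auto Scaling boundaries.
    resourceID := fmt.Sprintf("service/%s/%s", clusterName, serviceName)
    astOut, err := autoScalingClient.DescribeScalableTargets(context.TODO(), &applicationautoscaling.DescribeScalableTargetsInput{
        ServiceNamespace:  types.ServiceNamespaceEcs,
        ResourceIds:       []string{resourceID},
        ScalableDimension: types.ScalableDimensionECSServiceDesiredCount,
    })
    if err != nil || len(astOut.ScalableTargets) == 0 {
        fmt.Println("No scalable target registered:", err)
        return
    }
    minCap := aws.ToInt32(astOut.ScalableTargets[0].MinCapacity)
    maxCap := aws.ToInt32(astOut.ScalableTargets[0].MaxCapacity)

    switch {
    case desired < minCap:
        fmt.Printf("Desired count %d is below MinCapacity %d; only a scale-out activity will raise it to the minimum.\n", desired, minCap)
    case desired > maxCap:
        fmt.Printf("Desired count %d is above MaxCapacity %d; only a scale-in activity will lower it to the maximum.\n", desired, maxCap)
    default:
        fmt.Printf("Desired count %d is within [%d, %d].\n", desired, minCap, maxCap)
    }
}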

Thanks, Ran~

github-actions[bot] commented 2 months ago

This issue is now closed.

Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue, feel free to do so.