aws / containers-roadmap

This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).
https://aws.amazon.com/about-aws/whats-new/containers/

[Fargate] [request]: Service Connect Health Checks #2334

Open muzfuz opened 6 months ago

muzfuz commented 6 months ago


Tell us about your request
Service Connect does not support application health checks. This means it attempts to route traffic to containers before they're ready.

We would like Service Connect to have configurable health checks similar to ALBs, or to respect the Docker healthchecks which are configured in the task definition.

Which service(s) is this request for?
Fargate - specifically Service Connect options.

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We run several "big" services which have a long startup time (10 to 60 seconds). These services communicate privately using Service Connect.

We noticed that we were getting served 503s during deploys or container restarts.

After some back and forth with AWS Support we were able to establish the following sequence of events:

  1. Deployment starts
  2. All containers in the task start booting
  3. Service Connect sidecar is marked "HEALTHY" in the task
  4. Clients start receiving 503 responses
  5. The main app container finishes booting and is also marked "HEALTHY"
  6. The 503s stop

I received the following guidance on this from AWS Support:

Service Connect registers the task to CloudMap during the ACTIVATING stage of the task lifecycle [1], and from my testing traffic is sent to the new task as soon as the task enters RUNNING status. ... Based on my testing, it appears that Service Connect does not wait for the container to enter into "HEALTHY" status before sending traffic.

From our POV we would like one of two things to be true here.

  1. Service Connect waits for the app container to be marked as "HEALTHY" by the task before routing traffic to it. OR
  2. Service Connect provides a way of configuring a health check endpoint.

Because Service Connect currently routes traffic to a task as soon as the Envoy sidecar becomes healthy, we have to do some fairly aggressive retries in the client applications. That papers over the cracks, but requests can still fail.

Are you currently working around this issue?
Yes. A combination of aggressive retries and long Docker health checks has proven effective.
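For illustration, the retry half of that workaround can be sketched as follows. This is a hypothetical Python example (not our actual client code); the point is simply that clients must absorb 503s with backoff until the new task's app container is actually serving:

```python
import time

def call_with_retries(fn, attempts=5, base_delay=0.1):
    """Call fn, retrying with exponential backoff on failure.

    This papers over the window in which Service Connect routes
    traffic to a task whose app container is still booting.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Simulate an upstream that returns 503s twice while booting, then recovers.
calls = {"n": 0}
def flaky_upstream():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503 Service Unavailable")
    return "200 OK"

print(call_with_retries(flaky_upstream))  # succeeds on the third attempt
```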

We received the following guidance from AWS Support:

Configure a container health check in the ECS task definition with startPeriod set to 60. Because of the startPeriod setting, although the new task starts out UNHEALTHY, ECS does not replace it for 60 seconds. At the same time, the old task is kept alive. Service Connect has both tasks registered in CloudMap and will send traffic to both using round-robin.

This solution "works", but it is merely a sticking plaster: it can still lead to failed requests and needlessly extends deploy and restart times.

kshivaz commented 6 months ago

Thanks for this request. We’d like to check into the behavior you saw more thoroughly. Could you share your support case ID so we can look at your specific setup?

muzfuz commented 6 months ago

@kshivaz thank you for looking at this. The case ID is 171276418200173.

kshivaz commented 6 months ago

Thanks @muzfuz.

ilkansh8 commented 4 months ago

This issue unfortunately defeats the purpose of using Service Connect.

krrrr38 commented 4 months ago

Case ID 171998743200404 shows that CloudMap also doesn't check the health check result.

bmariesan commented 3 months ago

@kshivaz I can confirm the same is happening randomly for us during scale in/out or restarts

kshivaz commented 3 months ago

@bmariesan : Please open a support case with the details of your environment / configuration and error logs, so our team can look into it.

gillesbergerp commented 3 months ago

We face the same issue. We have multiple applications (mostly Java and JRuby) deployed that communicate via Service Connect. During container startup, we frequently see requests hitting a task where the application container is not ready yet.

jenademoodley commented 3 months ago

As a way to prevent this from occurring, an additional container can be added to the task definition with a dependency on the application container being marked HEALTHY (this requires a health check to be defined for the application container). The additional container should be marked non-essential and designed to exit.

This works because ECS transitions a task to the RUNNING state only once all containers in the task have started. Since the additional container cannot start until the application container is HEALTHY, the task cannot reach RUNNING until the application is ready.

I tested this approach using a container which intentionally sleeps for 60 seconds before starting the webserver process, plus an additional non-essential alpine container. Without the additional container, 503s are returned during a deployment as expected; with it, no 503s are observed.

Seiya6329 commented 2 months ago

Thanks @jenademoodley for posting the workaround!

We (thanks to @rishabhpar) have also validated that this workaround is effective. Please find the workaround guideline below.

Steps to mitigate the problem:

  • Add a container health check to the main application container in the task definition. For example:

"healthCheck":{
      "command":[
         "CMD-SHELL",
         "curl -f http://localhost/ || exit 1"
      ],
      "interval":30,
      "timeout":5,
      "retries":3,
      "startPeriod":60
   }

This is adjustable to your preferences.

  • Add a second container to the list of container definitions. This is a dummy container designed to exit immediately and not consume resources; it will only spin up once the main container is healthy. See the dependsOn section:
{
   "name":"serviceconnecthold",
   "image":"public.ecr.aws/docker/library/alpine:edge",
   "cpu":0,
   "portMappings":[],
   "essential":false,
   "environment":[],
   "environmentFiles":[],
   "mountPoints":[],
   "volumesFrom":[],
   "dependsOn":[
      {
         "containerName":"<---NAME OF THE MAIN CONTAINER--->",
         "condition":"HEALTHY"
      }
   ],
   "systemControls":[]
}
  • Submit the new task definition revision
  • Update the service with the new task definition revision.
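For reference, the two pieces above combine into a containerDefinitions fragment along these lines. The container name "app", the image placeholder, and the curl endpoint are illustrative; adjust them to your own task definition:

```json
"containerDefinitions": [
   {
      "name": "app",
      "image": "<---YOUR APPLICATION IMAGE--->",
      "essential": true,
      "healthCheck": {
         "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
         "interval": 30,
         "timeout": 5,
         "retries": 3,
         "startPeriod": 60
      }
   },
   {
      "name": "serviceconnecthold",
      "image": "public.ecr.aws/docker/library/alpine:edge",
      "essential": false,
      "dependsOn": [
         {
            "containerName": "app",
            "condition": "HEALTHY"
         }
      ]
   }
]
```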

bmariesan commented 2 months ago

I can also confirm that the workaround above works like a charm

jamesmudd commented 1 month ago

I can also confirm this workaround works. It would be great if AWS could implement a proper fix, though.

thiagoscodelerae commented 1 month ago

@muzfuz and all, is the service-connect container being marked as "unhealthy" in your case?

I have several ECS tasks running on EC2 using ECS service connect for internal communication. Sometimes, during new deployments, the ECS service connect container linked to these tasks becomes unhealthy, preventing the deployment from succeeding. This issue doesn't occur with every deployment.

These ECS tasks are GPU-based and take some time to start. I don't have any health check configured for the task definitions.

muzfuz commented 3 weeks ago

I can confirm this is very much still an issue, cc @thiagoscodelerae.

I'll use the above mentioned ~hack~ workaround for now, thank you @jenademoodley 😄