elsevier-research / wercker-aws-ecs


Step 6: Update ECS Service times out #4

Closed niiamon closed 9 years ago

niiamon commented 9 years ago

First got this:

Command timed out after no response

Then this after I increased no-response-timeout:

Step 6: Update ECS Service

✖ Waiter ServicesStable failed: Max attempts exceeded

TomFrost commented 9 years ago

Hey @niiamon, I ran into this same thing. For me, the issue turned out to be that the EC2 node running the ECS agent didn't have the appropriate Dockerhub authentication set up on it to download my image. If you SSH into your EC2 node(s) and cat /etc/ecs/ecs.config doesn't show you the credentials you need for your Docker registry, you're likely having the same problem.

Here's Amazon's page on how to correct that: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html
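For reference, on the ECS-optimized AMI those registry credentials are set as agent environment variables in /etc/ecs/ecs.config. A sketch with placeholder values (cluster name, username, password, and email here are all made up):

```
ECS_CLUSTER=my-cluster
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"me@example.com"}}
```

The agent must be restarted after editing this file before it will pick up the new credentials.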

(Though I agree, error reporting on this one could be far better)

niiamon commented 9 years ago

@TomFrost Fortunately, I have the required setup, and my ecs.config is pulled in from an S3 bucket. I've been tailing the ecs-agent logs at /var/log/ecs and I've spotted this:

unable to place a task because the resources could not be found.

The question is what resource is it talking about? A port? Debugging some more and will report back here with what I find.

TomFrost commented 9 years ago

Unfortunately, ECS cannot place two copies of the same task on the same machine if they expose the same host port -- or, for that matter, two different tasks that expose the same port. You'll also see that message if no machine in the cluster has enough RAM or CPU free for your task definition.
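The constraints described above all map to fields in the task definition. A minimal illustrative fragment (the container name, image, and values are made up):

```json
{
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myorg/web:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 80 }
      ]
    }
  ]
}
```

Two tasks from this definition can never share an instance, since each claims host port 80; and an instance needs at least 256 CPU units and 512 MiB of memory unreserved before the scheduler will place the task there at all.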

niiamon commented 9 years ago

@TomFrost You're indeed right. The error wasn't consistent but rather sporadic in nature. It turned out I had some previous tasks that had failed to boot, and the agent kept trying to bring them back up; since all the tasks were using the same port, placement failed.

Got that sorted. I am now deciding how best to expose environment variables from wercker to the docker image that gets pushed to Docker Hub. Any ideas?

TomFrost commented 9 years ago

@niiamon Ideally the internal/docker-push step would let you specify environment variables, but in lieu of that there are two options that I see:

  1. If your entrypoint is a shell interpreter like sh -c, then you can prefix your command with a chain of VAR_NAME="$WERCKER_VAR" assignments, or with export VAR_NAME="$WERCKER_VAR" &&.
  2. You can add an array of environment variables to the task definition json that you use in the aws-ecs deployment step. Normally these would be static, such as:
"environment": [
  { "name": "APP_ENVIRONMENT", "value": "production" },
  { "name": "LOG_LEVEL", "value": "warn" }
]

but you could replace those values with placeholders like this:

"environment": [
  { "name": "APP_ENVIRONMENT", "value": "%APP_ENVIRONMENT%" },
  { "name": "LOG_LEVEL", "value": "%LOG_LEVEL%" }
]

and then, before your ecs deployment, add a script step to your deployment pipeline that uses sed to write a new json file, replacing those values with the environment variables in Wercker.

The latter is more complex, but certainly cleaner from the standpoint of being able to use the same container in your docker repository in different environments.
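The sed-based step described above might be sketched roughly like this (file names, placeholder names, and variable values are illustrative, not from an actual wercker.yml):

```shell
set -e

# Template task definition containing %VAR% placeholders, as in the
# JSON snippet above. In a real pipeline this file would live in the repo.
cat > task-def.template.json <<'EOF'
{
  "environment": [
    { "name": "APP_ENVIRONMENT", "value": "%APP_ENVIRONMENT%" },
    { "name": "LOG_LEVEL", "value": "%LOG_LEVEL%" }
  ]
}
EOF

# In a real pipeline these would come from Wercker's deploy-target
# environment variables rather than being hard-coded here.
APP_ENVIRONMENT="production"
LOG_LEVEL="warn"

# Replace each placeholder with the corresponding environment variable,
# writing the result to the file the aws-ecs step will actually use.
sed -e "s/%APP_ENVIRONMENT%/${APP_ENVIRONMENT}/g" \
    -e "s/%LOG_LEVEL%/${LOG_LEVEL}/g" \
    task-def.template.json > task-def.json

cat task-def.json
```

Because the substitution happens at deploy time, the same image in the Docker repository can be promoted unchanged across environments, with only task-def.json differing per deploy target.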

niiamon commented 9 years ago

I chose the second approach and it works quite well. Many thanks for the help @TomFrost :smile: