aws / amazon-ecs-agent

Amazon Elastic Container Service Agent
http://aws.amazon.com/ecs/
Apache License 2.0

Way to disable per-container memory limit in task definition #155

Closed robbyt closed 8 years ago

robbyt commented 9 years ago

According to the docs, the per-container memory limit is required. This is inconvenient because I use the same Elastic Beanstalk Dockerrun.aws.json file with multiple instance types.

Is there a way to disable this setting?

samuelkarp commented 8 years ago

As a general rule, we don't comment on the future direction of the service, but the +1s and use case details help us understand your needs.

mixja commented 8 years ago

One other related annoyance is that non-essential containers in your task definition count against your memory allocation, regardless of whether they are actually running. I have a use case for a non-essential container that needs to run when the other services in the task definition start, which then exits and never runs again. So I have to allocate enough memory for it to run, and this is wasted memory I can't use anywhere else.

devotox commented 8 years ago

@mixja I had the same problem. My situation may be different from yours, but I fixed it by having another container build what is needed. Since that container doesn't need to actually run, I set minimal memory on it, then used volumesFrom on my api container and had the api container build the application. Hope that helps.
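For readers hitting the same issue, the workaround described above might look roughly like this in a task definition (the container names, images, and memory values here are illustrative, not from the original comment):

```json
{
  "containerDefinitions": [
    {
      "name": "builder",
      "image": "example/builder:latest",
      "memory": 16,
      "essential": false
    },
    {
      "name": "api",
      "image": "example/api:latest",
      "memory": 256,
      "essential": true,
      "volumesFrom": [{ "sourceContainer": "builder" }]
    }
  ]
}
```

The non-essential builder container exits after populating its volume, and the api container reads the build output through volumesFrom.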

matthill commented 8 years ago

+1

After building out a testing environment, we ran into this issue and were forced to use Elastic Beanstalk instead of ECS.

hedless commented 8 years ago

+1

oddurmagg commented 8 years ago

+1

The best solution seems to be separating this into two configuration values: one for scheduling and one for the cgroup (optional). The default behaviour could be to use the same value for both, but allow people to override it if needed.

Also, being able to specify the cgroup value as a percentage of the host's RAM would be awesome.

When used from Elastic Beanstalk with the multi-docker setup (which will only ever deploy a single task configuration to a single node), allocating a total of more than 100% of the host should not be possible.

ooleem commented 8 years ago

+1

reidwelliver commented 8 years ago

+1 - It seems like making this setting optional would be the most flexible way of handling everyone's use cases.

DrMegavolt commented 8 years ago

+1

ceecer1 commented 8 years ago

+1

erickponce commented 8 years ago

+1

amanzyuk commented 8 years ago

+1

DarkCenobyte commented 8 years ago

+1

atlantux commented 8 years ago

+1

bfil commented 8 years ago

I've been trying to migrate to multi-container Docker to be able to support multiple port mappings (since single-container environments don't allow more than one mapping...) and just found out about this issue, which also prevents me from using the service.

Usually we run larger instances on production, but this forces us to specify the memory requirements in our Dockerrun file, which is usually published as a single file for each new version of the service running. This would force us to either align all environments to have the same instance sizes, or publish different Dockerrun files with the same Docker image version in it just to handle the different instance sizes based on our environments requirements.

I would expect the memory requirements to be optional or at least support percentage values.

vinayan3 commented 8 years ago

+1. I use ECS with various size machines. I have to update the task definition all the time or have different ones for different machines. A % would work better and make my life easier.

nadavshatz commented 8 years ago

+1

theikkila commented 8 years ago

+1

josephsiefers commented 8 years ago

+1

nealhardesty commented 8 years ago

+1

sirbotta commented 8 years ago

+1

djtarazona commented 8 years ago

+1

magnus-larsson commented 8 years ago

+1

mattijsf commented 8 years ago

+1

vinayan3 commented 8 years ago

This is hopeless. It's been almost a year.

bilalaslamseattle commented 8 years ago

@vinayan3 yep. We moved off to Kubernetes. Never looked back. Between agent disconnects, general lack of 12-factor design and so much more, moving off of ECS is a decision I don't regret.

vad commented 8 years ago

ECS is hopeless! It's a dead project. My hope is that AWS will move to Kubernetes too (in some way that makes it easier to use) or that GCE will "crush" AWS.

toooni commented 8 years ago

This looks promising: https://blog.docker.com/2016/06/azure-aws-beta/ - On the bottom of the page you can sign up for the beta.

ynelin commented 8 years ago

Same here :). Moved part of the services to gcloud already

pieterlange commented 8 years ago

FWIW, in December 2015 I was tasked with setting up a container hosting platform, and devs were already using ECS on CoreOS. I briefly thought about moving forward with ECS, but between agent disconnects and the general limitations/feature incompleteness of the platform, the decision to move to Kubernetes was pretty easy. Never looked back. I'm not sure if Amazon even cares about ECS. It seems to get the same support as their Elasticsearch service (supporting existing users, but discouraging any new adopters). Maybe these new technologies are still too much of a moving target to commoditize.

samuelkarp commented 8 years ago

Hey everyone,

We wanted to let you know a bit about how we're planning to address this feature request.

As we understand it, the problem most people here seem to be facing is related to the fact that we use the memory parameter of a task definition to mean two different things:

  1. A placement constraint to prevent ECS from placing applications together that require more memory in total than is available on the host
  2. A hard limit, enforced by the Linux kernel, of how much memory can be allocated by the cgroup of each container

Based on the feedback we see in this thread and other customer feedback, it sounds like applications which temporarily spike in memory usage but require less at steady-state are not well served by this situation; even though they may require memory temporarily, there is a desire to set the placement based on the steady-state memory requirements rather than the spikes.

We'd like to separate placement aspects from hard limits, while preserving the ability to properly schedule and to influence applications to behave properly. Separating placement aspects is something we can do in ECS by adjusting the task definition and we think we can accomplish the second goal of influencing application behavior through memory reservation. From the Docker docs:

Memory reservation is a kind of memory soft limit that allows for greater sharing of memory. Under normal circumstances, containers can use as much of the memory as needed and are constrained only by the hard limits set with the -m/--memory option. When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit. [...] Memory reservation is a soft-limit feature and does not guarantee the limit won’t be exceeded. Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is allocated based on the reservation hints/setup.
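In plain docker run terms, the hard and soft limits quoted above correspond to the -m/--memory and --memory-reservation flags (the image name and values here are illustrative):

```shell
# Hard limit: the kernel caps the container at 256 MiB.
# Soft limit: under memory contention, Docker pushes usage back toward 128 MiB.
docker run -m 256m --memory-reservation 128m example-image
```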

We're thinking about implementing this. For example, the task definition would be updated as follows:

{
  "containerDefinitions": [
    {
      "name": "mycontainer",
      "memoryReservation": 10,
      ...
    },
    ...
  ],
  ...
}

This would translate to the Docker HostConfig including:

{
    ...
    "HostConfig": {
        ...
        "MemoryReservation": 10485760,
        ...
    },
    ...
}
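The 10485760 above is just the task definition's MiB value converted to the bytes the Docker API expects; a one-line sketch of the conversion:

```python
def mib_to_bytes(mib: int) -> int:
    # ECS task definitions express memory in MiB; the Docker HostConfig
    # expects bytes, so 10 MiB becomes 10 * 1024 * 1024 = 10485760.
    return mib * 1024 * 1024
```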

Please let us know if this does not address your concerns.

Thanks, Sam

djenriquez commented 8 years ago

Great solution!

magnus-larsson commented 8 years ago

Looks useful to me!

Will it be available for evaluation soon, e.g. in a test environment or similar?

tallavi commented 8 years ago

Please let us know if this does not address your concerns.

Hi Sam,

Thanks for addressing this. I'm using compose files to deploy containers. How will this configuration option be available to users like me? What will the current mem_limit field in the yaml file be used for after this feature is implemented?

rmbrich commented 8 years ago

Sam - the other important aspect here is the desire to set memory by percentage - i.e. container 1 fills 75% of host, container 2 fills 10%. This is helpful when changing host VM sizes; you don't have to then manually change all Dockerrun files. See first response to OP by "euank".

PepijnK commented 8 years ago

the other important aspect here is the desire to set memory by percentage

So if 2 tasks require 60% they would be run on 2 different instances? This could only work if you have homogeneous instances.

samuelkarp commented 8 years ago

@djenriquez and @magnus-larsson, I'm glad this will work for you! @magnus-larsson, we are working on the timelines and will announce it here soon.

samuelkarp commented 8 years ago

@tallavi, I looked at the Compose File Reference and it does not appear to yet have support for memory reservation. I think ideally we'd want to keep the same syntax as the Compose File, but I wouldn't want to overload mem_limit to mean two different things and I'd still want to provide the option to set both fields. I don't know if Docker Compose has plans to support memory reservation. @uttarasridhar, what do you think about this?

samuelkarp commented 8 years ago

@rmbrich, thanks for the feedback! From reading the comments here, it looks like the request for a percentage-based memory limit seems mostly related to people using Elastic Beanstalk's multi-container support, which uses ECS for managing the containers. With Elastic Beanstalk, the model is to use separate instances per application such that each task run through ECS runs on the instance by itself. In this case, a percentage-based approach seems useful: since only one task ever runs and since the size of the instances is homogeneous in the cluster, you're only defining memory limits between containers defined in the same task rather than limits between different tasks. I think the solution we proposed above allows this use case though: you set memoryReservation to fit the smallest instance you expect to run on and leave memory unspecified; containers will be scheduled to the instances and won't be limited by a memory limit intended for a smaller instance when you use a larger instance.

In general I think a percentage-based approach can actually have some really surprising behavior. If you were to take a task definition with memory specified as a percentage (say, 20%) and run it in a heterogeneous cluster, the actual amount of memory available to the containers would vary based on where the task was placed. I don't think this is expected that a container launched on a t2.micro might have 20% of 1 GB but that same container launched on a c4.4xlarge would end up with 20% of 30 GB.
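To make the surprise concrete, approximating a t2.micro as 1 GiB and a c4.4xlarge as 30 GiB (a sketch, not ECS behavior):

```python
def pct_of_host_mib(percent: int, host_mib: int) -> int:
    # A percentage-based limit resolves to a different absolute value
    # on every instance size, which is the surprising part.
    return host_mib * percent // 100

small = pct_of_host_mib(20, 1024)       # 204 MiB on a ~1 GiB host
large = pct_of_host_mib(20, 30 * 1024)  # 6144 MiB on a ~30 GiB host
```

The same 20% task definition yields a roughly 30x difference in the actual limit depending on placement.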

jmenga commented 8 years ago

Our deployment model uses EC2 Container Service, and instances are homogeneous for a given application in a given environment. The percentage approach would therefore be very useful, but we are already dealing with that today; having an absolute memoryReservation as proposed is a major step forward, so enough consultation, go forth and deliver the feature :)

tallavi commented 8 years ago

@tallavi, I looked at the Compose File Reference and it does not appear to yet have support for memory reservation. I think ideally we'd want to keep the same syntax as the Compose File, but I wouldn't want to overload mem_limit to mean two different things and I'd still want to provide the option to set both fields. I don't know if Docker Compose has plans to support memory reservation. @uttarasridhar, what do you think about this?

@samuelkarp, would it be awful to add this field to the yaml even if the official format doesn't have it? Another possibility I can think of is to disable hard memory limits at task/service scope, meaning, to have an option for the ecs-cli compose command that starts the task/service without memory limits.

benjaminwai commented 8 years ago

+1 memoryReservation.

akvadrako commented 8 years ago

We also need this feature, and though the proposed design with memoryReservation would give us enough control, the naming of and interaction between the two fields is confusing. More intuitively, I would propose minimum and maximum fields, either like:

{
  "memoryMinimum": 5,
  "memoryMaximum": 10
}

or even better:

{
  "memory": { "min": 5, "max": 10 }
}

So a system must have at least min for the task to be scheduled and that amount will be reserved. Also, the task will be limited to max usage.
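A toy sketch (hypothetical, not ECS code) of the min/max semantics proposed here:

```python
def try_schedule(host_free_mib: int, task: dict):
    # Proposed semantics: a host needs at least `min` free MiB to accept
    # the task; `min` is reserved, and the running task is capped at `max`.
    mem = task["memory"]
    if host_free_mib < mem["min"]:
        return None  # not schedulable on this host
    return {"reserved_mib": mem["min"], "hard_limit_mib": mem["max"]}
```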

tallavi commented 8 years ago

So a system must have at least min for the task to be scheduled and that amount will be reserved. Also, the task will be limited to max usage.

@akvadrako, I agree it's a lot clearer like this.

@samuelkarp, please don't leave us - compose users - hanging with no support for this feature!

PepijnK commented 8 years ago

Hmmm, I'm doubting whether this feature request is realistic. How would this soft limit work with regard to auto-scaling? When do you decide to spin up another instance? If the Docker host decides to add another container to the instance, would it have to kill and restart the others in order to max out memory consumption?

I'm actually in favor of pre-determined behaviour, as all the proposals here imply a dynamic behaviour that you don't want, especially not in production environments. So I doubt the usefulness of it, since you want to keep dev/test/prod as equal as possible (that's why you chose Docker in the first place).

Of course it sounds very handy, as if you don't have to think about your application's memory consumption at all. Probably most of the time you run your app/web service on a machine with plenty of memory. Now suddenly you have to think about a maximum, and that we don't like.

But it is a good thing. For example, we found out our backend service requires a lot of memory, but we never noticed it since it was running on EB with a micro instance fully at its disposal. It makes you think about the architecture, and I'm glad we have that insight now. Tweaking the memory setting for an ECS task is a bit hit-and-miss, but with enough testing you can figure it out.

mpestritto commented 8 years ago

Hi @samuelkarp

Was this just rolled out to the ECS web console? If so, I'm not sure which component doesn't support the change yet: the ecs-agent or the aws-cli. Do you know which this could be related to?

Thanks.

Parameter validation failed: Unknown parameter in containerDefinitions[0]: "memoryReservation", must be one of: name, image, cpu, memory, links...

tedder commented 8 years ago

Sir Jeff put up a post: https://aws.amazon.com/about-aws/whats-new/2016/08/amazon-ec2-container-service-now-supports-networking-modes-and-memory-reservation/

It's explained in the docs: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions

Now waiting for it to trickle down to CloudFormation so I can use it.

CameronGo commented 8 years ago

The hard-limit memory field is still required. I need to do some experimentation to figure out how the two settings work together and how they affect placement of tasks on a host.

tedder commented 8 years ago

Based on the docs, I don't think setting a memory field will affect placement if the memoryReservation is set.

CameronGo commented 8 years ago

Ahh - I didn't read closely enough. I think you are right. Also, the phrasing of this seems to address my comment:

You must specify a non-zero integer for one or both of memory or memoryReservation in container definitions.
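That rule, plus the documented requirement that memory be greater than memoryReservation when both are set, could be sketched as follows (a hypothetical helper, not the actual ECS validation code):

```python
def validate_memory(container: dict) -> None:
    # At least one of `memory` (hard limit) or `memoryReservation`
    # (soft limit) must be a non-zero integer.
    memory = container.get("memory")
    reservation = container.get("memoryReservation")
    if not memory and not reservation:
        raise ValueError("one of memory or memoryReservation is required")
    # When both are set, the soft limit must stay below the hard limit.
    if memory and reservation and reservation >= memory:
        raise ValueError("memoryReservation must be less than memory")
```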