aws / amazon-ecs-agent

Amazon Elastic Container Service Agent
http://aws.amazon.com/ecs/
Apache License 2.0

Way to disable per-container memory limit in task definition #155

Closed: robbyt closed this issue 8 years ago

robbyt commented 9 years ago

According to the docs, the per-container memory limit is required. This is inconvenient because I use the same Elastic Beanstalk Dockerrun.aws.json file with multiple instance types.

Is there a way to disable this setting?
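
For reference, a minimal sketch of such a file; the `memory` field below is the required per-container limit in question, and the names and values are purely illustrative:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }]
    }
  ]
}
```

A fixed number like `256` only fits one instance type; reusing the same file on a larger instance either wastes memory or requires editing the value.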

euank commented 9 years ago

You're correct that there's currently no way to disable this requirement.

There is some discussion in #124 about memory limits and we're internally tracking the feature request of percentage / proportional memory limits.

Would being able to specify a proportional amount of memory and/or to overcommit memory resolve the issue for you? Or do you want to be able to opt out of setting memory entirely, so that it isn't tracked or scheduled on at all?

robbyt commented 9 years ago

@euank yes, a percentage/proportional amount of memory would be the best solution. I can imagine that overcommit might confuse people.

Disabling the memory setting requirement would be a quick fix for my use case.

0xadada commented 9 years ago

I'm also interested in this discussion.

pmcjury commented 9 years ago

:thumbsup:

morgante commented 9 years ago

This requirement is kind of a deal-breaker for us.

We're running multiple services in our cluster. Each service by itself usually uses very little memory but occasionally spikes while processing a request. When it spikes, it needs nearly the entire instance's memory. However, services never spike at the same time.

As it is, there doesn't seem to be any way to provision this kind of setup using ECS.

Being able to disable resource limits would be hugely beneficial for us.

vad commented 9 years ago

We need this too, because the kernel invokes the OOM killer when a container goes above the threshold, and the OOM killer uses a heuristic that kills processes more or less "randomly" (even processes on the host, not only in containers!). This has happened to us twice: a host stopped responding because a container went above its threshold!

chenliu0831 commented 9 years ago

+1. Need this

gabrianoo commented 8 years ago

+1. Need this

leftclickben commented 8 years ago

:+1: Please make the memory limit optional. I have a similar situation as described above; services can spike but not simultaneously.

mr337 commented 8 years ago

+1

jawadst commented 8 years ago

+1. It's often the case that we run one task per EC2 instance and simply want the container to have access to all of the instance's memory.

razitz commented 8 years ago

+1

allanharris commented 8 years ago

+1. Please allow us to skip the memory limit setting. This is the default Docker behavior!

parajanoff commented 8 years ago

+1

echery commented 8 years ago

:+1:

TheTweak commented 8 years ago

+1

alexmac commented 8 years ago

+1 for opting out of memory limit enforcement. I want to tightly pack services onto a small "dev" EC2 instance for debugging purposes.

bribroder commented 8 years ago

+1

pparth commented 8 years ago

+1

ynelin commented 8 years ago

+1. Docker already provides everything needed for this: https://docs.docker.com/engine/reference/run/#memory-constraints
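
Concretely, the Docker engine's container-create API already distinguishes a hard limit (`Memory`) from a soft reservation (`MemoryReservation`), both in bytes. A rough sketch of the relevant part of the payload, with illustrative values (512 MiB hard, 256 MiB soft):

```json
{
  "Image": "nginx:latest",
  "HostConfig": {
    "Memory": 536870912,
    "MemoryReservation": 268435456
  }
}
```

Under memory pressure the kernel pushes a container back toward its reservation, while exceeding the hard limit is what triggers the OOM kill described earlier. That is essentially the split between scheduling and enforcement that several comments here ask for.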

kyr7 commented 8 years ago

+1

CameronGo commented 8 years ago

+1

ChrisRut commented 8 years ago

+1

mjaverto commented 8 years ago

+1

rjdavis3 commented 8 years ago

+1

kswope commented 8 years ago

+1

I'm not sure this is so easy for them: having a memory limit is what lets the scheduler figure out where a container will fit neatly in the cluster.

allanharris commented 8 years ago

Almost six months have passed since robbyt opened this issue...

ccosgroveOtreva commented 8 years ago

+1

jedimonkey commented 8 years ago

+1

aldarund commented 8 years ago

+1. It's sad that an issue opened this long ago has seen no movement.

nanotkarashish commented 8 years ago

+1

tallavi commented 8 years ago

+1

tisinno commented 8 years ago

+1. At least allow separate settings for scheduling and for the OOM-kill threshold.

vad commented 8 years ago

Yes, two different settings would be the best solution.

ghaering commented 8 years ago

After an internal discussion at our company, the lack of this feature seems to be the knockout argument against using ECS. I suggest having two memory limits: one for the scheduling algorithm, which is required, and one for the Docker cgroup, which can be >= the former or left out entirely.
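
Sketched against the existing containerDefinitions format, that proposal might look roughly like the following, where `memoryReservation` is a hypothetical field name for the scheduling value, `memory` stays the hard cgroup limit, and the values are illustrative:

```json
{
  "family": "spiky-worker",
  "containerDefinitions": [
    {
      "name": "worker",
      "image": "example/worker:latest",
      "essential": true,
      "memoryReservation": 256,
      "memory": 1024
    }
  ]
}
```

The scheduler would bin-pack on `memoryReservation`, while only the optional `memory` value (which would have to be >= the reservation, or could be omitted) would be passed to Docker as a cgroup limit.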

ghost commented 8 years ago

+1

ghost commented 8 years ago

+1

paulrutter commented 8 years ago

+1

mixja commented 8 years ago

+1

bilalaslamseattle commented 8 years ago

+1.

cristianobaptista commented 8 years ago

+1

cchitsiang commented 8 years ago

+1

ryanwalls commented 8 years ago

+1. This is crazy disappointing. We just spent a couple of weeks of development time switching over to ECS... only to find out that our main use case (HPC) won't work. We had our first process running after a 13-hour day of coding, and it was killed by memory management.

rwyyr commented 8 years ago

+1

JanBednarik commented 8 years ago

+1

toooni commented 8 years ago

+1

elHornair commented 8 years ago

+1

Any news on this?

bshelton229 commented 8 years ago

:+1: Same issue as others. Services need to be able to spike but normally use much less memory. This is an absolute deal breaker.

devotox commented 8 years ago

+1

MaerF0x0 commented 8 years ago

@samuelkarp @euank How many more +1s does this need before we get a response or a solution?