Closed robbyt closed 8 years ago
You're correct that there's currently no way to disable this requirement.
There is some discussion in #124 about memory limits and we're internally tracking the feature request of percentage / proportional memory limits.
Would being able to specify a proportional amount of memory and/or being able to overcommit memory resolve the issue for you? Do you want to just be able to opt out of setting memory and having it tracked/scheduled upon at all?
@euank yes, a percentage/proportional amount of memory would be the best solution. I can imagine that overcommit might confuse people.
Disabling the memory setting requirement would be a quick fix for my use case.
i'm also interested in this discussion.
:thumbsup:
Having this is kind of a deal-breaker for us.
We're running multiple services in our cluster. Each service by itself usually uses very little memory but occasionally spikes while processing a request. When it spikes, it needs nearly the entire instance's memory. However, services never spike at the same time.
As it is, there doesn't seem to be any way to provision this kind of setup using ECS.
Being able to disable resource limits would be hugely beneficial for us.
We need this too, because the OS invokes the oom-killer when containers go above the threshold, and the oom-killer uses a heuristic that kills processes more or less "randomly" (even processes on the host, not only in containers!). This has happened to us twice: a machine (host) stopped responding because a container went above the threshold!
+1. Need this
+1. Need this
:+1: Please make the memory limit optional. I have a similar situation as described above; services can spike but not simultaneously.
+1
+1 It's often the case that we run 1 task per EC2 instance and want the container to simply have access to all the memory
+1
+1 Please allow us to skip memory limit setting. This is default docker behavior!
+1
:+1:
+1
+1 for opting out of memory limit enforcement - I want to tightly pack services onto a small "dev" ec2 instance for debugging purposes
+1
+1
+1 In Docker there is everything for our request https://docs.docker.com/engine/reference/run/#memory-constraints
+1
+1
+1
+1
+1
+1
I'm not sure if this is so easy for them. Having a memory limit lets them figure out where a container will fit neatly in a cluster.
Almost 6 months have passed since robbyt opened this issue...
+1
+1
+1.. so sad that an issue opened this long ago has seen no movement
+1
+1
+1 At least allow different settings for scheduling purposes and the oom kill threshold
yes, 2 different settings are the best solution
After an internal discussion at our company, the lack of this feature seems to be the knockout blow for using ECS. I suggest having two memory limits: one for the scheduling algorithm, which is required, and another for the Docker cgroup, which can be >= the former or left out entirely.
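As a sketch of what that two-limit proposal might look like in an ECS task definition (the field names here are illustrative, not an existing API):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "example/app:latest",
      "memoryReservation": 256,
      "memory": 2048
    }
  ]
}
```

Here `memoryReservation` would be the required value the scheduler packs instances against, while `memory` would be the optional hard cgroup limit — either >= the reservation, or omitted entirely for no cap.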
+1
+1
+1
+1
+1.
+1
+1
+1. This is crazy disappointing. Just spent a couple of weeks of development time switching over to ECS... only to find out that our main use case (HPC) won't work... We had our first process running after a 13-hour day of coding... and it was killed by memory management.
+1
+1
+1
+1
Any news on this?
:+1: Same issue as others. Services need to be able to spike but normally use much less memory. This is an absolute deal breaker.
+1
@samuelkarp @euank How many more +1s does this need before we get a response or a solution?
According to the docs, the per-container memory limit is required. This is inconvenient, because I use the same ElasticBeanstalk `Dockerrun.aws.json` file with multiple instance types. Is there a way to disable this setting?
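For reference, the setting in question is the per-container `memory` field that a multicontainer `Dockerrun.aws.json` requires — roughly like this (image name is a placeholder):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web:latest",
      "memory": 512,
      "essential": true
    }
  ]
}
```

That hard-coded `memory` value is what doesn't translate well when the same file is deployed to instance types of different sizes.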