This is stored as a label when the container is started
When each successive container is started, we sum the labels of the running containers and compare the total to the memory available.
Right now, it just logs a warning if we're over the limit: the next step would be to kill (pause?) containers as necessary.
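The check above can be sketched roughly like this. This is a minimal sketch, not the actual implementation: the label key `memory-reservation` is a hypothetical name, and the container label dicts stand in for whatever the Docker SDK returns for running containers.

```python
import logging

def check_memory(container_labels, total_memory):
    """Sum per-container memory labels and warn if the sum exceeds total_memory.

    container_labels: one label dict per running container.
    'memory-reservation' is a hypothetical label key, not necessarily the real one.
    """
    reserved = sum(
        int(labels.get('memory-reservation', 0))
        for labels in container_labels
    )
    if reserved > total_memory:
        # For now we only warn; killing or pausing containers would go here.
        logging.warning('Reserved %d bytes exceeds total %d bytes',
                        reserved, total_memory)
    return reserved
```

For example, two containers labeled with 100 and 50 bytes against a 120-byte total would trigger the warning, while the same pair against a 200-byte total would not.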
It could be possible to get the actual memory in use by a container, but:
- the API call takes time, and I wouldn't want to block on it,
- memory use might be lower at one moment and higher at another: for the other tools it seems pretty stable, but HiGlass goes up and down.
Also:
Store tool defs for demos as objects with sensible defaults, rather than dicts. There was a lot of repetition; this is more readable, and if someone mistypes a key, we catch it.
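A sketch of the objects-with-defaults idea; the field names here are illustrative, not the project's actual schema:

```python
class ToolDef:
    """A demo tool definition with sensible defaults (field names illustrative)."""

    def __init__(self, name, image, mem_reservation_mb=64, port=80):
        self.name = name
        self.image = image
        self.mem_reservation_mb = mem_reservation_mb
        self.port = port

# Only the non-default fields need to be spelled out:
higlass = ToolDef('higlass', 'higlass/higlass-docker', mem_reservation_mb=256)
```

Unlike a plain dict, a mistyped keyword (say, `prot=80`) now raises a `TypeError` at definition time instead of silently adding a bogus key.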
Setting mem_reservation when containers start up: as I understand it, this is a soft limit that Docker only enforces when there is resource contention. It seems like a good idea, but it doesn't fix any definite bug either.
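One way this could look, assuming the Docker Python SDK's `containers.run()`, which accepts a `mem_reservation` keyword (the soft limit described above). The label key is again hypothetical; here we just build the keyword arguments so the sketch stays testable without a Docker daemon:

```python
def run_kwargs(image, mem_reservation_bytes):
    """Build the kwargs we would pass to client.containers.run()."""
    return {
        'image': image,
        'detach': True,
        'mem_reservation': mem_reservation_bytes,  # soft limit, not a hard cap
        # Hypothetical label key, mirroring the reservation for the sum check:
        'labels': {'memory-reservation': str(mem_reservation_bytes)},
    }

# Usage (requires a running Docker daemon):
#   client = docker.from_env()
#   client.containers.run(**run_kwargs('higlass/higlass-docker', 256 * 2**20))
```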
(This is a big step in a new direction, so I'll wait till you both have a chance to comment.)