What is the new feature about?

gefyra run starts local containers with no constraints on CPU or memory usage. By default, any such container can consume unlimited CPU and memory and can therefore make the entire Docker host (with all other containers/services) unresponsive.
A second problem is that CPU/memory issues may stay undiscovered while developing with Gefyra and only surface once the workload is deployed to production.
It is good practice to put CPU/memory constraints on containers, in particular when running sets of containers.
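Docker itself already supports such limits when a container is created, so gefyra run would mainly need to pass the values through. A minimal sketch of what that could look like with the Docker SDK for Python (the Gefyra-side option names are only proposals; `nano_cpus` and `mem_limit` are existing docker-py parameters, the image name is a placeholder):

```python
import docker

# Hypothetical values that gefyra run could receive via --cpu / --memory.
cpu_limit = 0.5        # half a CPU core
memory_limit = "512m"  # 512 MB

client = docker.from_env()

# docker-py exposes Docker's resource constraints directly on containers.run():
# nano_cpus is the CPU quota in units of 1e-9 CPUs, mem_limit caps memory usage.
container = client.containers.run(
    "my-app:latest",  # placeholder image name
    detach=True,
    nano_cpus=int(cpu_limit * 1e9),
    mem_limit=memory_limit,
)
print(container.short_id)
```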
CLI option ideas:
- add options like --cpu <value>, --memory <value> and similar parameters
- or --cpu-from <pod/deployment>, --memory-from <pod/deployment>
- or a generic design: --apply-container-settings-from <settings>:<pod/deployment>, e.g. --apply-container-settings-from cpu,memory:my-pod (see the sketch after this list)
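For the --cpu-from / --apply-container-settings-from ideas, the limits could be read from the cluster and translated into Docker constraints. A rough sketch, assuming the kubernetes Python client and docker-py; the helper names and the pod/image names are made up for illustration, and the unit conversion is deliberately simplified:

```python
import docker
from kubernetes import client as k8s_client, config as k8s_config


def read_limits_from_pod(pod_name: str, namespace: str = "default") -> dict:
    """Fetch cpu/memory limits of the first container in a pod (illustrative helper)."""
    k8s_config.load_kube_config()
    pod = k8s_client.CoreV1Api().read_namespaced_pod(name=pod_name, namespace=namespace)
    return pod.spec.containers[0].resources.limits or {}


def to_docker_kwargs(limits: dict) -> dict:
    """Translate Kubernetes resource notation into docker-py run() kwargs (simplified)."""
    kwargs = {}
    cpu = limits.get("cpu")
    if cpu:
        # "500m" -> 0.5 CPUs -> nano_cpus
        cpus = int(cpu[:-1]) / 1000 if cpu.endswith("m") else float(cpu)
        kwargs["nano_cpus"] = int(cpus * 1e9)
    memory = limits.get("memory")
    if memory:
        # "512Mi" -> "512m" (docker-py accepts b/k/m/g suffixes)
        kwargs["mem_limit"] = memory.lower().replace("i", "")
    return kwargs


# e.g. gefyra run ... --apply-container-settings-from cpu,memory:my-pod
limits = read_limits_from_pod("my-pod")
docker.from_env().containers.run("my-app:latest", detach=True, **to_docker_kwargs(limits))
```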
Why would such a feature be important to you?
- mitigate the risk of one container bringing down the entire dev machine
- find CPU/memory issues early during development (with Gefyra)
Anything else we need to know?
No response