killianmuldoon closed this issue 7 months ago.
This sounds identical to https://github.com/docker-library/rabbitmq/issues/545. The cause is that Fedora and other rpm-based distros set an astronomically large default for open files (1073741816 vs 65536). So, if you are running on an rpm-based OS that sets an extremely high open-files limit, you need to set `--ulimit nofile=` to a more reasonable value.
I think the root cause is HAProxy allocating resources for each connection, up to the maximum, and deriving that maximum (`maxconn`) from the (very high) kernel default file descriptor limit, which becomes the effective limit when the container runtime's file limit is `infinity`.
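A back-of-the-envelope sketch of why that derivation blows up (the fd-to-connection ratio and per-connection buffer sizes below are assumptions for illustration, not HAProxy's exact accounting):

```shell
# An "infinity" container limit falls through to the rpm-distro kernel
# default; haproxy then derives maxconn from it and sizes allocations to match.
nofile=1073741816                # the Fedora default quoted above
maxconn=$((nofile / 2))          # assumption: ~2 fds per proxied connection
bytes=$((maxconn * 2 * 16384))   # assumption: 2 x 16 KiB buffers per connection
echo "estimated buffer memory: $((bytes / 1024 / 1024 / 1024)) GiB"
# -> estimated buffer memory: 16383 GiB
```

Sixteen-odd terabytes of prospective buffer space against 40GB of RAM plus swap makes an immediate OOM kill unsurprising.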
> If your platform only supports select and reports "select FAILED" on startup, you need to reduce maxconn until it works (slightly below 500 in general). If this value is not set, it will automatically be calculated based on the current file descriptors limit reported by the "ulimit -n" command, possibly reduced to a lower value if a memory limit is enforced, based on the buffer size, memory allocated to compression, SSL cache size, and use or not of SSL and the associated maxsslconn (which can also be automatic).
>
> -- https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#maxconn
Making a note for folks who end up here via Kubernetes:
It looks like Kubernetes relies on this being fixed at the container-runtime level; in my case that runtime is containerd, fixed like this:
# sed -i 's/LimitNOFILE=infinity/LimitNOFILE=65535/' /usr/lib/systemd/system/containerd.service
# systemctl daemon-reload
# systemctl restart containerd
# kubectl delete deployment <asdf>
When starting the haproxy image, it gets OOM-killed almost immediately after using up all the memory on my system (32GB RAM + 8GB swap).
I'm running it with the command below, where the config file is this one.
(Note: I've set a 1GB memory limit in the above to demonstrate the problem, so anyone trying to replicate it doesn't get their system exhausted completely.)
I've tested this behaviour on all versions back to the haproxy 2.2 image.
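For anyone wanting to try this, a reproduction along the lines described might look like the following (the image tag, config filename, and mount path are assumptions, not taken from the thread; the 1GB cap keeps the OOM contained):

```shell
# Run the affected image with a memory cap so the OOM kill stays contained
docker run --rm --memory 1g \
  -v "$PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  haproxy:2.2
```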
This issue can be resolved by setting ulimits as below:
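For reference, the ulimit override is passed at `docker run` time; 65536 mirrors the conventional default mentioned above, and the image tag and mount path are illustrative assumptions:

```shell
# Cap the container's open-files limit so haproxy derives a sane maxconn
docker run --rm --ulimit nofile=65536:65536 \
  -v "$PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  haproxy:2.2
```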
Or by setting the connection limit, e.g.:
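A pinned `maxconn` in the config's `global` section would look like this (1024 is an arbitrary illustrative value, not one taken from the thread):

```
global
    maxconn 1024
```

With `maxconn` set explicitly, HAProxy no longer derives it from the `ulimit -n` value the container inherits.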
The issue looks similar to the one investigated and closed in https://github.com/haproxy/haproxy/issues/1751
I'm wondering why the haproxy docker image uses up so much memory on startup when those limits aren't set, and whether this is purely a docker issue or related to the binary itself. I wasn't able to replicate this using the 2.4 version of haproxy on the same system.
SYSTEM INFORMATION
Docker version
Fedora version
Kernel version