jandubois opened 3 years ago
I'm unable to run a container with nerdctl due to a "Too many open files" error in my app.
This limit should be configurable as part of the user's VM configuration so that larger applications can raise it.
This bug is really about the limit inside containers scheduled by Kubernetes. For containers run directly via nerdctl, you can specify the limit on the command line:
```console
$ nerdctl run --rm alpine sh -c "ulimit -n"
1024
$ nerdctl run --ulimit nofile=4096:4096 --rm alpine sh -c "ulimit -n"
4096
$ nerdctl run --ulimit nofile=8192:8192 --rm alpine sh -c "ulimit -n"
8192
```
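If you run workloads through `nerdctl compose`, the same limits can be set declaratively with the Compose-spec `ulimits` key. A minimal sketch (the service name, image, and the 8192 value are just illustrative placeholders):

```yaml
services:
  app:
    image: alpine
    command: sh -c "ulimit -n"
    ulimits:
      nofile:
        soft: 8192
        hard: 8192
```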
Trying Rancher Desktop and hit these errors with out-of-the-box Helm charts for these applications:

Elasticsearch
```
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log
```
Neo4j

```
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
```
RabbitMQ

```
rabbitmq 17:46:43.55 INFO ==> Initializing RabbitMQ...
/opt/bitnami/scripts/librabbitmq.sh: line 750: ulimit: open files: cannot modify limit: Operation not permitted
```
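The RabbitMQ error above is expected `ulimit` behaviour: an unprivileged process may raise its soft limit up to the hard limit, but not beyond it, so the init script fails when the container's hard limit is too low. A quick demonstration in any POSIX shell:

```shell
#!/bin/sh
# Show the current soft and hard limits for open files.
echo "soft=$(ulimit -Sn) hard=$(ulimit -Hn)"

# Raising the soft limit up to the hard limit is always permitted;
# raising it past the hard limit is what gets "Operation not permitted".
sh -c 'ulimit -Sn "$(ulimit -Hn)" && echo "raised soft to hard: ok"'
```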
Same issue here while installing TiDB via Helm with `helm install tidb-cluster pingcap/tidb-cluster --version=v1.2.6 --namespace=tidb-cluster`; the tikv pod then goes into CrashLoopBackOff:

```
[2022/01/10 15:23:41.864 +00:00] [FATAL] [server.rs:1102] ["the maximum number of open file descriptors is too small, got 1024, expect greater or equal to 82920"]
```

What should I do?
Apple M1 Max
Since nobody indicated which platform they are using, I'm just assuming macOS (or Linux) now. If you are on Windows, this will not work:
Create a `~/Library/Application Support/rancher-desktop/lima/_config/override.yaml` file with a provisioning script:
```yaml
provision:
- mode: system
  script: |
    #!/bin/sh
    cat <<'EOF' > /etc/security/limits.d/rancher-desktop.conf
    * soft nofile 82920
    * hard nofile 82920
    EOF
```
Stop and restart Rancher Desktop, and you should have updated limits in your containers. I've verified this with RabbitMQ; after the restart the container started up automatically.
I've also checked the `dockerd` configuration, which seems to have a larger `nofile` limit by default.
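For reference, dockerd's defaults can also be raised globally via the `default-ulimits` key in `daemon.json` (this is standard dockerd configuration; the 65536 value here is just an illustrative choice, not what Rancher Desktop ships):

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 65536,
      "Hard": 65536
    }
  }
}
```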
Thx @jandubois, this workaround works for Elasticsearch too @stephenpope
I thought for Elastic you also had to increase the `vm.max_map_count` setting. So for anybody else finding this issue, add

```sh
sysctl -w vm.max_map_count=262144
```

to the provisioning script if you need to update the count for Elastic.
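Putting the two settings together, a combined override.yaml provisioning script might look like this (a sketch assembled from the snippets in this thread):

```yaml
provision:
- mode: system
  script: |
    #!/bin/sh
    # Raise the open-file limits for everything in the VM.
    cat <<'EOF' > /etc/security/limits.d/rancher-desktop.conf
    * soft nofile 82920
    * hard nofile 82920
    EOF
    # Elasticsearch also needs a larger mmap count.
    sysctl -w vm.max_map_count=262144
```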
Tested with RD 1.0.0, and this gets everything I needed running (Elastic/RabbitMQ/Neo4J/SQL Server) 🥳 .. I had to increase the limit to 1000121 for HAProxy, if anyone is keeping track of these values :)
Hi all,
Same issue but on Windows... Any help would be really appreciated.
Best regards
I got it working on Windows by creating a file at `%AppData%\rancher-desktop\provisioning\map_count.start` with content:

```sh
#!/bin/sh
sysctl -w vm.max_map_count=262144
```

Then just close and start Rancher Desktop again.
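After the restart you can confirm the setting took effect by reading it straight from the kernel inside the VM (newer Rancher Desktop releases ship an `rdctl` CLI whose `rdctl shell` subcommand opens a shell there):

```shell
#!/bin/sh
# Read the current mmap-count limit from the kernel; Elasticsearch
# wants this to be at least 262144.
cat /proc/sys/vm/max_map_count
```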
Report on user-slack: https://rancher-users.slack.com/archives/C0200L1N1MM/p1633033996169800
Default setting for `ulimit -n` is 1024, same as in most distros. Should we increase it?