By default, ulimit on most Linux systems caps open file descriptors at something like 1K or 4K. Any chance we're hitting up against that limit?
(Seems that is what's happening here. The soft limit on my machine is 1024; I just increased it to 10K. No panic so far, though the workload is taking a long time to run.)
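For reference (not from the original thread), the current limits can be checked, and the soft limit raised up to the hard limit, from the shell; the 10K value below just mirrors the one mentioned above:

```
ulimit -Sn        # current soft limit on open file descriptors (1024 here)
ulimit -Hn        # hard limit; the soft limit can be raised up to this without root
ulimit -n 10000   # raise the soft limit for this shell and its children
```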
Seems that the cause is Linux's limit. I followed this post to increase the number of open files.
So the fix is to configure the machine and increase the nofile limit.
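The post itself isn't linked above; as an assumption, the usual way to make a higher limit persistent is a nofile entry in /etc/security/limits.conf (applied by pam_limits at login), along these lines:

```
# /etc/security/limits.conf — raise open-file limits for all users (assumed values)
# <domain> <type> <item>  <value>
*          soft   nofile  10000
*          hard   nofile  10000
```

A new login session is needed for the change to take effect.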
Command run:
example_requests.json is generated by generator.py to try to saturate my cloudlab node with our full set of applications. The machine has 164 GB of memory; with 4 GB reserved (not available to VMs), it can run 1250 128 MB VMs concurrently (160 GB = 160,000 MB, and 160,000 / 128 = 1250). But VM creation fails after the workload has run for a few seconds, and all of the failures point to the same problem: "Too many open files". Here's the trace:

stderr is garbled by multiple threads writing to it at the same time, but here are a few other error messages:
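As an illustration only (not output from this issue), the failure mode is easy to reproduce by lowering a shell's soft limit and opening more descriptors than it allows:

```
# Reproduce EMFILE in a throwaway shell (assumed numbers, not from the issue):
ulimit -n 32                                               # tiny soft limit for this shell
python3 -c 'fds = [open("/dev/null") for _ in range(64)]'  # hold 64 fds open at once
# fails partway through with: OSError: [Errno 24] Too many open files: '/dev/null'
```

Each concurrent VM presumably holds at least a few descriptors open, so 1250 VMs easily blows past the default 1024 soft limit.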