simonbray closed this 12 months ago
We put a limit in place on your servers to restrict abuse. What do you think is a better limit?
ping @sj213
I was wondering, independently of individual server configuration, if there was a way to rewrite the tool so that not every file has to be open at once.
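For example, something along these lines (purely a hypothetical sketch; `records`, the grouping key, and the filenames are made up and not the tool's actual data model):

```python
from collections import defaultdict

def write_groups(records, out_dir):
    """Buffer lines per output file in memory, then write each file
    in a single pass, so only one file handle is open at a time."""
    groups = defaultdict(list)
    for key, line in records:          # records: iterable of (filename_key, line)
        groups[key].append(line)
    for key, lines in groups.items():  # second pass: one open file at a time
        with open(f"{out_dir}/{key}.txt", "w") as fh:
            fh.writelines(lines)
```

The trade-off is buffering everything in memory; alternatively the tool could keep a small pool of open handles and reopen files in append mode as needed.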
> We put a limit in place on your servers to restrict abuse. What do you think is a better limit?
I guess your limit is fine, if I am the first person to complain, and I'm not even using EU any more :)
OSError: [Errno 24] Too many open files
The limit on the number of open file descriptors is 32768 (see `/proc/PID/limits`). This is AFAIK the operating system default and it is quite generous. I'd consider any program that actually keeps that many files open seriously broken. (Maybe the tool in question never calls `close(2)` on the files it opens...?)
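For what it's worth, the effective limits can be checked from within Python itself (just a sketch; this only inspects the limits, it says nothing about what the tool does):

```python
import resource

# Soft and hard limits on open file descriptors for this process;
# the same numbers appear in the "Max open files" row of /proc/self/limits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")
```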
> This is AFAIK the operating system default
Correction: it is not. The default soft limit on open FDs is 1024 in all Linux kernels I checked. The default hard limit varies: it apparently used to be 4096 in 3.x kernels, in Red Hat 4.x kernels I saw 262,144, and at least on Ubuntu since 4.x it is a whopping 1,048,576. Note, though, that a process would have to call `setrlimit(2)` first in order to go beyond 1024. I still fail to see what a valid use case for one million concurrently open files might be...
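In Python that call is exposed through the `resource` module; a process may raise its own soft limit, but only up to the hard limit (a sketch; 4096 is an arbitrary example value):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# An unprivileged process can raise its soft limit, but never above `hard`.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))
```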
Yes, I am not using EU, but I am hitting the limit of 1024 files on our server as well, as I wrote in the issue title.
I don't think changing the open-file limits is necessary or sensible. I just opened the issue in case anyone has the time and enthusiasm to optimize the Galaxy tool a bit. As far as I'm concerned, I have already found a workaround.