Updated version In the updated version OrthoFinder will detect the issue at the start of a run and print a message with more or less the same steps as those described here:
If you are running an analysis on n species then OrthoFinder needs to be able to open approximately r = n^2 + 50 files. The n-squared files are the .tsv files of orthologs, which are updated as each new gene tree is analysed. The 50 allows for extra files opened by the process. As I don't fully understand how many files Linux opens to run a process, I'd suggest being safe and using a number considerably bigger than 50.
I'll use r=1000000 in the steps below
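As a quick sanity check (my own sketch, not something OrthoFinder prints; the species count is just an example) you can estimate r for your own run and compare it to the current soft limit:
n=214                       # example species count; replace with your own
r=$(( n * n + 50 ))
echo "OrthoFinder needs ~$r open files; current soft limit: $(ulimit -Sn)"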
1. Check the hard and soft limits on the number of open files for your system:
ulimit -Hn
ulimit -Sn
2. If the hard limit h > r already, then you just need to increase the soft limit:
ulimit -n 1000000
3. Alternatively, if h < r then you need to edit the file '/etc/security/limits.conf', which requires root privileges. To increase the limit to 1000000 for a user called 'emms' add the lines:
emms hard nofile 1000000
emms soft nofile 1000000
(edit these lines to match your username)
4. Check the limit has now been updated (if you changed the hard limit you'll need to open a new session):
ulimit -Sn
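If you want the change to apply to every user rather than a single account, limits.conf also accepts a '*' wildcard in the domain field (a general illustration of the file's <domain> <type> <item> <value> format, not something OrthoFinder requires; note that '*' usually does not cover root):
# /etc/security/limits.conf entries: <domain> <type> <item> <value>
*    hard    nofile    1000000
*    soft    nofile    1000000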
There's also lots of information online about ulimit & adjusting the number of open files limit: https://www.google.co.uk/search?q=linux+ulimit+nofile
Update:
I've recently found cases where the above changes on Linux don't initially work and calling ulimit -n continues to show the old limit rather than the new, higher one. This looks like an operating-system issue that will hopefully be resolved at some point. A workaround for me was to use the Linux su command to 'switch' user to myself, even though the current user on the terminal was already me. For some reason this updates the nofile limit when things like restarting the computer didn't.
My username is 'emms' so I ran
su emms
and provided my password at the prompt. Then when I called
ulimit -n
I saw that the limit had been successfully updated.
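For anyone trying the same workaround, the before/after check looks roughly like this (a sketch; 'emms' is the username from the example above):
ulimit -Sn    # still shows the old soft limit
su emms       # 'switch' to yourself; enter your password at the prompt
ulimit -Sn    # now shows the new limit from /etc/security/limits.conf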
@davidemms Hi David, I was having the open-file limit error on our cluster, but my ulimit commands give back a soft and hard limit of 131072. ERROR: The system limits on the number of files a process can open is too low. For 214 species OrthoFinder needs to be able to open at least r=45896 files. Please increase the limit and restart OrthoFinder
The administrator answered that they never changed the open limit. What could cause this problem?
Thanks,
FIXED. I fixed the problem with ulimit. The problem came from the SGE qsub command/module, which applies different open-file limits from the ones assigned to my account. I managed to run it by launching the command directly instead.
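A quick way to see what the scheduler is actually doing (a sketch, assuming an SGE-style qsub; the file name is arbitrary) is to submit a job that just prints its own limits and compare them with your interactive shell:
cat > check_ulimit.sh <<'EOF'
#!/bin/bash
echo "soft limit inside job: $(ulimit -Sn)"
echo "hard limit inside job: $(ulimit -Hn)"
EOF
qsub -cwd check_ulimit.sh    # output appears in check_ulimit.sh.o<jobid> in the current directory
ulimit -Sn; ulimit -Hn       # the interactive values, for comparison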
Hi David, I encountered the same error, but I don't have administrator rights. Is there any way to run the software when the maximum number of open files is 1024?
ERROR: The system limits on the number of files a process can open is too low. For 176 species OrthoFinder needs to be able to open at least r=31076 files. Please increase the limit and restart OrthoFinder
Xinlong
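As a general point (not an OrthoFinder option, and it only helps if the hard limit on your system is higher than 1024): without administrator rights you can still raise the soft limit as far as the existing hard limit:
ulimit -Hn                  # the hard limit is the ceiling you can reach without root
ulimit -n "$(ulimit -Hn)"   # raise the soft limit up to the hard limit
ulimit -Sn                  # confirm the new soft limit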
Here's the error:
ERROR: The system limits on the number of files a process can open is too low. For 33 species OrthoFinder needs to be able to open at least r=1189 files. Please increase the limit and restart OrthoFinder
1. Check the hard and soft limits on the number of open files for your system:
$ ulimit -Hn
$ ulimit -Sn
2. If hard limit, h > r already, then you just need to increase the soft limit:
$ ulimit -n 1189
3. Alternatively, if h < r then you need to edit the file '/etc/security/limits.conf', this requires root privileges. To increase the limit to 1189 for user called 'emms' add the lines:
emms hard nofile 1189
emms soft nofile 1189
(edit these lines to match your username)
4. Check the limit has now been updated (if you changed the hard limit you'll need to open a new session and confirm it's updated):
$ ulimit -Sn
5. Once the limit is updated restart OrthoFinder with the original command
For full details see: https://github.com/davidemms/OrthoFinder/issues/384
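Steps 1-3 can be combined into a quick check (a sketch using the r=1189 value from the error above):
r=1189
h=$(ulimit -Hn)
if [ "$h" -ge "$r" ]; then
  ulimit -n "$r"            # hard limit is already big enough: just raise the soft limit
else
  echo "hard limit $h < $r: edit /etc/security/limits.conf as root (step 3)"
fi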
I'm developing a pipeline that uses orthofinder as one of its steps and will be run via qsub on our grid. I had a few questions:
1. Is r specific to my run or is it hardcoded for all users as an example?
2. Is h the hard limit or the soft limit? Is ulimit -Hn the hard limit and ulimit -Sn the soft limit?
bash-4.2$ ulimit -Hn
4096 # Is this the hard limit?
bash-4.2$ ulimit -Sn
1024 # Is this the soft limit?
3. Does ulimit -n show the soft limit? Will ulimit -n 1189 reset after starting a new ssh session?
For those running OrthoFinder on CentOS Linux release 7.9.2009 (Core) with a demand of r > 1024*1024 (= 1048576), a previous step is required before editing the file /etc/security/limits.conf as root. The number you set in /etc/security/limits.conf for nofile (number of open files) cannot be greater than the value found in /proc/sys/fs/nr_open [2]:
[thiagogenez@login001 ~]$ cat /proc/sys/fs/nr_open
1048576
[thiagogenez@login001 ~]$ ulimit -Hn
4096
[thiagogenez@login001 ~]$ ulimit -Sn
1024
To increase the number in /proc/sys/fs/nr_open to, for instance, 1186021 > 1048576, run the following command line with root privileges:
[centos@login001 ~]$ sudo sysctl -w fs.nr_open=1186021
fs.nr_open = 1186021
Then, edit the file /etc/security/limits.conf
with root privileges:
thiagogenez hard nofile 1186021
thiagogenez soft nofile 1186021
Finally, check the results:
[thiagogenez@login001 ~]$ ulimit -Sn
1186021
[thiagogenez@login001 ~]$ ulimit -Hn
1186021
PS: I haven't rebooted the machine to check if the value in /proc/sys/fs/nr_open
remains intact.
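One thing to note (my own addition, not from the comment above): sysctl -w only changes the running kernel, so to keep the value across reboots on a standard CentOS 7 setup you would put it in a sysctl configuration file and reload, for example:
echo 'fs.nr_open = 1186021' | sudo tee /etc/sysctl.d/90-nr_open.conf
sudo sysctl --system     # reload settings from all sysctl configuration files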
Sources:
[1] https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
[2] https://serverfault.com/questions/583820/cant-log-in-when-nofile-is-set-to-unlimited-in-etc-security-limits-conf
Issue If the system limits on the number of files that a user can open are too low then OrthoFinder is unable to open all the required files and fails late in the run. Instead it should fail immediately so that the user can adjust the limits and fix the problem straight away.
Error message The error produced if not enough files can be opened is of the form:
Or, potentially:
Work around If this occurs then take the steps described above. Once the limit is updated, restart OrthoFinder from the point it got to using the '-ft' option, e.g.:
python orthofinder.py -ft OrthoFinder/Results_May06_11/
(In the updated version of OrthoFinder this problem will be identified right at the start rather than at the "Reconciling gene and species trees" stage. It will print a message detailing the steps that need to be taken. If this occurs you will need to update the limits and rerun the original command rather than use the '-ft' option.)
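As a concrete illustration of that updated behaviour (the input directory name here is hypothetical, not from the issue), the sequence after raising the limit would be roughly:
ulimit -n 1189                         # raise the soft limit in the new session first
python orthofinder.py -f proteomes/    # rerun the original command ('proteomes/' is illustrative)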