Closed scintilla9 closed 1 month ago
I think we've seen this issue before. @adamnovak does this seem familiar?
I think the issue I'm thinking of is https://github.com/DataBiosphere/toil/issues/3573 and https://github.com/ComparativeGenomicsToolkit/cactus/issues/462
Toil has been passing an `OMP_NUM_THREADS` to each job individually since 5.5.0, so if the Toil here is newer than that we shouldn't have the same problem with all the single-machine jobs thinking they can have one thread per core on the machine.
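As a quick illustration of what that per-job cap does (a generic sketch, not Toil's actual code): OpenMP-based tools size their thread pools from `OMP_NUM_THREADS` when it is set, and fall back to one thread per core when it is not.

```shell
# Sketch: how an OpenMP-aware tool decides its thread count.
# With OMP_NUM_THREADS unset, the default is one thread per core.
unset OMP_NUM_THREADS
echo "unset: would use $(nproc) threads"

# Toil >= 5.5.0 exports a per-job value, so each job stays within its allocation.
export OMP_NUM_THREADS=1
echo "capped: would use $OMP_NUM_THREADS thread"
```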
@scintilla9 what is your `ulimit -u` value (which would be the maximum number of threads you are allowed)? And how does that compare to what `nproc` says for the number of cores in the system?
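For anyone following along, a quick way to compare the two on a Linux machine (an illustrative snippet, not part of Toil):

```shell
#!/bin/sh
# Compare the per-user process/thread limit to the number of cores.
limit=$(ulimit -u)   # max user processes; each thread counts against this
cores=$(nproc)       # online CPU cores
echo "ulimit -u = $limit, nproc = $cores"
if [ "$limit" != "unlimited" ] && [ "$limit" -le "$cores" ]; then
    echo "thread limit is suspiciously low"
fi
```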
It looks like Toil is failing to start one of its internal threads before it even gets around to making jobs that use threads. Are you running anything else on this machine that could be eating into your thread limit? Did you perhaps start a previous Toil run and somehow leave processes running?
Here's the information: `ulimit -u` = 3095605, `nproc` = 48, and `ulimit -a`:

```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3095605
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 200000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 3095605
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
I am running another cactus (an older version) on the machine locally, but I've tried stopping that job and running a new cactus (latest version) in Docker, and the error still happened. In fact, the older version does not occupy the resources: it only takes 1 thread while running, even though I set `--defaultCores 40` and `--maxCores 40`. That is why I want to switch to the latest version, but I'm not sure whether that affects the thread limit.
Do I need to increase `max user processes`? It already seems like a huge value.
Yeah, that looks big enough. Apparently there is also a system-wide limit you can check with `cat /proc/sys/kernel/threads-max`, but I don't think that's your problem.
Do you happen to be using Docker 20.10.9 (or older)? That version causes problems when newer containers try to start threads because it doesn't know about and thus forbids some of the syscalls they try to use, and the Cactus Docker images are on Ubuntu 22.04 so they would presumably be new enough to hit that bug.
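If it helps, here's one way to check for that from the host (an illustrative snippet; 20.10.10 is, as far as I know, the first release whose default seccomp profile allows the `clone3` syscall that newer glibc uses to start threads):

```shell
#!/bin/sh
# Flag Docker server versions older than 20.10.10, which reject clone3
# inside containers and can break thread creation in newer container images.
ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo unknown)
if [ "$ver" != "unknown" ]; then
    oldest=$(printf '%s\n' "$ver" 20.10.10 | sort -V | head -n1)
    if [ "$oldest" != "20.10.10" ]; then
        echo "Docker $ver predates 20.10.10; Ubuntu 22.04 images may fail to start threads"
    fi
fi
```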
Hi @adamnovak, thanks for the reply.

`cat /proc/sys/kernel/threads-max` shows 6191210.

And yes, my Docker version is 18.09, so this might be the reason. Now I've built cactus 2.8.2 from the pre-compiled binaries, and it has run without error so far.
BTW, the multiple cores lastz only works when GPU available, right?
> BTW, the multiple cores lastz only works when GPU available, right?
I feel like multiple cores and using GPUs are independent features, but @glennhickey would know for sure.
> BTW, the multiple cores lastz only works when GPU available, right?
Yes:

```
--lastzCores LASTZCORES
                      Number of cores for each lastz/segalign job, only
                      relevant when running with --gpu
```
Thanks for clarifying.
Hi,

I'm trying to use the latest version of cactus (2.8.2) in Docker. At first I hit a numpy error, which was solved by using `export OMP_NUM_THREADS=1` (a suggestion from https://github.com/bcgsc/mavis/issues/185). Then another error came up:
My command is:

```
cactus ./js/ cactus.txt cactus.hal --defaultCores 40 --maxCores 40 --defaultMemory 512G --maxMemory 700G --defaultDisk 100G --maxDisk 500G --lastzCore 40 --lastzMemory 256G
```

Any suggestions are appreciated.