Closed acesnik closed 5 years ago
Oh, I see from tracing necklace.groovy that the reason for this is here: https://github.com/Oshlack/necklace/blob/master/necklace.groovy#L65
Actually, it might be here, too: https://github.com/Oshlack/necklace/blob/master/bpipe_stages/genome_guided_assembly.groovy#L32
Hi,
Thanks for trying our pipeline!
Necklace runs HISAT2 and Trinity in parallel and gives just one thread to HISAT2 and all the rest to Trinity. This usually makes sense since in almost all cases HISAT2 will finish long before Trinity does. You could try specifying the number of threads for HISAT2 specifically with the option:
-p hisat2_options="-p <>"
See https://github.com/Oshlack/necklace/wiki/Options
In this case, you'd just need to be aware that the maximum number of threads running on your machine is likely to be:
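As a rough sketch of that arithmetic (my reading of the default split described above, with placeholder numbers, not values from this thread): if necklace is given 16 threads total and you pass `-p hisat2_options="-p 4"`, then Trinity still gets all-but-one of the pipeline threads, so the peak concurrent thread count could be roughly the HISAT2 threads plus the Trinity threads.

```shell
# Placeholder numbers; assumes the default "one thread reserved for HISAT2,
# the rest to Trinity" split described above.
TOTAL=16                            # threads given to the pipeline
HISAT2_THREADS=4                    # value passed via -p hisat2_options="-p 4"
TRINITY_THREADS=$((TOTAL - 1))      # Trinity keeps all but the reserved thread
echo $((HISAT2_THREADS + TRINITY_THREADS))   # likely peak: 19
```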
The mapping and counting steps, which happen later in the pipeline (the ones you've highlighted), should use the correct number of threads, I think. The mapping is done for each sample in parallel, so the threads per process are divided by the number of samples (roughly), and the counting should use all threads.
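A quick sketch of that per-sample division (placeholder numbers, not from this thread; this is my reading of the comment above, not necklace's exact code):

```shell
# Mapping runs one job per sample in parallel, so each job gets
# roughly total_threads / n_samples; counting then uses all threads.
TOTAL=16        # threads given to the pipeline
SAMPLES=4       # number of input samples
echo $((TOTAL / SAMPLES))   # threads per mapping job: 4
```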
Hope this helps and I might update the documentation at some stage to make it clearer.
Cheers, Nadia.
Hi there,
This is an interesting tool. Thanks for developing it and making it open source! I'm giving it a shot on some data we're interested in (PC3 prostate cancer cell line data).
I'm finding that the number of threads isn't being passed into the HISAT2 command, even though I can see from your code that it should be, so mapping is taking quite a while.
AC
Here's the command log for my test run: