Closed: liu-zhiyang closed this issue 1 year ago
Hi, thanks for reporting this. I tried to simulate your error:

I first checked whether it could be caused by `--input_format BAM`, but the pipeline does run on my instance with a test BAM file. I then checked Singularity using `--profile singularity`, and it also works as expected; I did not encounter any errors. Finally, I ran it through SLURM to see whether it could be an HPC issue, with the command:

/nfs/software/slurm/current/spool/slurmd/job42306/slurm_script run main.nf -profile singularity,test_AA --circle_identifier circle_finder,ampliconarchitect --email daniel.schreyer@glasgow.ac.uk --input runs/samplesheet_test_AA.csv --outdir results --aa_data_repo data_repo/ --input_format BAM

Again, no error was detected and the pipeline ran successfully.
Therefore, could you try to run

`samtools sort -n -@ 6 -o e06588d9bca6003c0f0ec945f109b30a.qname.sorted.bam -T e06588d9bca6003c0f0ec945f109b30a.qname.sorted e06588d9bca6003c0f0ec945f109b30a.md.filtered.sorted.bam`

in the work directory /gpfs/share/home/1710305101/testNextflow/circfindertest/work/c4/cd12e5099a0ac7eb86cac6f62d36e8?

Let me know if this works. If you get the same error, can you install a different samtools version? I am unsure where this could come from, but if the command works in the work directory, then there might be an issue with the configuration.
I found an interesting article about exit code 140 (http://research.libd.org/SPEAQeasy/help.html). Could you try to allocate more memory and CPUs to the test run? It is a shot in the dark, but it could be worth it.
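As a rough sanity check on what an exit status like 140 usually encodes (a sketch; the exact signal and its cause depend on the scheduler, and the article above links it to resource limits):

```shell
# By shell convention, an exit status above 128 means the process was
# killed by a signal, where the signal number is (status - 128).
# 140 - 128 = 12, i.e. signal 12 (SIGUSR2 on Linux), which batch
# schedulers such as SLURM can deliver when a job hits a limit.
status=140
if [ "$status" -gt 128 ]; then
    echo "killed by signal $((status - 128))"   # prints: killed by signal 12
fi
```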
Thanks a lot!
@DSchreyer Thank you very much for your reply. I have tried to manually run

`samtools sort -n -@ 6 -o e06588d9bca6003c0f0ec945f109b30a.qname.sorted.bam -T e06588d9bca6003c0f0ec945f109b30a.qname.sorted e06588d9bca6003c0f0ec945f109b30a.md.filtered.sorted.bam`

with 6 cores (matching the `-@` parameter of samtools) and ~24 GB of memory available, and the process completed successfully after more than 16 hours 30 minutes. I checked that the `SAMTOOLS_SORT` module has the `process_medium` label with a time limit of 8 hours, and considering `maxRetries = 1`, the time limit would be 16 hours for the 2nd attempt. So I think the error above may have been caused by the time limit. I will try to allocate more resources and resume the pipeline.
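A quick back-of-the-envelope check of that time escalation (assuming, as in nf-core pipelines' base configuration, that the limit scales linearly with the attempt number):

```shell
# Sketch of retry-based time escalation: with a process_medium limit of
# 8 h and maxRetries = 1, the job gets one retry, and each attempt's
# limit is base * attempt.
base_time_h=8      # process_medium time limit
max_retries=1
for attempt in $(seq 1 $((max_retries + 1))); do
    echo "attempt ${attempt}: time limit $((base_time_h * attempt)) h"
done
# prints:
# attempt 1: time limit 8 h
# attempt 2: time limit 16 h
```

Since the manual sort took over 16 h 30 min, even the second attempt's 16 h cap would be exceeded, consistent with the observed failure.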
@DSchreyer I've tried allocating more resources (12 CPUs and a 24 h time limit) and completely rerunning the pipeline, and all processes completed successfully. I believe this really was a resource limit problem. Thank you again for your advice!
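For anyone hitting the same timeout: one way to apply such an override is a small custom config passed to Nextflow with `-c`. This is a sketch, not the pipeline's shipped configuration; the `withName` selector is inferred from the failing process name in the trace, and the values are the ones reported above.

```shell
# Write a hypothetical custom.config raising the limits for the
# process that timed out (12 CPUs, 24 h, as reported in this thread).
cat > custom.config <<'EOF'
process {
    withName: '.*:SAMTOOLS_SORT_QNAME_CF' {
        cpus = 12
        time = '24.h'
    }
}
EOF
# Then resume the run with the extra config, e.g.:
#   nextflow run nf-core/circdna -profile singularity,slurm ... -c custom.config -resume
```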
That is wonderful. Glad to hear that it was resolved easily.
Description of the bug
Hi, I ran the circdna pipeline on BAM files using the identifier `circle_finder`, and it failed with exit status 140 while executing process `NFCORE_CIRCDNA:CIRCDNA:SAMTOOLS_SORT_QNAME_CF`. The error message is in the attached log; the Command error section does not look like an actual error to me. How can I solve this problem? Hope for your reply. Thanks!
Command used and terminal output
Relevant files
nextflow.log
System information
nextflow version: 22.10.0.5826
hardware: HPC
executor: slurm
container: singularity 3.5.2
OS: Red Hat 4.8.5
version of nf-core/circdna: 1.0.1