epi2me-labs / wf-pore-c


Process `POREC:pairsToCooler (93)` terminated with an error exit status (137) #78

Open webgbi opened 1 month ago

webgbi commented 1 month ago


Hi, I was trying to run the Pore-C workflow on a local desktop with 16 cores and 128 GB of memory.

I kept getting an error at the `pairsToCooler` step.

Please advise how I could fix this. Thank you!

Here is the log.

This is epi2me-labs/wf-pore-c v1.2.2-g9ce4a1b.

```
Searching input for [.fastq, .fastq.gz, .fq, .fq.gz] files.
executor >  local (2060)
[51/8fec41] process > POREC:fastcat (1)                      [100%] 1 of 1 ✔
[b4/e2f93b] process > POREC:index_bam (1)                    [100%] 1 of 1 ✔
[50/e67ff5] process > POREC:prepare_genome:index_ref_fai (1) [100%] 1 of 1 ✔
[0a/677cc3] process > POREC:prepare_genome:index_ref_mmi (1) [100%] 1 of 1 ✔
[ae/200242] process > POREC:digest_align_annotate (973)      [ 68%] 977 of 1428
[-        ] process > POREC:haplotag_alignments              -
[-        ] process > POREC:merge_coordsorted_bams           -
[-        ] process > POREC:merge_namesorted_bams            -
[59/edd75e] process > POREC:create_restriction_bed (1)       [100%] 1 of 1 ✔
[fc/9a204f] process > POREC:to_pairs_file (977)              [100%] 977 of 977
[2d/b54d1a] process > POREC:pairsToCooler (93)               [  8%] 80 of 977, failed: 1
[-        ] process > POREC:merge_mcools                     -
[-        ] process > POREC:merge_pairs                      -
[-        ] process > POREC:merge_pairs_stats                -
[-        ] process > POREC:pair_stats_report                -
[-        ] process > POREC:merge_paired_end_bams            -
[4a/b373dc] process > POREC:getVersions                      [100%] 1 of 1 ✔
[fc/bc2f17] process > POREC:getParams                        [100%] 1 of 1 ✔
[eb/961b6d] process > POREC:makeReport                       [100%] 1 of 1 ✔
[-        ] process > POREC:prepare_hic                      -
[82/ce69f7] process > POREC:collectIngressResultsInDir (1)   [100%] 1 of 1 ✔
[-        ] process > POREC:get_filtered_out_bam             -
[44/8533fb] process > publish (1)                            [100%] 1 of 1 ✔
ERROR ~ Error executing process > 'POREC:pairsToCooler (93)'
```

Caused by: Process POREC:pairsToCooler (93) terminated with an error exit status (137)

Command executed:

```
cooler cload pairs -c1 2 -p1 3 -c2 4 -p2 5 fasta.fai:1000 pore-c_dudleya_duplex.pairs.gz pore-c_dudleya_duplex.pairs.cool
```

Command exit status: 137
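Exit status 137 generally means the process was killed with SIGKILL (137 = 128 + 9), which is what the Linux OOM killer sends when a task exceeds its memory limit. A minimal shell sketch of where that number comes from:

```shell
# A status above 128 means "killed by signal": status = 128 + signal number.
# SIGKILL is signal 9, so an OOM-killed task exits with 128 + 9 = 137.
sleep 60 &
pid=$!
kill -9 "$pid"            # stand-in for the kernel OOM killer
rc=0
wait "$pid" || rc=$?
echo "exit status: $rc"   # exit status: 137
```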

sarahjeeeze commented 1 month ago

Hi, thanks for reporting this. It looks like the process runs out of memory, and its default memory allocation is fairly low. Out of interest, how large is your input file? You could try adding the following to your config and resuming the workflow:


```groovy
process {
    withName: 'pairsToCooler' {
        memory = 16.GB
    }
}
```
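For reference, a sketch of how the override can be applied (the filename `pairs_mem.config` is a placeholder; adjust the run options to match your original command):

```shell
# Save the process override to a local config file (any name works).
cat > pairs_mem.config <<'EOF'
process {
    withName: 'pairsToCooler' {
        memory = 16.GB
    }
}
EOF

# Re-launch with the extra config via -c, and -resume so completed
# tasks are taken from the cache instead of being re-run, e.g.:
# nextflow run epi2me-labs/wf-pore-c -c pairs_mem.config -resume <original options>
```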
CarolinaA09 commented 3 weeks ago

Hello,

Thank you for your help; I added those lines to the config. However, I am still getting an error. I have attached the trace: trace.txt.

And here is the error:

Plus 2 more processes waiting for tasks…

ERROR ~ Error executing process > 'POREC:pairsToCooler (1046)'

Caused by: Process POREC:pairsToCooler (1046) terminated with an error exit status (1)

Command executed:

```
cooler cload pairs -c1 2 -p1 3 -c2 4 -p2 5 fasta.fai:1000 20240605_Dicanthelium_clandestinum_PoreC.pairs.gz 20240605_Dicanthelium_clandestinum_PoreC.pairs.cool
```

Command exit status: 1

Command output: (empty)

Command error:

```
INFO: Environment variable SINGULARITYENV_NXF_TASK_WORKDIR is set, but APPTAINERENV_NXF_TASK_WORKDIR is preferred
INFO: Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
INFO:cooler.create:Writing chunk 0: tmpf_1uggzb.multi.cool::0
INFO:cooler.create:Creating cooler at "tmpf_1uggzb.multi.cool::/0"
INFO:cooler.create:Writing chroms
INFO:cooler.create:Writing bins
INFO:cooler.create:Writing pixels
INFO:cooler.create:Writing indexes
INFO:cooler.create:Writing info
INFO:cooler.create:Merging into 20240605_Dicanthelium_clandestinum_PoreC.pairs.cool
INFO:cooler.create:Creating cooler at "20240605_Dicanthelium_clandestinum_PoreC.pairs.cool::/"
INFO:cooler.create:Writing chroms
INFO:cooler.create:Writing bins
INFO:cooler.create:Writing pixels
INFO:cooler.reduce:nnzs: [0]
INFO:cooler.reduce:current: [0]
Traceback (most recent call last):
  File "/home/epi2melabs/conda/bin/cooler", line 10, in <module>
    sys.exit(cli())
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/cli/cload.py", line 584, in pairs
    create_cooler(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 1038, in create_cooler
    create_from_unordered(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 763, in create_from_unordered
    create(cool_uri, bins, chunks, columns=columns, dtypes=dtypes, mode=mode, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 641, in create
    nnz, ncontacts = write_pixels(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 211, in write_pixels
    for i, chunk in enumerate(iterable):
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/reduce.py", line 151, in __iter__
    combined = pd.concat(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/pandas/util/_decorators.py", line 331, in wrapper
    return func(*args, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 368, in concat
    op = _Concatenator(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 425, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
```

Work dir: /work/users/c/a/caroe/Bean_assembly_pipeline/Pore_C/work/a5/84799084b452c56c4c93fbaac61dd6

Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run

-- Check '.nextflow.log' file for details

WARN: Killing running tasks (8)
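This failure (exit status 1) is different from the earlier 137: the log shows `nnzs: [0]`, i.e. cooler produced zero non-empty chunks, and `ValueError: No objects to concatenate` typically follows when the input `.pairs.gz` contains no contact records after its header. A quick way to check the input (a sketch; `demo.pairs.gz` stands in for the real file):

```shell
# Build a pairs file that has header lines but no records, mimicking
# the suspected failure mode.
printf '## pairs format v1.0\n#columns: readID chr1 pos1 chr2 pos2\n' \
    | gzip > demo.pairs.gz

# Count data records (lines not starting with '#'); 0 means cooler has
# nothing to load and will fail with "No objects to concatenate".
records=$(zcat demo.pairs.gz | grep -cv '^#' || true)
echo "records: $records"   # records: 0
```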

webgbi commented 3 weeks ago

> Hi, thanks for reporting this. It looks like the process runs out of memory, and its default memory allocation is fairly low. Out of interest, how large is your input file? You could try adding the following to your config and resuming the workflow:
>
> ```groovy
> process {
>     withName: 'pairsToCooler' {
>         memory = 16.GB
>     }
> }
> ```

Hi Sarah,

Thank you so much for the reply. May I ask how to add those lines to the config? Nextflow and the Pore-C pipeline are new to me.