epi2me-labs / wf-single-cell


Error during minimap step #54

Closed Josephinedh closed 6 months ago

Josephinedh commented 1 year ago

Operating System

CentOS 7

Other Linux

No response

Workflow Version

23.04.2

Workflow Execution

Command line

EPI2ME Version

No response

CLI command run

${nf} run ${base_dir}/wf-single-cell \
    -w ${base_dir}/output_GBM001_full/workspace \
    -profile singularity \
    --fastq /home/projects/cu_10027/data/projects/gbm/data/data_processed/rigshospitalet/rna/ont/samples/GBM001/PAQ11402_pass_f182e1fc_ddede2b4.fastq.gz \
    --kit_name 3prime \
    --kit_version v3 \
    --expected_cells 10000 \
    --ref_genome_dir /home/projects/cu_10027/data/genomes/cellranger/refdata-gex-GRCh38-2020-A \
    --out_dir ${base_dir}/output_GBM001_full \
    --plot_umaps \
    -resume \
    -c ${cfg}

Workflow Execution - CLI Execution Profile

singularity

What happened?

The pipeline failed during the minimap step.

Relevant log output

Error executing process > 'pipeline:align:align_to_ref (1)'

Caused by:
  Process `pipeline:align:align_to_ref (1)` terminated with an error exit status (1)

Command executed:

  minimap2 -ax splice -uf --secondary=no --MD -t 4       --junc-bed ref_genes.bed -I 16G        ref_genome.fasta reads.fastq*         | samtools view -b --no-PG -t ref_chrom_sizes -         | samtools sort -@ 2 --no-PG  - > "PAQ11402_pass_f182e1fc_ddede2b4_sorted.bam"
  samtools index -@ 4 "PAQ11402_pass_f182e1fc_ddede2b4_sorted.bam"

Command exit status:
  1

Command output:
  (empty)

Command error:
  [M::worker_pipeline::65380.921*3.94] mapped 455237 sequences
  [M::worker_pipeline::65720.398*3.94] mapped 455541 sequences
  [M::worker_pipeline::66083.113*3.94] mapped 456284 sequences
  [M::worker_pipeline::66421.937*3.94] mapped 458651 sequences
  [M::worker_pipeline::66754.597*3.94] mapped 459542 sequences
  [M::worker_pipeline::67117.429*3.94] mapped 458990 sequences
  [M::worker_pipeline::67449.273*3.94] mapped 459120 sequences
  [M::worker_pipeline::67812.902*3.94] mapped 459063 sequences
  [M::worker_pipeline::68162.692*3.94] mapped 459455 sequences
  [M::worker_pipeline::68495.429*3.94] mapped 459544 sequences
  [M::worker_pipeline::68861.668*3.94] mapped 458659 sequences
  [M::worker_pipeline::69198.908*3.94] mapped 458669 sequences
  [M::worker_pipeline::69560.494*3.94] mapped 459725 sequences
  [M::worker_pipeline::69907.224*3.94] mapped 459662 sequences
  [M::worker_pipeline::70241.970*3.94] mapped 459771 sequences
  [M::worker_pipeline::70594.509*3.94] mapped 458678 sequences
  [M::worker_pipeline::70926.377*3.94] mapped 458731 sequences
  [M::worker_pipeline::71286.052*3.94] mapped 458955 sequences
  [M::worker_pipeline::71631.375*3.95] mapped 459650 sequences
  [M::worker_pipeline::71969.312*3.95] mapped 459782 sequences
  [M::worker_pipeline::72326.091*3.95] mapped 459861 sequences
  [M::worker_pipeline::72660.381*3.95] mapped 464258 sequences
  [M::worker_pipeline::73018.816*3.95] mapped 464275 sequences
  [M::worker_pipeline::73364.416*3.95] mapped 464167 sequences
  [M::worker_pipeline::73693.312*3.95] mapped 462705 sequences
  [M::worker_pipeline::74054.613*3.95] mapped 462717 sequences
  [M::worker_pipeline::74384.219*3.95] mapped 461745 sequences
  [M::worker_pipeline::74741.918*3.95] mapped 461803 sequences
  [M::worker_pipeline::75095.496*3.95] mapped 461885 sequences
  [M::worker_pipeline::75433.238*3.95] mapped 461130 sequences
  [M::worker_pipeline::75796.830*3.95] mapped 460922 sequences
  [M::worker_pipeline::76130.676*3.95] mapped 460910 sequences
  [M::worker_pipeline::76484.566*3.95] mapped 461008 sequences
  [M::worker_pipeline::76841.610*3.95] mapped 459769 sequences
  [M::worker_pipeline::77173.512*3.95] mapped 459604 sequences
  [M::worker_pipeline::77532.912*3.95] mapped 459075 sequences
  [M::worker_pipeline::77862.053*3.95] mapped 459030 sequences
  .command.run: line 31: /dev/fd/62: No such file or directory
  Error, do this: mount -t proc proc /proc
  Error, do this: mount -t proc proc /proc
  [M::worker_pipeline::78863.205*3.94] mapped 457839 sequences
  [M::worker_pipeline::78920.880*3.94] mapped 458115 sequences
  [M::worker_pipeline::79050.887*3.94] mapped 201987 sequences
  [M::main] Version: 2.24-r1122
  [M::main] CMD: minimap2 -ax splice -uf --secondary=no --MD -t 4 --junc-bed ref_genes.bed -I 16G ref_genome.fasta reads.fastq1 reads.fastq2 reads.fastq3 reads.fastq4
  [M::main] Real time: 79051.359 sec; CPU: 311319.487 sec; Peak RSS: 20.101 GB
  [bam_sort_core] merging from 5 files and 2 in-memory blocks...
  samtools: error while loading shared libraries: libhts.so.3: cannot open shared object file: No such file or directory
  .command.run: line 155: kill: (35) - No such process
  INFO:    Cleaning up image...

Application activity log entry

No response

nrhorner commented 1 year ago

Hi @Josephinedh

It's possible that this is an intermittent filesystem problem, but I'm not sure. Could you try running again and, if you get the same issue, also post the .nextflow.log, please?

Josephinedh commented 1 year ago

Hi Neil, I've tried to rerun it, also using your suggestion in issue #47, as I previously got an error similar to that one. However, I still get the error above. I've attached the .nextflow.log here as well. Thanks, Josephine nextflow.log.txt

nrhorner commented 1 year ago

Hi @Josephinedh

Is it possible you are using an incorrect container? Is there an entry for common_sha in /home/projects/cu_10027/data/projects/gbm/data/data_processed/rigshospitalet/rna/ont/wf-single-cell.config, and if so what is it?

Also try using the latest version v0.2.9

nrhorner commented 1 year ago

Actually I can see in the logs you are using the correct container: wf-single-cell:sha8e7d91013029ea8721743bd087583e5205cdc1dc

Please do try v0.2.9 though, and I will get straight back to you if this does not work

Josephinedh commented 1 year ago

Thanks for the help on this. I tried now with v0.2.9 and still get the same error, unfortunately. And no, there's no entry for common_sha in the config file.

nrhorner commented 1 year ago

Hi @Josephinedh. Could you post the log from the last run you did with v0.2.9, please?

Josephinedh commented 12 months ago

Sure it's here:

newversion_log.txt

nrhorner commented 11 months ago

Hi @Josephinedh

Apologies for the late response. I'm looking into this now

nrhorner commented 9 months ago

Hi @Josephinedh

Again sorry for the late response. This is a puzzle, as we know samtools is installed in the container and we haven't come across this issue before.

Could you try the following, please?

1. Delete the image /home/projects/cu_10027/data/projects/gbm/data/data_processed/rigshospitalet/rna/ont/cache/ontresearch-wf-single-cell-sha8e7d91013029ea8721743bd087583e5205cdc1dc.img

2. Try running the workflow again, but use the latest 1.0.1 with -r v1.0.1
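Before deleting the image, one way to confirm it is broken is to check samtools' shared-library linkage inside the container with `ldd` (e.g. `singularity exec <image> bash -c 'ldd "$(which samtools)"'`). A minimal sketch of that check, written by way of illustration and using `/bin/ls` as a stand-in binary so it runs outside the container:

```shell
# Sketch (not part of the workflow): report whether a binary's shared-library
# dependencies all resolve. The "libhts.so.3: cannot open shared object file"
# failure above would appear here as a "not found" line in the ldd output.
check_libs() {
    if ldd "$1" | grep -q "not found"; then
        echo "missing shared libraries for $1"
    else
        echo "all libraries resolved for $1"
    fi
}

# /bin/ls stands in for samtools here; inside the container you would check
# samtools itself.
check_libs /bin/ls
```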

nrhorner commented 8 months ago

@Josephinedh Did you try my suggestion?

Josephinedh commented 6 months ago

Hi @nrhorner, sorry about the very late response. I've tried with your newest version now, and for the first sample it worked. However, when I tried a second sample, the pipeline failed, but at one of the last output steps, so most output files had already been generated.

I've attached the log file here. nextflow_v1.1.0.log

cjw85 commented 6 months ago

The relevant part of your log is:

Caused by:
  Process `output (4)` terminated with an error exit status (255)

Command executed:

  echo "Writing output files"

Command exit status:
  255

Command output:
  (empty)

Command error:
  INFO:    Mounting image with FUSE.
  WARNING: underlay of /etc/localtime required more than 50 (78) bind mounts
  /usr/bin/fusermount3: entry for /tmp/rootfs-3060645730/root not found in /etc/mtab
  FATAL:   While running host post start tasks: while unmounting fuse directory: /tmp/rootfs-3060645730/root: exit status 1
  /bin/bash: line 1: /bin/bash: No such file or directory
  FATAL:   host post start process failed: host post start tasks failed

Work dir:

I'm not entirely sure what has happened here, but it appears to be an issue with singularity.

By the way, today we released version 2.0.0, though it does not contain anything that would naturally fix the issue observed above.

cjw85 commented 6 months ago

Closing this issue as the original issue has been solved. Please open a new issue if you encounter further problems.