liulab-dfci / MAESTRO

Single-cell Transcriptome and Regulome Analysis Pipeline
GNU General Public License v3.0

Chromap Error with MAESTRO 1.5.0 #143

Open LunaP-831 opened 3 years ago

LunaP-831 commented 3 years ago

Hello, thank you for developing MAESTRO!

I am fairly new to ATAC-seq analysis. I am trying to use the latest version of MAESTRO, installed in its own conda environment, to analyze 10X scATAC-seq data.

My first problem was building the chromap index. When I tried to build one with `chromap -i -r genome.fa -o GRCh38_chromap.index`, I got the following:

`Illegal instruction (core dumped)`
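For reference, an `Illegal instruction` crash from a prebuilt binary often means it was compiled for SIMD instructions the CPU doesn't support. One way to check what the node offers (Linux-only sketch; the flag names below are just the common suspects):

```bash
# List this CPU's instruction-set flags and filter for common SIMD
# extensions; a binary compiled for an extension missing here can
# crash with "Illegal instruction".
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E 'sse4_2|avx2?'
```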

I then decided to install chromap with `conda install -c bioconda chromap`. With that build I was able to create the index, so I continued with the pipeline. But when I ran the snakemake pipeline, I got the following error:

    [Thu Jul 8 16:51:41 2021]
    Error in rule scatac_chromap:
        jobid: 4
        output: Result/Mapping/scATAC_human/fragments_pre_corrected_dedup_count.tsv
        shell:
            chromap --preset atac -x /mnt/refdata/MAESTRO_references/scATAC_references/chromap/GRCh38_chromap.index -r /mnt/Lab_A/Refdata_scATAC_MAESTRO_GRCh38_1.1.0/GRCh38_genome.fa -1 Result/Tmp/scATAC_human/scATAC_human_R1.fastq -2 Result/Tmp/scATAC_human/scATAC_human_R3.fastq -o Result/Mapping/scATAC_human/fragments_pre_corrected_dedup_count.tsv -b Result/Tmp/scATAC_human/scATAC_human_R2.fastq -t 8 --barcode-whitelist /mnt/Lab_A/scATAC_cellranger_barcodes/737K-cratac-v1.txt
    (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

    Removing output files of failed job scatac_chromap since they might be corrupted:
    Result/Mapping/scATAC_human/fragments_pre_corrected_dedup_count.tsv
    Shutting down, this might take some time.
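In case it is relevant, two different chromap builds are involved here (the one bundled with the MAESTRO environment, which crashed while indexing, and the bioconda one, which worked). A quick way to check which binary the pipeline actually picks up (sketch):

```bash
# Inside the activated MAESTRO conda environment:
which chromap   # which binary is on PATH?
chromap -h      # should print usage; an "Illegal instruction" here
                # would implicate the binary itself, not snakemake
```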

Could you help me figure out how to get past this?

Thank you!

baigal628 commented 3 years ago

Hi!

I suspect this error is caused by limited cores and memory on your working node. Did you use nohup to run snakemake? If so, could you try submitting the snakemake command as a SLURM job with enough cores and memory allocated? I normally use `#SBATCH -c 32` and `#SBATCH --mem=80G`.
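Something along these lines (a minimal sketch; the job name, environment name, and working directory are placeholders for your setup):

```bash
#!/bin/bash
#SBATCH -c 32               # cores, as suggested above
#SBATCH --mem=80G           # memory, as suggested above
#SBATCH -J maestro_scatac   # placeholder job name

# Environment name and working directory are placeholders; adjust
# for your setup. `source activate` tends to work in batch scripts
# even without `conda init`.
source activate MAESTRO
cd /path/to/scATAC_workdir
snakemake --cores 32
```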

LunaP-831 commented 3 years ago

Hello Gali,

Yes, I used nohup to run snakemake. Our cluster doesn't use SLURM, but I did try allocating enough cores and memory, roughly as in the sketch below. Nothing changed.
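In sketch form (the core count and log path are illustrative):

```bash
# Run snakemake in the background under nohup, with --cores capping
# the number of parallel jobs.
nohup snakemake --cores 32 > snakemake.log 2>&1 &
```

Best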