Hi, I am mapping several transcriptome samples to the same genome using a batch script on Slurm. The issue below is with the first two pairs of files.
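To give an idea of the setup, this is roughly what the loop in my batch script does; the sample names, paths, genome index location, and resource numbers here are simplified placeholders rather than my exact script:

#!/bin/bash
#SBATCH --job-name=star_map
#SBATCH --cpus-per-task=12
#SBATCH --mem=64G                 # placeholder request, not my real one
#SBATCH --time=7-00:00:00

# Map each pair of gzipped fastq files with STAR, one pair after another.
for R1 in RED3/*_1.fq.gz; do
    R2=${R1%_1.fq.gz}_2.fq.gz
    prefix=$(basename "$R1" _1.fq.gz)_result.gz
    STAR --runThreadN 12 \
         --genomeDir genome_index \
         --readFilesIn "$R1" "$R2" \
         --readFilesCommand zcat \
         --outSAMtype BAM SortedByCoordinate \
         --outFileNamePrefix "$prefix"
done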
I got this error message regarding the first pair (RED3_EYE_1):

EXITING because of FATAL ERROR: number of bytes expected from the BAM bin does not agree with the actual size on disk: Expected bin size=245997338 ; size on disk=0 ; bin number=47
This is the tail of the RED3_EYE_1_L2_result.gzLog.out file:
Created thread # 8
Created thread # 9
Created thread # 10
Created thread # 11
Starting to map file # 0
mate 1: RED3/RED3_EYE_1_EKRN230048993-1A_HMTYJDSX7_L3_1.fq.gz
mate 2: RED3/RED3_EYE_1_EKRN230048993-1A_HMTYJDSX7_L3_2.fq.gz
BAM sorting: 147564 mapped reads
BAM sorting bins genomic start loci:
1 0 2813034
Also, it seems to have continued onto the second pair of files even though it encountered a fatal error? At least I think so, because it created some output files/folders for the second pair, and these are absent for the rest of the fastq pairs in this folder.
From looking at other questions/discussions, it seems that I should increase the memory request (my genome is smaller than a mammalian genome, so I thought the upper limit suggested for mammalian genomes would suffice) and/or change the BAM output to unsorted. Each fastq file has about 20-60 million reads, and the genome size is 167,964,029 bytes for this species.
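If I have read the STAR manual correctly, the two changes would look roughly like this (the genome index path and the 30 GB figure are just placeholders on my part):

# 1) Keep coordinate-sorted output but cap the sort RAM explicitly (value in bytes;
#    the Slurm --mem request would need to be at least a bit above this):
STAR --runThreadN 12 \
     --genomeDir genome_index \
     --readFilesIn RED3/RED3_EYE_1_EKRN230048993-1A_HMTYJDSX7_L3_1.fq.gz \
                   RED3/RED3_EYE_1_EKRN230048993-1A_HMTYJDSX7_L3_2.fq.gz \
     --readFilesCommand zcat \
     --outSAMtype BAM SortedByCoordinate \
     --limitBAMsortRAM 30000000000 \
     --outFileNamePrefix RED3_EYE_1_L2_result.gz

# 2) Or switch to --outSAMtype BAM Unsorted and sort afterwards with samtools
#    (with unsorted output the file is written as <prefix>Aligned.out.bam):
samtools sort -@ 8 -o RED3_EYE_1_L2_sorted.bam RED3_EYE_1_L2_result.gzAligned.out.bam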
As a side note, I would like to speed up the job if possible, since my HPC has a 7-day limit, so any tips would be appreciated!
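One thing I have been wondering about for the time limit is turning the loop into a Slurm job array so the pairs map in parallel instead of one after another; a rough sketch (array size and paths are made up) would be:

#SBATCH --array=0-11              # one array task per fastq pair; 12 is just an example count
#SBATCH --cpus-per-task=12

pairs=(RED3/*_1.fq.gz)            # each array task picks one pair by its index
R1=${pairs[$SLURM_ARRAY_TASK_ID]}
R2=${R1%_1.fq.gz}_2.fq.gz
STAR --runThreadN "$SLURM_CPUS_PER_TASK" \
     --genomeDir genome_index \
     --readFilesIn "$R1" "$R2" \
     --readFilesCommand zcat \
     --outSAMtype BAM SortedByCoordinate \
     --outFileNamePrefix "$(basename "$R1" _1.fq.gz)_result.gz"

But I am not sure whether that is the right approach, so other ideas are welcome.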
Thanks for your help!