Eugloh opened this issue 5 years ago (status: Open)
I see that this comment is rather old. Nevertheless, I just had the same error. It is caused by the samtools sort command being written in the syntax of an older samtools version. Either downgrade samtools to around version 0.1.18 or modify Map.py.
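Concretely, the difference is the sort syntax: samtools 0.1.x took an output prefix as the last positional argument, while samtools 1.3 and later expect an explicit output file via -o. Below is a minimal sketch of the idea, assuming Map.py builds the sort command as a shell string; the helper name build_sort_command and its arguments are illustrative, not SQuIRE's actual code.

```python
# Minimal sketch, not SQuIRE's actual code: rebuild the sort command
# with the samtools >= 1.3 syntax instead of the 0.1.x prefix syntax.
import subprocess as sp

def build_sort_command(pthreads, in_bam, out_prefix):
    # Old 0.1.x style (what produces the error with a newer samtools):
    #   samtools sort -@ <threads> <in.bam> <out_prefix>
    # Newer style expected by samtools >= 1.3:
    #   samtools sort -@ <threads> -o <out_prefix>.bam <in.bam>
    return "samtools sort -@ {threads} -o {prefix}.bam {bam}".format(
        threads=pthreads, prefix=out_prefix, bam=in_bam)

sortcommand = build_sort_command(
    4,
    "squire_map_115/B1_W1_F2_TCGAAG_L002_fpAligned.out.bam",
    "squire_map_115/B1_W1_F2_TCGAAG_L002_fp")
# Same invocation style Map.py already uses:
sp.check_call(["/bin/sh", "-c", sortcommand])
```

The -o form writes squire_map_115/B1_W1_F2_TCGAAG_L002_fp.bam directly, which matches the file the old prefix form produced.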
Hello,
I'm doing an RNA-seq analysis on 80 files taken from the CARNES 2015 paper. At the mapping step, 4 of the files fail with the following error message:
Script Arguments
fetch_folder=squire_fetch name=False extra=None read_length=115 verbosity=True pthreads=4 read1=/beegfs/data/eugloh/CARNES/bug_files/B1_W1_F2_TCGAAG_L002_fp.fastq read2=None build=dm6 func=<function main at 0x7f1c979f7a28> trim3=0 map_folder=squire_map_115
Aligning FastQ files 2019-04-24 16:04:36.165775
Apr 24 16:04:36 ..... started STAR run
Apr 24 16:04:36 ..... loading genome
Apr 24 16:06:16 ..... processing annotations GTF
Apr 24 16:06:23 ..... inserting junctions into the genome indices
Apr 24 16:07:26 ..... started 1st pass mapping
Apr 24 16:19:59 ..... finished 1st pass mapping
Apr 24 16:19:59 ..... inserting junctions into the genome indices
Apr 24 16:20:32 ..... started mapping
Apr 24 16:35:10 ..... finished successfully
[bam_sort_core] merging from 32 files...
Aborted
Traceback (most recent call last):
  File "/beegfs/home/eugloh/miniconda3/envs/squire/bin/squire", line 11, in <module>
    load_entry_point('SQuIRE', 'console_scripts', 'squire')()
  File "/beegfs/data/eugloh/SQuIRE/squire/cli.py", line 156, in main
    subargs.func(args = subargs)
  File "/beegfs/data/eugloh/SQuIRE/squire/Map.py", line 378, in main
    align_unpaired(read1,pthreads,trim3,index,outfile,gtf,gzip,prefix, read_length,extra_fapath)
  File "/beegfs/data/eugloh/SQuIRE/squire/Map.py", line 180, in align_unpaired
    sp.check_call(["/bin/sh", "-c", sortcommand])
  File "/beegfs/home/eugloh/miniconda3/envs/squire/lib/python2.7/subprocess.py", line 186, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/bin/sh', '-c', 'samtools sort -@ 4 squire_map_115/B1_W1_F2_TCGAAG_L002_fpAligned.out.bam squire_map_115/B1_W1_F2_TCGAAG_L002_fp']' returned non-zero exit status 134
I work with single-end data and reads of length 115. I don't believe the error is a RAM issue, since I run on a cluster with plenty of memory, and the 4 failing files are not noticeably larger than the other 76. I can't find the root of the problem.
Have you ever encountered this issue, and if so, what was the cause and how can it be fixed? I'm stuck here and would really appreciate your help.
Thanks in advance.
Eugenie
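A suggested check for anyone hitting the same trace (this is not part of the original report; the paths are copied verbatim from the traceback above): print the samtools version available in the squire environment and re-run the failing sort command by hand, outside SQuIRE, to see samtools' own error message.

```python
# Suggested diagnostic, not part of the original thread: report the
# samtools version and re-run the exact sort command from the traceback.
import subprocess as sp

# A bare `samtools` call prints a usage block containing a "Version:"
# line on both the 0.1.x and 1.x series (it exits non-zero by design).
proc = sp.Popen(["samtools"], stdout=sp.PIPE, stderr=sp.STDOUT)
out, _ = proc.communicate()
for line in out.decode("utf-8", "replace").splitlines():
    if "Version" in line:
        print(line.strip())

# The command copied verbatim from the CalledProcessError above; a
# samtools >= 1.3 should reject this prefix-style syntax outright, which
# would confirm (or rule out) the version mismatch as the cause.
cmd = ("samtools sort -@ 4 "
       "squire_map_115/B1_W1_F2_TCGAAG_L002_fpAligned.out.bam "
       "squire_map_115/B1_W1_F2_TCGAAG_L002_fp")
status = sp.call(["/bin/sh", "-c", cmd])
print("samtools sort exited with status %d" % status)
```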