Closed — sanhe374 closed this issue 9 months ago
Hi,
What is the error message prior to the job being killed? Is it being killed due to an error thrown by TEcount, or by the system (e.g. out of memory)? For files that big, you will probably need to request more memory and a longer runtime (if you're using a cluster scheduler like SLURM).
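If the scheduler is the one killing the job, bumping the resource request usually resolves it. A minimal SLURM sketch — the memory, walltime, file names, and annotation paths below are placeholders you'd adjust for your own data and cluster:

```shell
#!/bin/bash
#SBATCH --job-name=tecount_big
#SBATCH --mem=64G            # large BAMs with many multimappers can need a lot of RAM
#SBATCH --time=72:00:00      # allow well beyond the 12 h at which the job died
#SBATCH --cpus-per-task=1

# File names and annotations are examples, not your actual paths.
TEcount -b sample.bam \
        --GTF genes.gtf \
        --TE TE_annotation.gtf \
        --sortByPos \
        --project sample_TEcount
```

If the job was instead killed by the OOM killer on the node, `--mem` is the line to raise first; `sacct -j <jobid> --format=MaxRSS,Elapsed` after a failed run can tell you how close you were.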
Thanks.
I am running TEcount on some RNA-seq data. It seems to work well for most of my files, which have around 100M reads (BAM files around 20 GB). However, I have one sample with more than 700 million reads (BAM file 143 GB), and on that one TEcount was killed after a bit more than 12 hours.
I have been using --outFilterMultimapNmax 500 and --winAnchorMultimapNmax 500 as recommended by the authors of another TE quantification tool (Telescope, M Bendall).
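For context, this is roughly how those two STAR options were set during alignment — a sketch only, with the index path, FASTQ names, and thread count as placeholders:

```shell
# Alignment settings that produce the multimapper-rich BAM described above.
# --outFilterMultimapNmax / --winAnchorMultimapNmax are real STAR parameters;
# everything else here (paths, file names) is illustrative.
STAR --runThreadN 8 \
     --genomeDir star_index \
     --readFilesIn sample_R1.fastq.gz sample_R2.fastq.gz \
     --readFilesCommand zcat \
     --outFilterMultimapNmax 500 \
     --winAnchorMultimapNmax 500 \
     --outSAMtype BAM Unsorted
```

With 500 alignments allowed per read, a 700M-read sample can easily produce a BAM several times larger than the uniquely-mapped equivalent, which is presumably why this file is 143 GB.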
Is there any way that I can get TEcount to work on such a large bam file?