STAR --alignEndsType EndToEnd \
--genomeDir star_2_7_gencode40_sjdb/ \
--genomeLoad NoSharedMemory \
--outBAMcompression 10 \
--outFileNamePrefix 4111_IP_1.genome. \
--winAnchorMultimapNmax 100 \
--outFilterMultimapNmax 100 \
--outFilterMultimapScoreRange 1 \
--outSAMmultNmax 1 \
--outMultimapperOrder Random \
--outFilterScoreMin 10 \
--outFilterType BySJout \
--limitOutSJcollapsed 5000000 \
--outReadsUnmapped None \
--outSAMattrRGline ID:4111_IP_1 \
--outSAMattributes All \
--outSAMmode Full \
--outSAMtype BAM Unsorted \
--outSAMunmapped Within \
--readFilesCommand zcat \
--outStd Log \
--readFilesIn 4111_IP_1.trimmed.umi.fq.gz \
--runMode alignReads \
--runThreadN 8
STAR --alignEndsType EndToEnd --genomeDir /projects/ps-yeolab4/software/eclip/0.7.1/examples/inputs/star_2_7_gencode40_sjdb/ --genomeLoad NoSharedMemory --outBAMcompression 10 --outFileNamePrefix output/bams/raw/genome/4111_IP_1.genome. --winAnchorMultimapNmax 100 --outFilterMultimapNmax 100 --outFilterMultimapScoreRange 1 --outSAMmultNmax 1 --outMultimapperOrder Random --outFilterScoreMin 10 --outFilterType BySJout --limitOutSJcollapsed 5000000 --outReadsUnmapped None --outSAMattrRGline ID:4111_IP_1 --outSAMattributes All --outSAMmode Full --outSAMtype BAM Unsorted --outSAMunmapped Within --readFilesCommand zcat --outStd Log --readFilesIn output/fastqs/umi/4111_IP_1.trimmed.umi.fq.gz --runMode alignReads --runThreadN 8
STAR version: 2.7.10a_alpha_220314 compiled: 2022-05-01T19:55:31-0700 tscc-1-32.sdsc.edu:/projects/ps-yeolab3/eboyle/software/STAR/source
Dec 20 20:41:04 ..... started STAR run
Dec 20 20:41:04 ..... loading genome
Dec 20 20:42:08 ..... started mapping
EXITING because of fatal error: buffer size for SJ output is too small
Solution: increase input parameter --limitOutSJcollapsed
Dec 20 20:49:07 ...... FATAL ERROR, exiting
Presumably the solution is to increase limitOutSJcollapsed, but I will close this issue only once I know that doubling it is sufficient for these larger datasets.
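For reference, a minimal sketch of the rerun with the buffer doubled. It is identical to the command at the top except for --limitOutSJcollapsed; the value 10000000 is an assumption based on "doubling" the current 5000000 and is not yet confirmed to be sufficient.

# Same alignment as above; only --limitOutSJcollapsed differs (doubled, assumed value)
STAR --alignEndsType EndToEnd \
--genomeDir star_2_7_gencode40_sjdb/ \
--genomeLoad NoSharedMemory \
--outBAMcompression 10 \
--outFileNamePrefix 4111_IP_1.genome. \
--winAnchorMultimapNmax 100 \
--outFilterMultimapNmax 100 \
--outFilterMultimapScoreRange 1 \
--outSAMmultNmax 1 \
--outMultimapperOrder Random \
--outFilterScoreMin 10 \
--outFilterType BySJout \
--limitOutSJcollapsed 10000000 \
--outReadsUnmapped None \
--outSAMattrRGline ID:4111_IP_1 \
--outSAMattributes All \
--outSAMmode Full \
--outSAMtype BAM Unsorted \
--outSAMunmapped Within \
--readFilesCommand zcat \
--outStd Log \
--readFilesIn 4111_IP_1.trimmed.umi.fq.gz \
--runMode alignReads \
--runThreadN 8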