Hello,
I am currently using STAR version 2.7.2a with STARsolo to analyze 10x Genomics single-cell RNA-seq data. I have a question about the parameter --outFilterMismatchNoverReadLmax and its default value of 1.0.
According to the documentation, my understanding is that setting this parameter to 1.0 permits mismatches at up to 100% of the read length. However, this interpretation seems incorrect: if 100% of bases could mismatch, a read could "align" anywhere in the genome. My results also indicate that the software does not behave as if 100% mismatches were allowed.
Could someone please clarify what actually happens when --outFilterMismatchNoverReadLmax is set to 1.0? Specifically, I would like to understand the significance of this setting and how the software processes reads under this configuration to achieve higher alignment rates.
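To make my interpretation concrete, here is a minimal sketch (my own simplification, not STAR's actual code) of what a mismatches-over-read-length filter would compute in isolation. The function name and structure are hypothetical; in real STAR, other filters such as outFilterMismatchNmax and the alignment scoring would still apply alongside this ratio test:

```python
# Hypothetical simplification of a mismatches/read-length ratio filter.
# In real STAR, additional filters (e.g. outFilterMismatchNmax) and the
# alignment score itself still constrain which alignments are reported.

def passes_mismatch_ratio_filter(n_mismatches: int,
                                 read_length: int,
                                 noverl_max: float = 1.0) -> bool:
    """Read passes if mismatches <= noverl_max * read length."""
    return n_mismatches <= noverl_max * read_length

# With the default of 1.0, this test alone can never reject a read,
# since n_mismatches cannot exceed the read length:
print(passes_mismatch_ratio_filter(5, 100, 1.0))   # True
print(passes_mismatch_ratio_filter(40, 100, 0.3))  # False
```

If this sketch is right, setting the parameter to 1.0 would effectively disable this particular filter rather than allow arbitrary alignments, which would be consistent with the behavior I observe, but I would appreciate confirmation.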
Thank you for your assistance.