In GitLab by @magr0763 on Sep 28, 2018, 09:42
I ran several iterations of FStitch train on a single sample, using both GC-corrected and uncorrected bedGraphs in the following formats:
1) bedtools standard out (bt)
2) bedtools 5' (bt5prime)
3) deepTools (dt)
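For reference, the three formats above can be generated roughly as follows. This is a sketch, assuming a single coordinate-sorted BAM (`sample.bam` is a placeholder name), and the exact flags used in the original runs may have differed:

```shell
# (1) bedtools standard-out coverage: full-read coverage in bedGraph form
bedtools genomecov -bg -ibam sample.bam > bt.bedGraph

# (2) bedtools 5' coverage: count only the 5' end of each read
bedtools genomecov -bg -5 -ibam sample.bam > bt5prime.bedGraph

# (3) deepTools coverage: bamCoverage with bedgraph output
# (deepTools applies binning/normalization defaults that differ from bedtools)
bamCoverage -b sample.bam --outFileFormat bedgraph -o dt.bedGraph
```

The differing conventions (full-read vs. 5'-end counting, and deepTools' default bin size) mean the three bedGraphs encode noticeably different coverage signals for the same sample.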
I got markedly different final LLR values for each, ranging from roughly 10^3 to 10^5, as shown in these .hmminfo files:
bt5prime.hmminfo
bt.hmminfo
dt.hmminfo
These values were not significantly different between the GC-corrected and uncorrected runs (they were on the same order of magnitude). However, when I ran segment with each of these training files, I did not get drastically different results despite the large spread in LLRs. That makes me suspicious of the LLR behavior, though this would probably be better addressed by Robin, I imagine.