averagehat opened this issue 9 years ago
There may be cases where you have a BAM that contains reads from multiple platforms, such as a mix of PacBio, Sanger, and short-read data.
In these scenarios I suspect everything would still work out, as the algorithm "should" pick the PacBio and Sanger reads first since they would be longer.
I'm not sure we really need to worry about the effect on variant calling, as we assert that the data has already been quality filtered, so all bases are roughly equal.
As stated in the TODO section, the current sampling method falls victim to picking additional alignments even when the minimum depth has already been reached at a given reference index. For example, if your average read depth is 150 and you are subselecting at a depth of 10, the first position will contain 10 reads, but position 2 will contain those 10 reads plus potentially 10 more. The third position will then contain 10 + 10 + 10, and so on.
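To make the pile-up concrete, here is a minimal Python sketch (not this project's code; the read lengths, depths, and names are illustrative) that models reads as `(start, end)` intervals and applies an independent per-position random subsample:

```python
import random
from collections import defaultdict

random.seed(0)
READ_LEN, REF_LEN, AVG_DEPTH, MIN_DEPTH = 100, 1000, 150, 10

# simulate enough fixed-length reads for roughly AVG_DEPTH coverage
n_reads = AVG_DEPTH * REF_LEN // READ_LEN
reads = [(s, s + READ_LEN)
         for s in (random.randrange(REF_LEN) for _ in range(n_reads))]

# index read ids by every reference position they cover
covering = defaultdict(list)
for rid, (s, e) in enumerate(reads):
    for pos in range(s, min(e, REF_LEN)):
        covering[pos].append(rid)

# naive subsample: at each position, independently keep MIN_DEPTH reads
selected = set()
for pos in range(REF_LEN):
    pool = covering[pos]
    selected.update(random.sample(pool, min(MIN_DEPTH, len(pool))))

# measure the depth that the kept reads actually produce
depth = [0] * REF_LEN
for rid in selected:
    s, e = reads[rid]
    for pos in range(s, min(e, REF_LEN)):
        depth[pos] += 1
print(max(depth))  # far above MIN_DEPTH, bounded by MIN_DEPTH * READ_LEN
```

Because each position samples independently from all reads overlapping it, every new position can contribute up to `min_depth` previously unpicked reads whose coverage stacks on top of earlier picks.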
These reads can stack along the entire length of the read at position 1, pushing the upper depth-bound of the random-subsample solution to `min_depth * len(read)`. An iterative solution which only picks reads when needed (and backtracks if necessary) can instead have an upper bound of `2 * min_depth` (some overlap may be necessary in a case like: *insert diagram*). We avoid a higher depth by always picking the read with the highest end index (`POS + len(sequence)`). This allows the user to more precisely define the wanted depth.

NOTES:
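For illustration, a minimal sketch of the iterative idea as I read it (hypothetical code; `greedy_subsample` and its parameters are made up for this example, and backtracking is not shown). It sweeps the reference left to right and, whenever the running depth of kept reads drops below the minimum, adds the candidate read whose end coordinate reaches furthest:

```python
import random
from collections import defaultdict

def greedy_subsample(reads, ref_len, min_depth):
    """Keep reads only where depth falls short, preferring the read
    whose end coordinate (POS + len(sequence)) reaches furthest."""
    by_start = defaultdict(list)
    for read in reads:                 # reads are (start, end) intervals
        by_start[read[0]].append(read)

    selected = []
    candidates = []   # unselected reads starting at or before pos
    active = []       # end coordinates of selected reads covering pos
    for pos in range(ref_len):
        candidates.extend(by_start[pos])
        candidates = [r for r in candidates if r[1] > pos]  # still cover pos
        active = [e for e in active if e > pos]             # drop ended reads
        # top up to min_depth, taking the longest-reaching candidate first
        candidates.sort(key=lambda r: r[1])
        while len(active) < min_depth and candidates:
            read = candidates.pop()    # highest end coordinate
            selected.append(read)
            active.append(read[1])
    return selected

random.seed(0)
reads = [(s, s + 100) for s in (random.randrange(900) for _ in range(1500))]
kept = greedy_subsample(reads, 1000, 10)

depth = [0] * 1000
for s, e in kept:
    for pos in range(s, min(e, 1000)):
        depth[pos] += 1
print(max(depth))  # stays at min_depth in this simple sweep
```

Note that this simple sweep can undershoot `min_depth` where candidates run out; presumably the backtracking step would reach back for reads overlapping already-covered regions, which is where the `2 * min_depth` worst case comes from.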