tomsing1 opened this issue 8 years ago
I have thought about that a bit. The annoying part is how to practically do this.
Say we allow a Hamming distance of 1 as the error tolerance. One way of implementing this would be, upon observing a (UMI, tag) pair, to check the already observed pairs for one whose UMI differs by only 1. But instead of an exact lookup in a hash table, as when counting uniques, you would have to go through all of the observed data every time, which would need a more thought-out data structure than is currently there. This approach also has a deeper problem: imagine you have UMIs A, B, and C with d(A, B) = 1, d(B, C) = 1, but d(A, C) = 2. If we observe A first, C will be counted as a new UMI but B will not. If B is observed first, neither A nor C will be counted as new UMIs, so the result depends on the order of observation.
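To make the order dependence concrete, here is a minimal sketch of that greedy strategy (toy code, not what this package does):

```python
# A minimal sketch (not this package's code) of the greedy "merge with any
# previously seen UMI within Hamming distance 1" idea, showing how the result
# depends on the order in which UMIs are observed.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_unique_count(umis, max_dist=1):
    """Count UMIs, absorbing each one into an earlier UMI if it is close enough."""
    kept = []
    for umi in umis:
        if not any(hamming(umi, seen) <= max_dist for seen in kept):
            kept.append(umi)
    return len(kept)

A, B, C = "AAAA", "AAAT", "AATT"        # d(A, B) = 1, d(B, C) = 1, d(A, C) = 2
print(greedy_unique_count([A, B, C]))   # 2: B is absorbed into A, C is kept
print(greedy_unique_count([B, A, C]))   # 1: both A and C are absorbed into B
```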
The more correct way to do this is probably to cluster the UMIs in the collapsing stage at the end. For every transcript you would cluster its UMIs and merge the ones that fall within a "ball" in Hamming distance, then count the number of distinct UMI balls instead of the number of unique UMIs. But what happens when balls overlap? And how computationally expensive does this become?
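For illustration, one way to count "UMI balls" and resolve overlaps is to take connected components under Hamming distance <= 1, so that overlapping balls collapse into a single count. This is only a sketch of that idea (quadratic in the number of UMIs per transcript, and not this package's implementation):

```python
# Sketch of counting UMI "balls" per transcript as connected components under
# Hamming distance <= 1, using a small union-find. Overlapping balls end up in
# the same component and are counted once. Naive O(n^2) pairwise comparisons.

from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def count_umi_components(umis, max_dist=1):
    umis = list(set(umis))
    parent = {u: u for u in umis}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    for a, b in combinations(umis, 2):
        if hamming(a, b) <= max_dist:
            parent[find(a)] = find(b)       # merge the two balls

    return len({find(u) for u in umis})

# AAAA-AAAT-AATT chain into one component, GGGG stays on its own -> 2
print(count_umi_components(["AAAA", "AAAT", "AATT", "GGGG"]))
```

One consequence of this choice is that chains of single-base differences (like A, B, C above) collapse into a single count, which may over-collapse genuinely distinct UMIs.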
These are questions which made me not want to put too much effort into it.
Testing the different strategies (without the simulations the blog post relies on) would be pretty straightforward using a dataset with spike-ins.
That said, I'm getting very good performance with the "unique" method based on spike-ins, which also doesn't motivate me much to implement the more robust counting outlined in the blog post.
You can output the entire "evidence table" from the tallying procedure if you want to try some different UMI-merging approaches (use the --output_evidence_table option). If you find one that is computationally reasonable and improves the result, we should definitely make it part of the merging at the end! If it is not computationally reasonable, we could maybe still add it as an option.
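As a hypothetical example of experimenting with the evidence table, assuming it can be parsed into (tag, UMI) pairs: the file name, delimiter, and column names below are illustrative assumptions, not the actual output format.

```python
# Hypothetical downstream experiment on the evidence table written with
# --output_evidence_table: compare unique-UMI counts with merged counts per tag.
# The file name "evidence_table.tsv" and the column names "tag" and "umi" are
# assumptions for illustration, not the table's actual layout.

import csv
from collections import defaultdict
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def count_umi_components(umis, max_dist=1):
    """Connected components of UMIs under Hamming distance <= max_dist."""
    umis = list(set(umis))
    parent = {u: u for u in umis}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for a, b in combinations(umis, 2):
        if hamming(a, b) <= max_dist:
            parent[find(a)] = find(b)
    return len({find(u) for u in umis})

umis_per_tag = defaultdict(list)
with open("evidence_table.tsv") as fh:
    for row in csv.DictReader(fh, delimiter="\t"):
        umis_per_tag[row["tag"]].append(row["umi"])

for tag, umis in umis_per_tag.items():
    print(tag, len(set(umis)), count_umi_components(umis))
```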
Sorry for the long response; let me know what your thoughts are on this reasoning.
Thanks a lot for your detailed response. Yes, I agree, this is not a trivial problem. It seems that the authors of the blog post (or some of their colleagues at the same institute) have already released code implementing the different error-correction options they describe in the post: https://github.com/CGATOxford/UMI-tools
They use some extra code for logging / IO from another of their packages, but other than that the code is pretty much standalone. Perhaps there is an opportunity to combine their work with yours? Just a thought...
I was wondering if you had considered merging UMIs that might be erroneous copies, e.g. as outlined in this blog post.
If I understand the current code correctly, it considers two barcodes as separate UMIs even if they differ by only one base. Would it be useful to merge reads into the same UMI if they are "nearly" identical, e.g. based on Hamming distance?
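For concreteness, "nearly identical" could mean something like Hamming distance at most 1 (a toy illustration, not code from this package):

```python
# Toy illustration of the "nearly identical" criterion: two equal-length UMIs
# would be treated as copies of the same molecule if their Hamming distance
# is at most 1.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

print(hamming("ACGTACGT", "ACGTACGT"))  # 0 -> identical
print(hamming("ACGTACGT", "ACGTACGA"))  # 1 -> candidate for merging
print(hamming("ACGTACGT", "TCGTACGA"))  # 2 -> kept as separate UMIs
```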