LachlanStuart opened this issue 3 years ago
This has been shelved, as we found it didn't provide a compelling argument for adoption in its current state: ~15% of good-quality and ~100% of bad-quality datasets got worse results, and the overall improvement was usually smaller than with Raphaël La Rocca's implementation. We decided it should be re-evaluated once METASPACE ML FDR (#797) is available, as including mass error in the MSM score is likely to give it a clearer advantage.
My notes, if/when this gets picked up again:
evaluation.py needs to be re-tooled to plot mass errors (and stddevs) on a ppm scale, and to track peaks through the pipeline by peak index instead of by m/z, so that a reliable before-and-after view can be shown.
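A minimal sketch of what that re-tooling might look like. All names here (`ppm_errors`, the example arrays) are hypothetical, not from the actual evaluation.py: the key points are that errors are expressed in ppm rather than absolute m/z, and that peaks are kept in index order so the same peak can be compared before and after recalibration even when its m/z shifts.

```python
import numpy as np


def ppm_errors(observed_mz, theoretical_mz):
    """Per-peak mass error in parts-per-million (hypothetical helper)."""
    observed = np.asarray(observed_mz, dtype=float)
    theoretical = np.asarray(theoretical_mz, dtype=float)
    return (observed - theoretical) / theoretical * 1e6


# Peaks are identified by array index, not m/z, so the i-th entry of the
# "before" and "after" arrays is guaranteed to be the same physical peak.
theoretical = np.array([100.0000, 250.0000, 500.0000])
before = np.array([100.0005, 250.0010, 500.0030])  # pre-recalibration m/z
after = np.array([100.0001, 250.0002, 500.0004])   # post-recalibration m/z

err_before = ppm_errors(before, theoretical)
err_after = ppm_errors(after, theoretical)

# Mean and stddev on a ppm scale; these are what would go on the plot axes.
print(err_before.mean(), err_before.std())
print(err_after.mean(), err_after.std())
```

Plotting both error distributions on a shared ppm axis (e.g. with matplotlib) would then give the reliable before/after view described above.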