theislab / scib-reproducibility

Additional code and analysis from the single-cell integration benchmarking project
https://theislab.github.io/scib-reproducibility/
MIT License

Feature request: Dynamic results table #20

Closed wmacnair closed 2 years ago

wmacnair commented 2 years ago

Hi all

Thanks for this great work. Seems like it's turning into a model example for benchmarking in single cell!

One thing I was hoping to find but haven't yet (unless I've missed it?) is a way to make the results more tailored to individual users. This would be extremely simple to implement, and could include, for example:

To me, it seems a bit of a shame to have this rich benchmarking data available, but then only allow one inflexible overall ranking. It would be great to allow users to explore which method is best for their particular circumstances.

Thanks again for this extensive work,
Will

lazappi commented 2 years ago

The website is just a simple static R Markdown website at the moment, so I'm not sure how easy it would be to add these features (happy to hear suggestions though).

I'm also not entirely sure if we want to do that here or not. We designed this as a way to present the results from the paper rather than as a general resource. This is now being integrated as a task in https://github.com/openproblems-bio/openproblems, and I think this kind of custom filtering is definitely something that should be considered there. @LuckyMD what do you think?

If you want to do something custom yourself, the raw metrics file is here: https://github.com/theislab/scib-reproducibility/blob/main/data/benchmarks.csv, but it takes a bit of processing to get it into the format used for the rankings.
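For anyone picking this up later, here is a minimal sketch of what such custom processing could look like with pandas. The column names used below (`scenario`, `method`, and one column per metric such as `NMI_cluster/label` and `ASW_label`) are assumptions for illustration; check the actual layout of benchmarks.csv before relying on this.

```python
import pandas as pd

# Raw version of the file linked above (assumed path on the main branch).
url = (
    "https://raw.githubusercontent.com/theislab/"
    "scib-reproducibility/main/data/benchmarks.csv"
)
df = pd.read_csv(url)

# Hypothetical: pick only the metrics you care about and weight them
# according to your own use case.
weights = {"NMI_cluster/label": 0.5, "ASW_label": 0.5}

def rank_scenario(scenario_df):
    # Min-max scale each chosen metric within a scenario so they are
    # comparable, then combine them with the user-chosen weights.
    score = 0.0
    for metric, w in weights.items():
        col = scenario_df[metric]
        scaled = (col - col.min()) / (col.max() - col.min())
        score = score + w * scaled
    return scenario_df.assign(custom_score=score).sort_values(
        "custom_score", ascending=False
    )

# One ranking per integration scenario instead of a single overall ranking.
ranked = df.groupby("scenario", group_keys=False).apply(rank_scenario)
print(ranked[["scenario", "method", "custom_score"]].head())
```

Changing the `weights` dictionary (or filtering `df` to the datasets you care about before ranking) is essentially the "tailored ranking" idea from the original request.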

LuckyMD commented 2 years ago

Hi Will,

Thanks for the compliments. Those are very good points that we hadn't thought about when putting together the initial website.

As Luke mentioned, the website is static, so redoing the ranking dynamically is probably out of scope here. We should definitely look at this for the Open Problems website though. In the first iteration we actually weren't looking at overall ranking by metric aggregation at all; that would be a natural next step for living benchmarks. Would you consider posting an issue to the Open Problems GitHub repository that Luke linked above?

wmacnair commented 2 years ago

Hi both

Regarding the location of a dynamic ranking feature, it should go wherever it fits best :) It would just be great if it could exist somewhere.

I will post on the Open Problems GitHub.

Thanks,
Will

wmacnair commented 2 years ago

Done 👍

As I was posting the issue, I saw the options you provide to make triaging issues easier ("Propose new metric", "Propose new dataset", etc.).

Mine didn't quite fit into any of those, so I wondered whether an additional option, something like "Meta: issue regarding the structure of the benchmarks", could be helpful, or whatever title best fits that kind of discussion.

LuckyMD commented 2 years ago

Yes, thanks :). We should add this too :).