choderalab / pinot

Probabilistic Inference for NOvel Therapeutics

What Metrics do we want to have for training/testing here? #4

Open karalets opened 4 years ago

karalets commented 4 years ago

Initially, we can use something like log-likelihood, just so we have a reasonable quantitative measure.
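For concreteness, here is a minimal sketch of a held-out log-likelihood metric; the Gaussian-predictive assumption and all names are mine, not pinot's actual API:

```python
# Sketch: average predictive log-likelihood on held-out data, assuming
# the model emits a Gaussian mean and standard deviation per molecule.
# Hypothetical names, not pinot's actual interface.
import numpy as np
from scipy.stats import norm

def predictive_log_likelihood(y_true, y_mean, y_std):
    """Mean log-density of held-out labels under the model's
    Gaussian predictive distribution (higher is better)."""
    return norm.logpdf(y_true, loc=y_mean, scale=y_std).mean()
```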

Over time, however, we may want more informative metrics for the deep net's performance on the task at hand, for instance downstream metrics relevant to a chemistry application.

While this is not pressing to do at first, I am opening this issue so we can collect ideas for:

  1. metrics that make sense to collect for training and testing
  2. plots we may want to see down the line that would be reasonable to have

Both of these can and should take into account the evaluation protocol used in https://pubs.rsc.org/en/content/articlepdf/2019/sc/c9sc00616h, as ultimately we will need to compare against it.

My first pitch is as stated above: log-likelihood.

The nice thing about this is that we can rerun the same evaluation protocols with any metric, not just LLK.
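One hypothetical way to organize that, sketched below (the metric signature is an assumption on my part): keep the evaluation protocol fixed and swap metrics in and out of a registry.

```python
# Sketch: a metric-agnostic evaluation loop. Any callable with the
# signature (y_true, y_mean, y_std) -> float can be registered;
# the evaluation protocol itself never changes.
import numpy as np
from scipy.stats import norm

METRICS = {
    "llk": lambda y, mu, s: norm.logpdf(y, loc=mu, scale=s).mean(),
    "rmse": lambda y, mu, s: float(np.sqrt(np.mean((y - mu) ** 2))),
    "mae": lambda y, mu, s: float(np.mean(np.abs(y - mu))),
}

def evaluate(y_true, y_mean, y_std, metrics=METRICS):
    return {name: fn(y_true, y_mean, y_std) for name, fn in metrics.items()}
```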

maxentile commented 4 years ago

This sounds good to me. I guess one other basic question is whether we want to have metrics for how "well-calibrated" the predictive uncertainties are, and if so, what those should look like. If this is in-scope, perhaps @karalets can provide some references / pointers?
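(For one concrete option, here is a sketch of a coverage-based calibration curve, assuming Gaussian predictive distributions; this is an illustration, not a settled choice:)

```python
# Sketch: regression calibration as interval coverage. For each nominal
# confidence level p, measure the fraction of held-out labels falling
# inside the model's central p-credible interval; a well-calibrated
# model gives observed coverage equal to the nominal level.
import numpy as np
from scipy.stats import norm

def coverage_curve(y_true, y_mean, y_std,
                   levels=np.linspace(0.05, 0.95, 19)):
    observed = []
    for p in levels:
        lo, hi = norm.interval(p, loc=y_mean, scale=y_std)
        observed.append(np.mean((y_true >= lo) & (y_true <= hi)))
    return levels, np.array(observed)
```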

karalets commented 4 years ago

> This sounds good to me. I guess one other basic question is whether we want to have metrics for how "well-calibrated" the predictive uncertainties are, and if so, what those should look like. If this is in-scope, perhaps @karalets can provide some references / pointers?

Great point. I am happy to take point on that with some references once we have results, so we can discuss how to evaluate such things.

The general idea is the following: we need both in-distribution and out-of-distribution data sets alongside training in order to evaluate such things.

So once we have experiments lined up in which molecules for both categories are chosen well, and we start plotting results, we can discuss that subtlety.
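(As a hypothetical illustration of such a split, one could hold out molecules beyond a property threshold so the test set is deliberately shifted; the molecular-weight choice below is just an example:)

```python
# Sketch: a deliberately shifted data split. Molecules at or below a
# scalar property threshold (e.g. molecular weight) are treated as
# in-distribution; the rest form the out-of-distribution set.
import numpy as np

def id_ood_split(mol_property, threshold):
    mol_property = np.asarray(mol_property)
    in_dist = mol_property <= threshold
    return in_dist, ~in_dist  # boolean masks over the data set
```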

karalets commented 4 years ago

But I would still like the chemists here to add some more informative and concrete metrics related to downstream usefulness. Examples are the quantities the Cambridge-group paper evaluates.

yuanqing-wang commented 4 years ago

Personally, I'd imagine real chemists care about things like the false-positive rate at a given cutoff, and so on. But at this stage let's just suppose their needs are no different.
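For concreteness, a sketch of a cutoff-based false-positive rate (hypothetical names, plain NumPy):

```python
# Sketch: false-positive rate when continuous predictions are
# binarized at a cutoff, as a chemist-facing decision metric.
import numpy as np

def false_positive_rate(y_true_binary, y_score, cutoff):
    y_true_binary = np.asarray(y_true_binary, dtype=bool)
    called_positive = np.asarray(y_score) >= cutoff
    fp = np.sum(called_positive & ~y_true_binary)
    tn = np.sum(~called_positive & ~y_true_binary)
    return fp / (fp + tn) if (fp + tn) else 0.0
```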

karalets commented 4 years ago

I have thought more about downstream quantities of interest as metrics, but I have no clue what people actually care about there.