karalets opened this issue 4 years ago
This sounds good to me. I guess one other basic question is whether we want to have metrics for how "well-calibrated" the predictive uncertainties are, and if so, what those should look like. If this is in-scope, perhaps @karalets can provide some references / pointers?
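One common way to quantify "well-calibrated" for regression-style predictive uncertainties is a calibration curve: for each nominal confidence level, check how often the true value actually falls inside the model's central predictive interval. A minimal sketch, assuming a Gaussian predictive distribution with per-point mean `mu` and standard deviation `sigma` (these names are illustrative, not from any code in this thread):

```python
import numpy as np
from scipy.stats import norm

def calibration_curve(y_true, mu, sigma, levels=np.linspace(0.1, 0.9, 9)):
    """Empirical coverage of central predictive intervals vs. nominal level.

    Assumes a Gaussian predictive density N(mu, sigma^2) per data point.
    For a well-calibrated model, the returned coverage at level p is ~p.
    """
    coverages = []
    for p in levels:
        z = norm.ppf(0.5 + p / 2.0)              # interval half-width in std units
        inside = np.abs(y_true - mu) <= z * sigma
        coverages.append(inside.mean())           # fraction of points covered
    return levels, np.array(coverages)
```

Plotting empirical coverage against the nominal level (the diagonal being perfect calibration) gives the kind of reliability diagram the references would presumably discuss.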
Great point. I am happy to take point on that with some references once we have results in hand, so we can discuss how to evaluate such things concretely.
The general idea is the following: we need both in-distribution and out-of-distribution datasets in order to evaluate these models properly.
So once we have some experiments lined up, with molecules chosen well for both categories, and start plotting results, we can discuss that subtlety.
But I would still like the chemists here to add some more informative and concrete metrics related to practical usefulness. Examples are the things the Cambridge-group paper evaluates.
Personally I'd imagine real chemists care about false positive rates with a cutoff and so on. But at this stage let's just suppose they're no different.
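A false positive rate at a cutoff is easy to pin down once predictions are thresholded. A minimal sketch, assuming binary labels and predicted positive-class probabilities (the function name and default cutoff are illustrative, not from this thread):

```python
import numpy as np

def false_positive_rate(y_true, p_pred, cutoff=0.5):
    """FPR at a decision cutoff: the fraction of true negatives
    that the model flags as positive once probabilities are
    thresholded at `cutoff`."""
    y_hat = p_pred >= cutoff              # hard decisions at the cutoff
    negatives = (y_true == 0)
    return (y_hat & negatives).sum() / negatives.sum()
```

Sweeping `cutoff` over a grid would recover the kind of operating-point trade-off a practicing chemist might actually care about.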
I was thinking more about downstream quantities of interest as metrics, but I have no clear sense yet of what people actually care about.
Initially, we can use something like log-likelihood just to have a reasonable quantitative baseline.
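As a concrete starting point, the predictive log-likelihood could be computed as follows. This is a sketch that assumes a Gaussian predictive distribution per point; the actual model's predictive density may of course differ:

```python
import numpy as np

def gaussian_log_likelihood(y_true, mu, sigma):
    """Mean predictive log-likelihood under N(mu, sigma^2) per point.
    Higher is better; penalizes both bad means and miscalibrated sigmas."""
    return np.mean(
        -0.5 * np.log(2 * np.pi * sigma**2)
        - 0.5 * ((y_true - mu) / sigma) ** 2
    )
```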
Over time, however, we may want more informative metrics for the deep net's performance on the task at hand, for instance downstream metrics for a chemistry application or similar.
While this is not pressing to do at first, I am opening this issue so we can collect ideas for:
Both of those can and should also take into account the evaluation chosen in https://pubs.rsc.org/en/content/articlepdf/2019/sc/c9sc00616h as ultimately we will need to compare to it.
My first pitch is as stated:
The nice thing about this is we can rerun the same evaluation protocols with any metric, not just LLK.
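To make that concrete, one way to keep the protocol metric-agnostic is to give every metric a common signature and loop over them. A minimal sketch; the function names and the `(y_true, mu, sigma)` signature are my assumptions, not anything fixed in this thread:

```python
import numpy as np

def evaluate(y_true, mu, sigma, metrics):
    """Run one set of predictions through an arbitrary dict of metric
    functions, so LLK, RMSE, calibration, etc. share one protocol."""
    return {name: fn(y_true, mu, sigma) for name, fn in metrics.items()}

# Example metrics sharing the common signature:
metrics = {
    "rmse": lambda y, mu, s: float(np.sqrt(np.mean((y - mu) ** 2))),
    "llk": lambda y, mu, s: float(np.mean(
        -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((y - mu) / s) ** 2)),
}
```

Adding a new metric (e.g. an FPR at some cutoff, or a calibration score) then only means adding one entry to the dict, with the same train/test splits reused unchanged.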