epiforecasts / scoringutils

Utilities for Scoring and Assessing Predictions
https://epiforecasts.io/scoringutils/

Discussion: Rethink `...` in `score()` #431

Closed nikosbosse closed 7 months ago

nikosbosse commented 1 year ago

Currently, `score()` accepts additional arguments through `...`.

An alternative would be to have users pass additional arguments as part of the metric function definitions, like we do here:

metrics_quantile <- list(
  "wis" = wis,
  "overprediction" = overprediction,
  "underprediction" = underprediction,
  "dispersion" = dispersion,
  "bias" = bias_quantile,
  "coverage_50" = \(...) {do.call(interval_coverage_quantile, c(list(...), range = 50))},
  "coverage_90" = \(...) {do.call(interval_coverage_quantile, c(list(...), range = 90))},
  "coverage_deviation" = interval_coverage_deviation_quantile,
  "ae_median" = ae_median_quantile
)

for the coverage. This is arguably cleaner, although allowing `...` in `score()` is probably more convenient for users (and they can still take the second route and pass arguments directly in the metrics list).
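To make the trade-off concrete, here is a toy sketch of the two routes. This is NOT the real scoringutils API: `toy_coverage` and `toy_score` are made-up stand-ins, used only to contrast forwarding arguments via `...` against baking settings into the metric list.

```r
# Toy sketch, NOT the real scoringutils API: `toy_coverage` and
# `toy_score` are made-up stand-ins to contrast the two routes.
toy_coverage <- function(observed, predicted, range = 50) {
  # fraction of observations within +/- `range` of the prediction
  mean(abs(observed - predicted) <= range)
}

# Route 1: forward extra arguments at scoring time via `...`
toy_score <- function(observed, predicted, metrics, ...) {
  sapply(metrics, function(m) m(observed, predicted, ...))
}
toy_score(c(1, 20, 35), c(5, 10, 41), list(cov = toy_coverage), range = 5)
# -> cov = 1/3: only the first pair is within +/- 5

# Route 2: bake the setting into the metric list up front (R >= 4.1 lambdas)
metrics <- list(
  coverage_5  = \(...) do.call(toy_coverage, c(list(...), range = 5)),
  coverage_50 = toy_coverage
)
toy_score(c(1, 20, 35), c(5, 10, 41), metrics)
# -> coverage_5 = 1/3, coverage_50 = 1
```

Route 1 is shorter at the call site, but any argument name shared by two metrics (or by `score()` itself) clashes; Route 2 keeps every setting local to one entry in the list.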

seabbs commented 1 year ago

Personally, I prefer making it really easy for users to set up metric configurations, rather than having them interact with `...` in `score()`, which feels a bit opaque and hacky. This would also let us do things like test metric configurations against example data to check they work as expected (i.e. a `check_metrics`-type function).

seabbs commented 7 months ago

Where are we on this? Are we still doing `...` passing or not?

nikosbosse commented 7 months ago

Still doing `...` at the moment. To me, changing that seems like a major task (we'd have to think carefully about what the new system looks like, come up with a workflow, etc.). As of now, I don't have a clear idea of what the new system would look like. We could try and spec something out?

My hunch is that doing this could significantly delay the CRAN release. On the other hand, I appreciate it's not ideal to have another set of potentially breaking changes coming up in the future.

seabbs commented 7 months ago

I'm worried it's going to cause a lot of issues for users, as clashing arguments are a real pain. Given how simple the combined workflow is, and that the current `...` doesn't really have much documentation, it doesn't seem like this would be that much of a lift?

We could try and spec something out?

Isn't this already specc'd out in the comment that opens this issue?

\(...) {do.call(interval_coverage_quantile, c(list(...), range = 50))}

The only additional thing we might consider is making a little helper to do this for people, i.e. `customise_metric(metric, setting)`?
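A minimal sketch of what such a helper could look like. Note this is hypothetical: `customise_metric` is only a proposal in this thread, not an existing scoringutils function, and `toy_coverage` is a made-up metric used for illustration.

```r
# Hypothetical helper sketched from the suggestion above (not part of
# scoringutils): returns a new metric function with settings pre-filled.
customise_metric <- function(metric, ...) {
  settings <- list(...)
  function(...) do.call(metric, c(list(...), settings))
}

# Usage with a made-up metric
toy_coverage <- function(observed, predicted, range = 50) {
  mean(abs(observed - predicted) <= range)
}
coverage_90 <- customise_metric(toy_coverage, range = 90)
coverage_90(c(1, 100), c(5, 10))  # diffs are 4 and 90, both within 90 -> 1
```

This is just partial application: the user never touches `do.call` or the lambda syntax, and the resulting function drops straight into a metrics list.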

seabbs commented 7 months ago

I just looked through the package docs and I think there are currently no examples of passing anything other than a custom metric list to `score()`.

seabbs commented 7 months ago

In fact, I can't even find an example of a custom metric list being passed (i.e. nothing here: https://epiforecasts.io/scoringutils/dev/reference/select_metrics.html?q=select#null), which suggests it won't be hard to rework?