TNO / harlow

A Python package for adaptive sampling and surrogate modelling.
https://TNO.github.io/harlow
MIT License

check and improve evaluate function #16

Open merijnpepijndebakker opened 1 year ago

merijnpepijndebakker commented 1 year ago

It seems that the change to the evaluate function at line 48 of harlow/utils/helper_functions.py leads to errors in the pipeline. Previously, the evaluate function would return zero if no test points or metric were provided; now it throws an error instead.

Since there may be situations where one wants to run a sampler without providing test points, I think it would be preferable to refine this part. One possible solution would be to handle the different cases explicitly:

- No metric and no test points -> skip evaluation of the metric.
- Metric without test points -> evaluate the metric, and throw an error if that metric requires test points (assuming there are metrics we have not implemented that do not need test points, e.g. metrics based on the variance of the prediction for GP surrogates).
- Test points without metric -> error.
- Test points and metric -> evaluate the metric on the test points.
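The four cases above could be sketched roughly as follows. This is only an illustration, not the actual harlow implementation: the names `surrogate`, `metric`, `test_X`, `test_y`, and the `requires_test_points` attribute are all assumptions for the sake of the example.

```python
def evaluate(surrogate, metric=None, test_X=None, test_y=None):
    """Hypothetical case-based evaluate; names are illustrative only."""
    # Case 1: no metric and no test points -> skip evaluation entirely.
    if metric is None and test_X is None:
        return None
    # Case 3: test points without a metric -> error.
    if metric is None:
        raise ValueError("Test points were provided but no metric was given.")
    # Case 2: metric without test points -> only valid for metrics that do
    # not need test points (e.g. ones based on the surrogate's predictive
    # variance for GP surrogates); otherwise raise.
    if test_X is None:
        if getattr(metric, "requires_test_points", True):
            raise ValueError("This metric requires test points.")
        return metric(surrogate)
    # Case 4: metric and test points -> evaluate the metric on them.
    return metric(surrogate.predict(test_X), test_y)
```

Whether a metric needs test points could be signalled by an attribute or flag on the metric, as sketched here, so the error is raised only for metrics that genuinely cannot be computed without them.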