warnbergg opened 3 years ago
Describe the solution you'd like

In #780 the macro-averaged mean absolute error was proposed as a metric for the library. Using the same rationale as for that feature, I suggest also adding the macro-averaged mean squared error (MAMSE) to the library. That way, errors further from the ground truth are penalized more harshly.

Is this feature something that could be of interest to the greater public?

Maybe add a `squared=True` parameter? That should keep things fairly close to how it's handled elsewhere, e.g. `sklearn.metrics.mean_squared_error`.

@hayesall Yes, that is maybe a better option. I'll create a PR for that!
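For concreteness, here is a minimal sketch of what the proposed metric could look like, including the `squared` parameter suggested in the comments. The function name and exact semantics are assumptions for illustration, not the library's final API:

```python
import numpy as np

def macro_averaged_mean_squared_error(y_true, y_pred, squared=True):
    # Hypothetical sketch, not the library's implementation:
    # compute the MSE separately for each ground-truth class, then
    # average over classes so every class weighs equally regardless
    # of how many samples it has (the "macro" part).
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    per_class_mse = [
        np.mean((y_pred[y_true == c] - c) ** 2)
        for c in np.unique(y_true)
    ]
    mamse = float(np.mean(per_class_mse))
    # squared=False mirrors sklearn.metrics.mean_squared_error and
    # returns the root of the macro-averaged error instead.
    return mamse if squared else float(np.sqrt(mamse))
```

Worked example: for `y_true = [1, 1, 1, 2]` and `y_pred = [1, 1, 2, 2]`, class 1 has squared errors (0, 0, 1) giving an MSE of 1/3, class 2 has an MSE of 0, so the macro average is 1/6 ≈ 0.1667 — unlike the plain MSE of 1/4, the minority class is not drowned out by the majority class.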