Open TonyBagnall opened 1 month ago
Should predict_proba actually return 0/1 if it is not supported? To make sure that the user notices that the classifier does not compute probabilities, we should at least raise a warning - I would even raise an error.
I'm in favor of introducing a new tag for the tests anyway. I would name it something like capability:predict_proba.
I think it's cases such as 1-NN, no @TonyBagnall?
Describe the feature or idea you want to propose
As requested in https://github.com/aeon-toolkit/aeon/issues/1587, it would be nice to have a way of testing whether classifiers (and maybe clusterers) can return meaningful probability estimates rather than default 0/1 values.
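A minimal sketch (plain NumPy, not aeon code) of why an estimator like 1-NN falls into this category: with a single neighbour, all probability mass goes to one class, so predict_proba can only ever return degenerate 0/1 rows.

```python
import numpy as np

def one_nn_predict_proba(X_train, y_train, X_test):
    """Toy 1-NN classifier: all probability mass is assigned to the
    single nearest neighbour's class, so every row is a 0/1 vector."""
    classes = np.unique(y_train)
    proba = np.zeros((len(X_test), len(classes)))
    for i, x in enumerate(X_test):
        nearest = np.argmin(np.linalg.norm(X_train - x, axis=1))
        proba[i, np.searchsorted(classes, y_train[nearest])] = 1.0
    return proba

X_train = np.array([[0.0], [0.1], [1.0], [1.1]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.05], [1.05]])

proba = one_nn_predict_proba(X_train, y_train, X_test)
print(proba)                             # [[1. 0.] [0. 1.]]
print(np.isin(proba, [0.0, 1.0]).all())  # True - never a soft estimate
```

A generic probability test applied to such an estimator would pass trivially, which is exactly why a capability tag (or a warning/error) is being proposed.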
Describe your proposed solution
Add a tag, maybe capability:predict_probas?
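A rough sketch of how such a tag could gate a probability test. The tag name and the _tags-dict lookup are assumptions taken from this issue for illustration, not the actual aeon tag API; the dummy estimator exists only to exercise the check.

```python
import numpy as np

class DummySoftClassifier:
    """Hypothetical estimator used only to exercise the check below.
    It predicts constant class-frequency probabilities, i.e. genuinely
    soft estimates rather than 0/1 rows."""
    _tags = {"capability:predict_proba": True}  # assumed tag name

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.freq_ = np.bincount(y) / len(y)
        return self

    def predict_proba(self, X):
        return np.tile(self.freq_, (len(X), 1))


def check_predict_proba(estimator, X, y):
    # Assumption: tags live in a _tags dict on the estimator.
    tags = getattr(estimator, "_tags", {})
    proba = estimator.fit(X, y).predict_proba(X)
    # Every estimator must at least return valid distributions.
    assert np.allclose(proba.sum(axis=1), 1.0)
    if tags.get("capability:predict_proba", False):
        # An estimator claiming real probabilities should not return
        # exclusively degenerate 0/1 rows.
        assert not np.isin(proba, [0.0, 1.0]).all()

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
check_predict_proba(DummySoftClassifier(), X, y)  # passes
```

Estimators without the tag (such as 1-NN) would only be held to the weaker "rows sum to one" contract, while tagged estimators are additionally checked for non-degenerate probabilities.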
Describe alternatives you've considered, if relevant
No response
Additional context
No response