Right now, a zero value is allowed when predicting triplets, because `np.sign` returns -1 if the input is below zero, +1 if it is above zero, and 0 if it is exactly zero.
Triplets of the form (x, y, y), (x, x, x), and (x, u, v), with u and v orthogonal to x and of the same norm, cause trouble.
This is because the distance between x and the second point equals the distance between x and the third point, so `decision_function` is zero, which in turn makes `predict` return zero in `_TripletsClassifierMixin`.
For bilinear similarity the condition is even weaker, since u and v do not even need to have the same norm.
By definition, the semantics of a triplet require that the distance between the first and second points be strictly greater than the distance between the first and third points, so any zero prediction is invalid.
I propose forcing zero predictions to -1 instead, since the definition of a triplet is not satisfied.
In practice, there should not be many such border cases.
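A minimal sketch of the issue, using plain NumPy rather than metric-learn's actual implementation (the `predict_triplet` helpers below are hypothetical, for illustration only):

```python
import numpy as np

def predict_triplet(x, a, b):
    # Current behavior: sign of the distance margin.
    # np.sign returns 0 when the two distances are exactly equal.
    margin = np.linalg.norm(x - a) - np.linalg.norm(x - b)
    return np.sign(margin)

def predict_triplet_fixed(x, a, b):
    # Proposed fix: a zero margin violates the strict inequality
    # required by the triplet definition, so map it to -1.
    margin = np.linalg.norm(x - a) - np.linalg.norm(x - b)
    return -1 if margin <= 0 else 1

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
u = np.array([0.0, 2.0])   # u and v are orthogonal to x
v = np.array([0.0, -2.0])  # and have the same norm

print(predict_triplet(x, y, y))        # 0.0: degenerate triplet (x, y, y)
print(predict_triplet(x, u, v))        # 0.0: equidistant case (x, u, v)
print(predict_triplet_fixed(x, y, y))  # -1
```

With the fix, the prediction is always in {-1, +1}, as a binary triplet classifier should return.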
Finally, I also added a test demonstrating this problem.