The prediction of the siamese model in this example is the distance between the two items in a pair, so the accuracy calculation takes "distance (prediction) < 0.5" as True, i.e. as a predicted same-class pair.
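In other words (a minimal sketch with illustrative distance values, not taken from the example): thresholding the predicted distances at 0.5 gives the "predicted same class" mask, and the original function then averages the true labels over that mask.

import numpy as np

dists = np.array([0.1, 0.9, 0.2, 0.8])  # predicted distances for four pairs (illustrative)
labs = np.array([1, 0, 1, 0])           # 1 = same-class pair, 0 = different-class pair

pred_same = dists < 0.5                  # [ True False  True False]
print(labs[pred_same].mean())            # 1.0 -- mean true label over the pairs predicted "same"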
OK, I picked a bad example, but I still believe the function has problems.
import numpy as np

# compute_accuracy as defined in the mnist_siamese_graph.py example
def compute_accuracy(predictions, labels):
    return labels[predictions.ravel() < 0.5].mean()

labs = np.array([0,1,0,1,0,1,0,1,0,1])
pred = np.array([1,1,1,1,1,1,1,1,1,0])
print(compute_accuracy(pred, labs))
# prints 1.0
import numpy as np
labs = np.array([0,1,0,1,0,1,0,1,0,1])
pred = np.array([0,0,0,0,0,0,0,0,0,1])
print(compute_accuracy(pred, labs))
# prints 0.4444
import numpy as np
labs = np.array([0,1,0,1,0,1,0,1,0,1])
pred = np.array([1,1,1,1,1,1,1,1,1,1])
print(compute_accuracy(pred, labs))
# output:
# acc.py:4: RuntimeWarning: Mean of empty slice.
#   return labels[predictions.ravel() < 0.5].mean()
# /usr/local/lib/python3.6/site-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
#   ret = ret.dtype.type(ret / rcount)
# nan
Surely this is not correct?
Agreed, the old one is actually precision; accuracy should be:
def compute_accuracy(predictions, labels):
    # a pair is predicted "same" when its distance is below 0.5; compare that with the true labels
    return np.mean(np.equal(predictions.ravel() < 0.5, labels))
# With the fixed function the example reports:
# * Accuracy on training set: 99.53%
# * Accuracy on test set: 97.03%
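As a quick sanity check, here is the fixed function re-run on the toy arrays from the earlier comment (a sketch that only re-uses the numbers already posted above):

import numpy as np

def compute_accuracy(predictions, labels):
    # fixed version: fraction of pairs whose thresholded prediction matches the label
    return np.mean(np.equal(predictions.ravel() < 0.5, labels))

labs = np.array([0,1,0,1,0,1,0,1,0,1])
print(compute_accuracy(np.array([1,1,1,1,1,1,1,1,1,0]), labs))  # 0.6
print(compute_accuracy(np.array([0,0,0,0,0,0,0,0,0,1]), labs))  # 0.4
print(compute_accuracy(np.array([1,1,1,1,1,1,1,1,1,1]), labs))  # 0.5 -- no longer nan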
What about the loss function and the similarity labels? Are they correct or not? @joelthchao @sixhobbits
@aryopg They should be correct. Any concerns?
I got a suggestion on Stack Overflow to "modify" the contrastive loss equation and the ordering of the labels, just like in this paper. But when I tried to apply it, it gave me bad results, whereas the original Keras example gives good results. Can you explain how the code from the Keras example can accommodate the equation from the paper while the other code cannot?
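If I remember the example correctly, the difference is only the label convention: the Keras example labels similar pairs with y = 1 and writes the loss as y * d^2 + (1 - y) * max(margin - d, 0)^2, while the paper labels similar pairs with y = 0 and therefore swaps the two terms. A small sketch of the two conventions (margin, distances, and labels below are illustrative, not taken from the example):

import numpy as np

def loss_similar_is_1(y, d, margin=1.0):
    # convention with y = 1 for similar pairs
    return np.mean(y * d**2 + (1 - y) * np.maximum(margin - d, 0)**2)

def loss_similar_is_0(y, d, margin=1.0):
    # convention with y = 0 for similar pairs; the two terms are swapped
    return np.mean((1 - y) * d**2 + y * np.maximum(margin - d, 0)**2)

d = np.array([0.1, 0.9])                       # one close pair, one distant pair (illustrative)
print(loss_similar_is_1(np.array([1, 0]), d))  # ~0.01
print(loss_similar_is_0(np.array([0, 1]), d))  # ~0.01 -- same pairs, same loss, swapped labels
print(loss_similar_is_0(np.array([1, 0]), d))  # ~0.81 -- equation swapped but labels not swapped

So if you change the equation to the paper's form without also flipping how the pair labels are generated, the loss penalizes exactly the wrong pairs, which would explain the bad results.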
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
The siamese network example for MNIST (https://github.com/fchollet/keras/blob/master/examples/mnist_siamese_graph.py) uses the following function to compute accuracy:
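def compute_accuracy(predictions, labels):
    return labels[predictions.ravel() < 0.5].mean()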
This seems flawed to me. For example (the exact numbers below are illustrative):
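import numpy as np

labs = np.array([1, 1, 1, 1, 1])   # every prediction disagrees with its label
pred = np.array([0, 0, 0, 0, 0])
print(compute_accuracy(pred, labs))
# prints 1.0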
Surely the order of the predictions is important? In the example above, all of the predictions are different from the labels, and yet compute_accuracy returns 100%.
Am I missing something obvious, or is this a mistake?
[x] Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
[x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
[x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with: pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
[x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).