Closed sfxeazilove closed 10 months ago
Hi,
Is there a simple evaluation script that compares two images directly, without having to put them in a folder the way eval_folder.py requires? This is for inference purposes.
Hi,
You can use the below. Check and let me know:
@HamadYA Thank you for your feedback.
I tried to evaluate SIMILAR IMAGES the way you directed in the zipped file and got this result:
However, when I did the same thing for NON-SIMILAR IMAGES, I got the dist as:
0.9999998 1.0000001
0.34829
It is quite hard to obtain a threshold, since similar and non-similar comparisons fall in nearly the same range. Is there a defined threshold for the model?
I understand the process involved in eval_folder.py, which does a normalised summation of the embeddings in a class and compares it against the other embeddings:
register_base_dist = self.dist_func(self.embs, register_base_emb)
So the first class is bound to have a high score when compared against the other classes, because its embeddings have been summed up and normalised. This is with respect to the do_evaluation method in eval_folder.py.
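That normalised-summation step can be sketched with NumPy. This is only an illustration: the embedding dimension, the noise model, and every variable name except `register_base_emb`/`register_base_dist` are assumptions, not the repo's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class of 3 images whose 512-D embeddings cluster
# around one identity vector (assumed shapes, for illustration only).
identity = rng.normal(size=512)
class_embs = identity + 0.1 * rng.normal(size=(3, 512))

# Sum the class embeddings, then L2-normalise -> the "register base" embedding.
register_base_emb = class_embs.sum(axis=0)
register_base_emb /= np.linalg.norm(register_base_emb)

# L2-normalise each per-image embedding; cosine similarity is then a
# plain dot product, mirroring dist_func(self.embs, register_base_emb).
embs = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
register_base_dist = embs @ register_base_emb

print(register_base_dist)  # in-class images score close to 1 against their own register
```

Because the register is built from the class's own embeddings, those same images score near 1 against it, which is exactly the effect described above.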
For 3 images each in 5 classes, I get this result using eval_folder.py:
For comparisons between classes there is a defined difference, so you can set a threshold. But how do you replicate this kind of good result when comparing just two images?
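For the plain two-image case, a minimal sketch is just cosine similarity between the two embeddings. The function name, the commented-out model calls, and the example threshold below are all hypothetical; the actual model loading and preprocessing should follow the repo's own eval scripts.

```python
import numpy as np

def compare_two(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity of two embeddings; closer to 1 suggests the same identity."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(a @ b)

# Assumed usage (not runnable as-is): `model` is the loaded embedding model
# and each image is preprocessed the same way the repo's eval scripts do.
# emb_a = model(img_a[None])[0].numpy()
# emb_b = model(img_b[None])[0].numpy()
# same_person = compare_two(emb_a, emb_b) > 0.3  # threshold is dataset-dependent
```

Note that any threshold chosen this way has to be tuned on a validation set of known same/different pairs; a single fixed value is not guaranteed to transfer across models or datasets.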
Hi,
Sorry, test this one instead; the closer the score is to 1, the more likely the two images are of the same person.
@HamadYA, thank you for your response. However, I am still getting the same range of scores when I test similar images together and non-similar images together.
I have attached the two kinds of images I am testing with for your perusal; hopefully you can check them out and help remedy this. test_images.zip
Looking forward to your response. Thanks.
Hi @HamadYA , Any response or feedback on this to help?
Hi,
Sorry, I have been extremely busy with research. Please refer to this similar issue: https://github.com/leondgarse/Keras_insightface/issues/128