Closed: wangkua1 closed this issue 1 month ago
Hi Jackson,
Glad you like the project! I couldn't reproduce the issue, but I've uploaded example images (same scenario you mentioned) and updated `run_metric.py` with the correct paths. It should work now, so please re-clone the repo and give it a try.
Best,
Hey Storme!
Thanks so much for your quick reply! Okay, it works perfectly now ...
It's embarrassing to say what happened, but basically it was a bug I introduced.
I wanted to loop over N query images for evaluation, just for convenience. The function `calculate_score` had this line:

```python
context_images.append(Image.open(query_image))
```

which modifies `context_images` in place. This means that on the 2nd iteration, my `context_images` still contained the `query_image` from the previous iteration. So, being a genius, I did this:

```python
context_images = context_images.append(Image.open(query_image))
```

which actually made `context_images` `None`, since `append` doesn't return anything. And tragically, the processor didn't complain, and the model failed silently. So now I do this:

```python
context_images = context_images.copy()
context_images.append(Image.open(query_image))
```
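In case anyone else trips over the same thing, here is a tiny standalone sketch of the aliasing behaviour; the two functions below are toy stand-ins I wrote for illustration, not the repo's actual `calculate_score`:

```python
def calculate_score_buggy(context_images, query_image):
    # Original pattern: appends straight onto the caller's list,
    # so the shared context grows on every call.
    context_images.append(query_image)
    return len(context_images)  # dummy "score" standing in for the model call

def calculate_score_fixed(context_images, query_image):
    # Fixed pattern: copy first, so the caller's list is never mutated.
    context_images = context_images.copy()
    context_images.append(query_image)
    return len(context_images)

context = ["ctx_a", "ctx_b"]
for q in ["q1", "q2", "q3"]:
    print("buggy:", q, calculate_score_buggy(context, q), "| context length is now", len(context))

context = ["ctx_a", "ctx_b"]
for q in ["q1", "q2", "q3"]:
    print("fixed:", q, calculate_score_fixed(context, q), "| context length stays", len(context))
```

The buggy loop evaluates each query against a context that keeps growing (3, then 4, then 5 items), while the fixed loop sees the same 2-image context on every iteration.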
Anyways ... thanks a lot for your help!! I can now replicate the scores. I'll close the issue!
Dear Authors,
Thank you for your excellent work on this project! I'm super interested in using the learned metric network. However, I’ve encountered an issue where the predicted scores seem to be independent of the input queries.
The provided example script (`run_metric.py`) requires images in the `liked` and `disliked` folders, which I couldn't locate. To work around this, I cropped the individual images from `assets/scores.png`: I used the 8 Liked images from User2 as the positive images and the 8 Liked images from User1 as the negative images. I then tested the 4 query images, but they all scored around 0.5 (specifically, 0.47 for the top 2 images and 0.51 for the bottom 2).

I would greatly appreciate any guidance you could provide to help identify the source of this issue!
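For reference, this is roughly how I load the cropped images into the positive/negative sets; the folder names and the `*.png` glob are just my local workaround, not something defined by `run_metric.py`:

```python
from pathlib import Path
from PIL import Image

# Local workaround only: I cropped assets/scores.png into individual files
# and placed them under liked/ and disliked/ myself.
positive_images = [Image.open(p) for p in sorted(Path("liked").glob("*.png"))]    # 8 Liked from User2
negative_images = [Image.open(p) for p in sorted(Path("disliked").glob("*.png"))] # 8 Liked from User1
```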
Thanks, Jackson