flyinglynx / Bilinear-Matching-Network

Official implementation for CVPR 2022 paper "Represent, Compare, and Learn: A Similarity-Aware Framework for Class-Agnostic Counting".
MIT License

Fix the bug in BMNet about getting image class labels #6

Closed — Future-Outlier closed this issue 1 year ago

Future-Outlier commented 2 years ago

I've tried downloading the code and running it with the FSC-147 dataset. I found that the index into entry should be 0 for the code to run. BTW, I am a researcher from SINICA who contacted Min Shi recently. Thanks to your group for helping us make big progress on our research.

flyinglynx commented 2 years ago

Hi, thank you so much for helping us improve the code! However, I tested this function and it works normally. Can you offer some more information? Please check the class_file argument of the get_image_classes function in FSC147Dataset.py and make sure it is $DATAROOT/ImageClasses_FSC147.txt.

In line 19, entry should be a list of length 2: its first element is the filename and its second element is the category. Hence, the function returns a dict whose keys are the file names and whose values are the corresponding categories. If it is modified to class_dict[entry[0]] = entry[0], the function becomes meaningless.
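
For reference, here is a minimal sketch of a parser that behaves as described above. It assumes each line of ImageClasses_FSC147.txt holds a filename and a category separated by a tab; the actual code in FSC147Dataset.py may differ in the details.

```python
def get_image_classes(class_file):
    """Sketch only: map each image filename to its category string."""
    class_dict = {}
    with open(class_file) as f:
        for line in f:
            entry = line.strip().split('\t')  # assumed format: "<filename>\t<category>"
            if len(entry) != 2:               # skip malformed or empty lines
                continue
            class_dict[entry[0]] = entry[1]   # key: filename, value: category
    return class_dict
```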

If you still have any trouble, you can remove this function and the related variables in the dataset code, as they are not used in training. We included them because people who are interested may want to use the category information for further analysis. You could also add a more detailed description of the problem you encountered, so we can improve the code together!

I am Min Shi. I am glad that my responses have been helpful to you and your research.

Future-Outlier commented 2 years ago

Thank you very much for the reply. I've checked the code and found that the problem was an error in my $DATAROOT/ImageClasses_FSC147.txt file. BTW, I noticed that the evaluation metric in your evaluation code is RMSE, not MSE; the paper labels this metric as MSE, but it should be RMSE. Thank you very much for the reply.

flyinglynx commented 2 years ago

Hi, yes, the evaluation metric is actually the root mean square error (RMSE). Personally, I agree with you that we should call it RMSE instead of MSE. However, counting papers more often use "MSE" than "RMSE" to denote this root mean square error, e.g., MAN (CVPR 2022) and BL (ICCV 2019), so we retained this "tradition". We omitted the definition of the MSE metric in our paper, which may be misleading to readers who are not familiar with counting. We will consider using RMSE in future works. Thank you for the suggestion!
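
For readers new to counting, a small illustrative sketch of how the two metrics are usually reported. The function name and signature below are hypothetical, not the repository's evaluation code; the point is that the quantity counting papers label "MSE" is the rooted one.

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """Illustrative only: MAE and the metric counting papers call 'MSE' (actually RMSE)."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    mae = np.mean(np.abs(pred - gt))           # mean absolute error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))  # square root applied: reported as "MSE"
    return mae, rmse
```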