hbai98 / SCM


Results cls_acc_top-1, cls_acc_top-5, loc_acc_top-1, loc_acc_top-5, GT_Known in the paper #1

Open rjy-fighting opened 2 years ago

rjy-fighting commented 2 years ago

Hello! First of all, your work is excellent, and congratulations on the strong results! Second, I'm very interested in your work, but I ran into a problem when running the code.

The cls_acc_top-1, cls_acc_top-5, loc_acc_top-1, loc_acc_top-5, and GT_Known results I get from the source code on CUB differ from those in the paper. I used the same settings, but compared with the paper the numbers drop to varying degrees, by about 3%~4%.

I hope to get your reply. Thank you!

hbai98 commented 2 years ago

Hi! Thanks for your comments.

My suggestions are:

First, check the Results and Models. The pre-trained models, configs, and logs are uploaded to Google Drive. You can download them and compare them with yours to see if anything differs (see the sketch below).

Second, it may just be normal variance; the run-to-run differences are relatively small.
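
For the first point, a quick way to spot config differences is a small diff script. This is only a sketch; the downloaded-config path is an example, so adjust it to wherever you saved the Google Drive files:

    # Minimal sketch: diff two YAML configs key by key.
    # Assumes PyYAML is installed; the "downloaded/" path is an example, not the repo's layout.
    import yaml

    def flatten(d, prefix=""):
        """Flatten a nested dict into {"A.B.C": value} form."""
        out = {}
        for k, v in d.items():
            key = f"{prefix}.{k}" if prefix else str(k)
            if isinstance(v, dict):
                out.update(flatten(v, key))
            else:
                out[key] = v
        return out

    with open("downloaded/deit_scm_small_patch16_224.yaml") as f:
        ref = flatten(yaml.safe_load(f))
    with open("configs/CUB/deit_scm_small_patch16_224.yaml") as f:
        mine = flatten(yaml.safe_load(f))

    for key in sorted(set(ref) | set(mine)):
        if ref.get(key) != mine.get(key):
            print(f"{key}: reference={ref.get(key)!r} yours={mine.get(key)!r}")

Any key it prints is a setting where your run differs from the released one.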

rjy-fighting commented 2 years ago

Hello! I have checked the pre-trained models and made sure the configs are the same. But the results are still about 2% below those in the paper.

In addition, I would like to ask you a question. The binary masks generated after testing have scattered regions, and the roi_images do not activate the target region. Why does this happen, and how should I adjust?

[attached images: Laysan_Albatross_0021_737]

I can't find the reason right now. Could you please help me?

Looking forward to your reply! Thank you!

hbai98 commented 2 years ago

Hi!

I tested my pretrained model on CUB with the command:

    python tools_cam/test_cam.py --config_file ./configs/CUB/deit_scm_small_patch16_224.yaml --resume {the path of pretrained model.best}

The result is:

cls_acc_top-1 : 78.44 (reported: 78.5)
cls_acc_top-5 : 94.46 (reported: 94.5)

GT-Known_top-1 : 96.72 (reported: 96.6)

loc_acc_thr_50.00_top-1 : 76.32 (reported: 76.4)
loc_acc_thr_50.00_top-5 : 91.63 (reported: 91.6)

So the released model reproduces the reported results on my side.
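
About the scattered regions in your binary maps: the loc_acc_thr_50.00 metric binarizes the normalized localization map at a fixed threshold, so a noisy map naturally turns into scattered blobs. Roughly, that post-processing looks like the sketch below (a generic sketch, not this repo's exact code; the function name, variable names, and threshold are illustrative):

    # Generic sketch of CAM -> binary mask -> bounding box post-processing.
    import numpy as np

    def cam_to_box(cam, thr=0.5):
        """Normalize a localization map, binarize it at `thr`,
        and return the bounding box of the foreground pixels."""
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        mask = cam >= thr                      # scattered blobs appear here if the map is noisy
        ys, xs = np.where(mask)
        if len(xs) == 0:                       # nothing above the threshold
            return mask, None
        box = (xs.min(), ys.min(), xs.max(), ys.max())  # (x1, y1, x2, y2)
        return mask, box

    # Example with a random map; with a well-trained model the mask should be
    # one compact region covering the bird rather than scattered fragments.
    mask, box = cam_to_box(np.random.rand(224, 224), thr=0.5)
    print(mask.sum(), box)

If the attention maps degrade (for example, from training with a different batch size), the thresholded mask fragments in exactly this way.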

I guess the corrupted output images and the suboptimal performance may be due to the smaller batch_size, as I mentioned in this link.

Since I assume many people don't have enough GPU memory, in configs/CUB/deit_scm_small_patch16_224.yaml I set:

TRAIN:
  BATCH_SIZE: 16

instead of the batch size I originally gave in the config for the reported results.

You can recheck it to see whether the batch size is the cause.
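
If GPU memory is the limit, one generic workaround (not something this repo implements, just a common PyTorch pattern) is gradient accumulation: keep BATCH_SIZE small per step but accumulate gradients over several steps, so the optimizer sees a larger effective batch. The model, loss, and numbers below are only placeholders:

    # Generic PyTorch gradient-accumulation sketch (illustrative, not the repo's training loop).
    # Effective batch = BATCH_SIZE * ACCUM_STEPS, while memory stays at BATCH_SIZE per step.
    import torch
    import torch.nn as nn

    model = nn.Linear(384, 200)           # stand-in for the real model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    BATCH_SIZE, ACCUM_STEPS = 16, 8       # 16 * 8 = 128 effective batch (example values)

    optimizer.zero_grad()
    for step in range(ACCUM_STEPS):
        x = torch.randn(BATCH_SIZE, 384)              # dummy inputs
        y = torch.randint(0, 200, (BATCH_SIZE,))      # dummy labels
        loss = criterion(model(x), y) / ACCUM_STEPS   # scale so summed grads match one big batch
        loss.backward()                               # gradients accumulate across the small batches
    optimizer.step()                                  # single update with the accumulated gradients

Note that this only approximates a larger batch for the optimizer update; any batch-statistics-dependent layer still sees the small batch, though that matters less for a LayerNorm-based DeiT backbone.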