dongxingning / SHA-GCL-for-SGG

Code for paper "Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation"
MIT License

Unable to reproduce the results reported in the paper (Transformer and MOTIFS) #6

Closed: Nora-Zhang98 closed this issue 2 years ago

Nora-Zhang98 commented 2 years ago

Hi author, thanks for your excellent work. I am trying to reproduce your results, but I find that the mR@K metric cannot reach the numbers in your paper. I tested SHA-GCL and MOTIFS-GCL on PredCls, and both came out lower than the metrics you report. For SHA-GCL, I trained for 60000 iters with base lr 0.001 and batch size 16. The result shown below is for SHA-GCL; the extra hyperparameters are also shown below. Could you please share your hyperparameters so I can reimplement your work? Thanks!

[screenshots: reproduced SHA-GCL results and hyperparameter settings]
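For reference, here is a minimal sketch of how these three hyperparameters map onto the usual yacs config keys in Scene-Graph-Benchmark.pytorch-style code, which this repo builds on (the key names are an assumption based on that upstream codebase, not confirmed in this thread):

```python
# Hedged sketch: standard maskrcnn_benchmark/yacs solver keys assumed
# to correspond to the hyperparameters discussed above.
from maskrcnn_benchmark.config import cfg

cfg.merge_from_list([
    "SOLVER.BASE_LR", 0.001,     # base learning rate
    "SOLVER.MAX_ITER", 60000,    # total training iterations
    "SOLVER.IMS_PER_BATCH", 16,  # global batch size, summed over all GPUs
])
```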
dongxingning commented 2 years ago

Well, that is quite strange. Your reproduced results are much lower than ours (both Recall and Mean Recall). Did you change any part of our code? I suspect the main cause does not lie in the hyperparameters.

Nora-Zhang98 commented 2 years ago

Thanks for your reply. In fact, I didn't change anything in your code or hyperparameters. I experimented on PredCls, so I made one small change here (shown below). Could this be the cause?

[screenshot: the modified code for PredCls]
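For context, in the upstream Scene-Graph-Benchmark.pytorch codebase the PredCls protocol is normally selected with two config flags rather than a code edit; a hedged sketch follows (these flag names come from that upstream repo and are assumed to carry over here):

```python
# Hedged sketch: PredCls = ground-truth boxes + ground-truth object labels.
from maskrcnn_benchmark.config import cfg

cfg.merge_from_list([
    "MODEL.ROI_RELATION_HEAD.USE_GT_BOX", True,
    "MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL", True,
])
```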
dongxingning commented 2 years ago

Maybe you can run the "MOTIFS" configuration without any GCL decoder (the MOTIFS baseline in PredCls)? If your result on MOTIFS-PredCls is normal, you can zip your code and send it to me to check, thanks a lot.
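A hedged sketch of what "MOTIFS without any GCL decoder" typically means in this codebase family: swapping the relation predictor back to the plain Motifs one (the name "MotifPredictor" is the upstream Scene-Graph-Benchmark.pytorch registry name and is an assumption here):

```python
# Hedged sketch: select the plain MOTIFS predictor, no GCL decoder.
from maskrcnn_benchmark.config import cfg

cfg.merge_from_list(["MODEL.ROI_RELATION_HEAD.PREDICTOR", "MotifPredictor"])
```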

Nora-Zhang98 commented 2 years ago

I will follow your suggestion and test again. Thank you!

Nora-Zhang98 commented 2 years ago

Dear author, I followed your suggestion and ran MOTIFS without GCL on PredCls. The results are normal (shown below). How should I send you the code? Thanks!

[screenshot: MOTIFS baseline PredCls results]
dongxingning commented 2 years ago

You can email me at 1463365882@qq.com (recommended) or dongxingning1998@gmail.com, thanks!

Nora-Zhang98 commented 2 years ago

I have already sent my code to your Gmail address. Thanks for your help.

Nora-Zhang98 commented 2 years ago

Oh, I didn't notice the "recommended"; I have sent the code again to your QQ email.

dongxingning commented 2 years ago

Well, I have checked the log files of your reproduced SHA-GCL; it seems that you set the batch size to 2 rather than your claimed 16. So although you ran 60000 steps, that is only equivalent to 15000 iters at batch size 8 (2 × 60000 = 120000 images seen, the same as 8 × 15000). The model does not converge in that many steps, so it is entirely expected that your results are much lower than ours. Besides, we downloaded the code from GitHub and re-ran the model. Below is our result on the validation set at 45000 steps; we stopped there because it shows that our results can be reproduced.

[screenshot: validation-set results at 45000 steps]

Nora-Zhang98 commented 2 years ago

Oh, you are right. I'm used to reading the code line by line in debug mode and changing the hyperparameters manually. I only set batch_size in maskrcnn_benchmark/config/default.py to 16, and I didn't notice the batch_size in SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml. I will test again, thank you for your help!
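The root cause is worth spelling out: in yacs-based repos like this one, the YAML config is merged on top of the defaults file, and command-line opts are merged on top of the YAML, so an edit to the defaults file is silently overridden by any key the YAML also sets. A minimal sketch of that precedence, assuming the upstream maskrcnn_benchmark pattern (the configs/ location of the YAML named above is an assumption):

```python
# Hedged sketch of yacs config precedence in maskrcnn_benchmark-style code:
# defaults file < YAML config file < command-line opts.
from maskrcnn_benchmark.config import cfg

# 1. cfg starts from the defaults file (e.g. batch size edited there to 16).
# 2. The YAML config overrides it (here it set a smaller batch size).
cfg.merge_from_file("configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml")
# 3. Command-line opts override the YAML, so this is the reliable override:
cfg.merge_from_list(["SOLVER.IMS_PER_BATCH", 16])
print(cfg.SOLVER.IMS_PER_BATCH)  # 16, regardless of the YAML value
```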