HCPLab-SYSU / SSGRL

I can only obtain 83.2mAP #16

Closed phython96 closed 4 years ago

phython96 commented 4 years ago

Despite my best efforts, I can only obtain 83.2 mAP on MS-COCO. Has anyone else gotten a higher mAP? What should I pay attention to?

ziyanyang commented 4 years ago

Hi, could you share the hyperparameters you used to get 83.2 mAP? Did you change any parameters in main_coco.sh?

phython96 commented 4 years ago

No, I changed nothing; I just ran the original code. I tried 10 times and the best mAP was 83.2%. Maybe the authors used other tricks that boost performance. So, what were your experimental results? What mAP did you get?

ziyanyang commented 4 years ago

I can only get around 80%, but I have only tried once. In their code they load a pre-trained ResNet-101, but they didn't provide it, so I used PyTorch's pre-trained ResNet-101. Did you use it as well?
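
Since the authors' ResNet-101 checkpoint isn't released, a common workaround is to export torchvision's ImageNet weights and point the training script at them. Below is a minimal sketch of that idea; the output filename is only a placeholder, not a path the repo actually expects:

```python
import torch
import torchvision.models as models

# Download torchvision's ImageNet pre-trained ResNet-101 and dump its weights.
# This stands in for the checkpoint the authors did not release; adjust the
# output path to whatever main_coco.sh / the data loader expects.
resnet101 = models.resnet101(pretrained=True)
torch.save(resnet101.state_dict(), 'resnet101_imagenet.pth')  # hypothetical filename
```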

phython96 commented 4 years ago

Yes, I used PyTorch's pre-trained ResNet-101 as well.

ziyanyang commented 4 years ago

That's strange: in their download link I can only see a data.zip file, but no ResNet model.

jasonseu commented 4 years ago

I reimplemented this model with PyTorch 1.1 using PyTorch's pre-trained ResNet-101, and got 84.3% mAP on the validation set and 83.6% on the test set. I'm not sure whether it is fair that the authors' codebase uses the test set as the validation set.

phython96 commented 4 years ago

Does the test set have annotations? Do you mean you got 84.3% mAP on the validation set, which is better than the paper's result?

jasonseu commented 4 years ago

Actually, instances_val2014 is used as the test set, while 5,000 samples are randomly split out of instances_train2014 as the validation set and the remaining samples are used for training. The 84.3% and 83.6% mAP are achieved on the validation set and the test set, respectively.
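
For anyone reproducing this split, here is a minimal sketch of holding out 5,000 images from instances_train2014 for validation (not the authors' exact script; the annotation path and seed are placeholders):

```python
import json
import random

# Hold out 5,000 images from COCO train2014 for validation and keep the rest
# for training; instances_val2014 is then used purely as the test set.
random.seed(0)

with open('annotations/instances_train2014.json') as f:  # placeholder path
    coco_train = json.load(f)

image_ids = [img['id'] for img in coco_train['images']]
val_ids = set(random.sample(image_ids, 5000))
train_ids = [i for i in image_ids if i not in val_ids]

print(len(train_ids), 'training images,', len(val_ids), 'validation images')
```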

phython96 commented 4 years ago

I see, thank you. One more question: have you run the authors' code, and what mAP did you get? If there is a difference, maybe it comes from the PyTorch version?

jasonseu commented 4 years ago

Yes, I have run the authors' code and achieved 83.5% mAP, just slightly higher than yours.

gaobb commented 4 years ago

How many epochs did you train for?

gaobb commented 4 years ago

I have run 19 epochs with the original settings (e.g., input size of 576x576). The best mAP is 83.29%, and the last epoch reaches 82.75%. The detailed log (epoch, mAP) is as follows:

(0, 0.77675844072589795)
(1, 0.80197100261401888)
(2, 0.81360170863638648)
(3, 0.81714328451643214)
(4, 0.82220279705197075)
(5, 0.82580931340919206)
(6, 0.82920109972345735)
(7, 0.83026659119711577)
(8, 0.83208821652251108)
(9, 0.83171833113887228)
(10, 0.83291023816114718)
(11, 0.83129754386284915)
(12, 0.82987461581319155)
(13, 0.83252176016256652)
(14, 0.83017887428257153)
(15, 0.82961072122217439)
(16, 0.8284227592341582)
(17, 0.82772520403823646)
(18, 0.82753023354508048)
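
For reference, the mAP figures quoted in this thread are the mean over the 80 COCO categories of the per-class average precision. A minimal sketch of that metric using scikit-learn (illustrative arrays only; not necessarily the exact evaluation code in this repo):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def multilabel_mAP(scores, labels):
    """Mean of per-class average precision.

    scores: (num_images, num_classes) array of predicted confidences.
    labels: (num_images, num_classes) binary ground-truth matrix.
    """
    aps = [average_precision_score(labels[:, c], scores[:, c])
           for c in range(labels.shape[1])]
    return float(np.mean(aps))

# Toy example with 4 images and 3 classes.
scores = np.array([[0.9, 0.1, 0.4],
                   [0.2, 0.8, 0.7],
                   [0.6, 0.3, 0.9],
                   [0.1, 0.7, 0.2]])
labels = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [0, 1, 0]])
print(multilabel_mAP(scores, labels))
```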