saksham-s / SparseDet

Official Repository for the ICCV 2023 paper, SparseDet: Improving Sparsely Annotated Object Detection with Pseudo-positive Mining

About reproduced results #2

Open YAOSL98 opened 2 months ago

YAOSL98 commented 2 months ago

Your paper has been very helpful to me, but I am unable to reproduce the results reported in it. For example, in the splits5_50% setting my AP50 only reaches 64, whereas the paper reports 74. In the splits4_extreme setting my reproduced result is approximately 69, versus 75.6 in the paper. I have not changed any parameter values in your code. Could you help me figure out why my results are lower?

YAOSL98 commented 2 months ago

[screenshot: AP results under the splits4_extreme setting]

Here are the results from my experiments under the splits4_extreme setting.

saksham-s commented 2 months ago

Hi, I just reran both the experiments for split4_extreme as well as split5_50p with the splits and code uploaded in this repo and got 75.6712 and 73.9923 AP50 respectively which is around what's reported in the paper (there is some non-deterministic behaviour). I am not sure why you are getting a very different number. Are you sure you have not changed anything?

I am also attaching my current environment's requirements file in case there is some issue with your environment. requirements.txt

YAOSL98 commented 2 months ago

I am sure I did not modify any code; I used the configuration commands from the README. The only difference is that I generated the JSON file for inference myself (using the VOC2007 test dataset). This is the JSON file I generated. Could you please check whether it is correct, or provide the JSON file you used for testing? Thanks a lot. voc_test.json
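For anyone at the same step, here is a rough sketch of how a VOC2007-test-to-COCO conversion can look. The paths, the category order/IDs, and whether boxes marked `difficult` are kept are all assumptions here, not necessarily what either voc_test.json in this thread uses:

```python
# Sketch of a VOC-XML -> COCO-JSON conversion; paths and category IDs are assumptions.
import json, os
import xml.etree.ElementTree as ET

VOC_CLASSES = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
               "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
               "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

def voc_to_coco(ann_dir, image_ids, out_json, keep_difficult=True):
    """Convert VOC-style XML annotations into a single COCO-format JSON file."""
    images, annotations = [], []
    ann_id = 1
    for img_id, stem in enumerate(image_ids, 1):
        root = ET.parse(os.path.join(ann_dir, stem + ".xml")).getroot()
        size = root.find("size")
        images.append({"id": img_id, "file_name": stem + ".jpg",
                       "width": int(size.find("width").text),
                       "height": int(size.find("height").text)})
        for obj in root.iter("object"):
            # Dropping boxes marked "difficult" changes the total object count.
            if not keep_difficult and int(obj.findtext("difficult", "0")):
                continue
            b = obj.find("bndbox")
            x1, y1 = float(b.find("xmin").text), float(b.find("ymin").text)
            x2, y2 = float(b.find("xmax").text), float(b.find("ymax").text)
            annotations.append({"id": ann_id, "image_id": img_id,
                                "category_id": VOC_CLASSES.index(obj.findtext("name")) + 1,
                                "bbox": [x1, y1, x2 - x1, y2 - y1],
                                "area": (x2 - x1) * (y2 - y1), "iscrowd": 0})
            ann_id += 1
    categories = [{"id": i + 1, "name": c} for i, c in enumerate(VOC_CLASSES)]
    with open(out_json, "w") as f:
        json.dump({"images": images, "annotations": annotations,
                   "categories": categories}, f)
```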

saksham-s commented 2 months ago

I have added my test JSON to the Google Drive link where the splits are. Try using it and see if that works. Given that I register the JSONs as COCO instances even for VOC, switching to the JSON file I uploaded might just solve the issue.
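For reference, a minimal sketch of registering a COCO-format VOC test JSON with Detectron2; the dataset name and paths below are placeholders, not necessarily the ones this repo uses:

```python
# Sketch only: the dataset name and paths are placeholders.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "voc_2007_test_coco",              # hypothetical dataset name
    {},                                # extra metadata (left empty here)
    "datasets/voc_test.json",          # COCO-format annotations for the VOC2007 test set
    "datasets/VOC2007/JPEGImages",     # image root directory
)
```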

YAOSL98 commented 2 months ago

With the JSON file you provided, I was able to track down the performance issue. The number of GPUs I used is 2 (versus 4 in your paper). After increasing the value of SOLVER.IMS_PER_BATCH, the performance improved significantly. Again, thank you for your patient help and your excellent paper.
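For anyone reproducing with a different GPU count: in Detectron2, SOLVER.IMS_PER_BATCH is the total batch size summed over all GPUs, and the base learning rate is usually tuned for that total (linear scaling rule), so keeping the total at the paper's value matters even when fewer GPUs are used. A minimal sketch, assuming standard Detectron2 config keys; this repo's exact config file and launch script may differ:

```python
# Minimal sketch, assuming standard Detectron2 config keys.
from detectron2.config import get_cfg

cfg = get_cfg()
# cfg.merge_from_file("<this repo's config>.yaml")  # load the README config first
cfg.SOLVER.IMS_PER_BATCH = 8    # TOTAL images per iteration, summed over all GPUs
cfg.SOLVER.BASE_LR = 0.01       # usually tuned for that total batch (linear scaling rule)
print(cfg.SOLVER.IMS_PER_BATCH, cfg.SOLVER.BASE_LR)
```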

epistimi22 commented 10 minutes ago

The same issue also troubles me. I reproduced your experiment on the split4_hard dataset following the settings in the README, except for num_gpu: you used 4 GPUs while I used 2, but IMS_PER_BATCH is still 8. After 18K iterations of training, the result on the test data is 74.450 AP50 (81.50 in your paper). I also generated the JSON file for the test data, and I noticed you have uploaded your voc_test.json, so I compared the two files: there are 14976 objects in my JSON, whereas yours has 12032. According to the XML files, there should be 14976 objects in the test data. I'm not sure whether the difference in our results is due to this difference in the number of objects in the voc_test JSON file. [Uploading voc_test.json…]()
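As a quick check of that hypothesis, the annotation counts of the two JSONs can be compared directly (the file names below are placeholders); one common source of such gaps in VOC-derived JSONs is whether boxes marked difficult in the XMLs are kept or dropped:

```python
# Quick comparison of two COCO-format JSONs; the file names are placeholders.
import json
from collections import Counter

for path in ["voc_test_mine.json", "voc_test_author.json"]:
    with open(path) as f:
        data = json.load(f)
    per_class = Counter(a["category_id"] for a in data["annotations"])
    print(path, "images:", len(data["images"]),
          "annotations:", len(data["annotations"]),
          "per-class:", dict(per_class))
```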

YAOSL98 commented 7 minutes ago

Hello! I have received your email and will deal with it as soon as possible.