xiaomabufei / FGAHOI


About the results with V-COCO dataset #10

Closed KentaroTakemoto closed 1 year ago

KentaroTakemoto commented 1 year ago

Thank you very much for sharing your great work with us!

I’m trying to use your method with HICO-DET and V-COCO.

While I obtained a result on HICO-DET comparable to the one in your paper (mine is 36.9 AP), my result on V-COCO is not as good as yours (with Swin-Transformer-Large-based FGAHOI, I get around 45 AP in Scenario 2).

The details of my experiment with V-COCO are as follows.

The only modification I made to your code concerns the part that calculates auxiliary losses in DABDETR.py; I did this because the original code seems to cause a RuntimeError in my environment. I have attached a zip file that contains the actual modified file (I modified lines 188, 192, 375, and 492) and the scripts used to train the networks for each stage. codes.zip

For the base stage, I downloaded the pre-trained Swin-Transformer-Large weights from the official GitHub repository.

I would greatly appreciate it if you could respond to the following two requests.

  1. Could you tell me the detailed settings of your training for V-COCO (including the learning rate drop you mentioned in your paper)?

  2. If you come up with any possible reasons for the difference in the results, please also let me know.

Thank you.

xiaomabufei commented 1 year ago

Thank you for your interest in our work. For V-COCO, the model is first trained on the COCO dataset for object detection, and then fine-tuned for HOI detection.

xiaomabufei commented 1 year ago

In addition, the --obj flag is required in the run shell file.
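
To make the pipeline concrete, here is a minimal PyTorch sketch of the warm-start step described above, using toy modules in place of the actual detector and FGAHOI model (the class names, layer sizes, and checkpoint path are placeholders, not the repo's real code):

```python
import torch
import torch.nn as nn

# Toy stand-in for a detector pre-trained on COCO for object detection.
class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(256, 256)
        self.class_head = nn.Linear(256, 91)   # COCO object classes

# Toy stand-in for the HOI model fine-tuned on V-COCO; it shares the
# detector's parameters and adds an HOI-specific head.
class HOIModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(256, 256)
        self.class_head = nn.Linear(256, 91)
        self.verb_head = nn.Linear(256, 29)    # new head, trained during fine-tuning

# Simulate the COCO pre-training stage by saving a detector checkpoint.
torch.save({"model": Detector().state_dict()}, "coco_pretrained.pth")

# Warm-start the HOI model from the COCO checkpoint; strict=False loads the
# shared weights and leaves the newly added HOI head randomly initialized.
hoi_model = HOIModel()
checkpoint = torch.load("coco_pretrained.pth", map_location="cpu")
missing, unexpected = hoi_model.load_state_dict(checkpoint["model"], strict=False)
print("missing keys:", missing)        # the HOI-specific verb_head.* parameters
print("unexpected keys:", unexpected)  # none in this toy example
```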

KentaroTakemoto commented 1 year ago

Thank you. I'll give it a try.

KentaroTakemoto commented 1 year ago

Thanks to your advice, I managed to get almost the same accuracy as yours. Thank you very much!

YuxiaoWang-AI commented 1 year ago

Hello, I would like to ask whether you have encountered this error: "RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable)."
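
For reference, workaround (1) from that error message corresponds to something like the following when wrapping the model, shown here as a generic PyTorch DDP sketch rather than the repo's actual engine code:

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn

# Generic DDP setup (launch with torchrun); the linear layer stands in for
# the actual HOI network.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = nn.Linear(256, 256).cuda()
model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    # Suggestion (1) from the error message: let DDP tolerate parameters
    # that receive no gradient in a given forward pass, e.g. when an
    # auxiliary-loss branch is skipped.
    find_unused_parameters=True,
)
```

The alternative the message lists is to make sure every output of forward actually contributes to the loss.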