DirtyHarryLYL / Transferable-Interactiveness-Network

Code for Transferable Interactiveness Knowledge for Human-Object Interaction Detection. (CVPR'19, TPAMI'21)
MIT License

Bad evaluation results #76

Closed · blueballoonboom closed this issue 3 years ago

blueballoonboom commented 3 years ago

Hello, the evaluation results are bad for both the downloaded pre-trained model and the model trained with the provided code on the HICO-DET dataset. These are my results:

Training results on a 2080Ti:

setting: def
exp_name: rcnn_caffenet_ho_pconv_ip1_s
score_blob: n/a
mAP / mRec (full): 0.0190 / 0.1635
mAP / mRec (rare): 0.0144 / 0.1403
mAP / mRec (non-rare): 0.0203 / 0.1704

setting: ko
exp_name: rcnn_caffenet_ho_pconv_ip1_s
score_blob: n/a
mAP / mRec (full): 0.0309 / 0.1635
mAP / mRec (rare): 0.0242 / 0.1403
mAP / mRec (non-rare): 0.0328 / 0.1704

Test with the pre-trained model on HICO-DET:

setting: def
exp_name: rcnn_caffenet_ho_pconv_ip1_s
score_blob: n/a
mAP / mRec (full): 0.0192 / 0.1641
mAP / mRec (rare): 0.0190 / 0.1432
mAP / mRec (non-rare): 0.0192 / 0.1704

score_blob: n/a
mAP / mRec (full): 0.0307 / 0.1641
mAP / mRec (rare): 0.0281 / 0.1432
mAP / mRec (non-rare): 0.0315 / 0.1704

Environment:

Python 3.6
datasets 1.9.0
easydict 1.8
h5py 3.1.0
numpy 1.19.5
opencv-python 3.4.3.18
pandas 1.1.5
Pillow 8.3.1
pip 21.1.3
scipy 1.5.2
six 1.16.0
tensorflow 1.12.0
tensorflow-gpu 1.12.0
utils 0.9.0

Looking forward to your reply!

DirtyHarryLYL commented 3 years ago

There are only limited clues to dig into here. Is the "pretrained model" the COCO-pretrained one? Without fine-tuning, that model indeed performs badly. Fine-tuning costs a lot of time; we usually tune it for 2-3 days, sometimes even longer (because of the one-batch-one-image training scheme). If you want a quicker fine-tuning, you can start from the iCAN model instead, since TIN can directly load iCAN's trained weights.
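
For illustration, here is a minimal sketch of what such a warm start could look like in TensorFlow 1.x. This is an assumption for clarity, not the repo's actual training code: the checkpoint path and the graph-building step are placeholders, and the real restore logic lives in the repo's training script.

```python
# Sketch (assumption): warm-starting TIN fine-tuning from an iCAN / COCO
# checkpoint instead of training from scratch. Paths and graph construction
# are hypothetical placeholders.
import tensorflow as tf

ckpt_path = '/path/to/iCAN_or_COCO_pretrained/model.ckpt'  # hypothetical path

# ... build the TIN graph here (network definition from the repo) ...

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Restore only the variables that actually exist in the checkpoint, so any
    # layers that are new in TIN keep their fresh initialization.
    reader = tf.train.NewCheckpointReader(ckpt_path)
    ckpt_vars = set(reader.get_variable_to_shape_map().keys())
    restorable = [v for v in tf.global_variables() if v.op.name in ckpt_vars]

    tf.train.Saver(restorable).restore(sess, ckpt_path)

    # ... continue with the normal fine-tuning loop (2-3 days, as noted above) ...
```

With a warm start like this, the untrained layers still need fine-tuning before evaluation; running the evaluation directly on the un-fine-tuned weights will give low mAP numbers like the ones reported above.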