hitachi-rd-cv / qpic

Repo for CVPR2021 paper "QPIC: Query-Based Pairwise Human-Object Interaction Detection with Image-Wide Contextual Information"
Apache License 2.0

How to pretrain detector for VCOCO? #4

Closed weiyunfei closed 3 years ago

weiyunfei commented 3 years ago

Hi, thanks for your excellent work. I am confused about the pre-training for V-COCO. In your README.md, you state that pre-training has to be carried out for the V-COCO training, since some images in the V-COCO evaluation set are contained in DETR's training set; however, the provided training command simply uses the pre-trained DETR model without any further pre-training. Should I pre-train on a dataset that excludes the V-COCO evaluation images, or just follow your training command?

tamtamz commented 3 years ago

You can follow the training instructions, but you have to prepare the pre-trained model yourself before training. Follow the steps below.

  1. Exclude the images in the V-COCO evaluation set from the annotation file of the COCO train2017 set.
  2. Train DETR with the annotation file.
  3. Convert the trained parameters of DETR for QPIC's training.
  4. Start training with the converted parameters.
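Step 1 above amounts to filtering a COCO-format annotation file by image id. As a rough sketch (the function name `exclude_images` and the synthetic annotation dict are my own illustration, not part of the QPIC repo; in real use you would load `instances_train2017.json`, collect the V-COCO evaluation image ids, and `json.dump` the filtered result):

```python
import json


def exclude_images(coco_ann: dict, exclude_ids: set) -> dict:
    """Return a copy of a COCO-format annotation dict with the given
    image ids (and their annotations) removed."""
    return {
        **coco_ann,
        "images": [im for im in coco_ann["images"]
                   if im["id"] not in exclude_ids],
        "annotations": [a for a in coco_ann["annotations"]
                        if a["image_id"] not in exclude_ids],
    }


# Tiny synthetic example standing in for instances_train2017.json.
ann = {
    "images": [{"id": 1}, {"id": 2}, {"id": 3}],
    "annotations": [{"id": 10, "image_id": 1}, {"id": 11, "image_id": 2}],
    "categories": [{"id": 1, "name": "person"}],
}
filtered = exclude_images(ann, {2})  # drop image 2 and its annotation
print(len(filtered["images"]), len(filtered["annotations"]))  # 2 1
print(json.dumps(filtered["images"]))
```

Steps 2–4 then use the standard DETR training entry point with this filtered file, followed by the repo's parameter-conversion script as described in the README.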

weiyunfei commented 3 years ago

Could you release the pre-trained parameters for V-COCO?

tamtamz commented 3 years ago

We do not plan to release the pre-trained parameters. Please pre-train the model yourself.