jackroos / VL-BERT

Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations".
MIT License

Query regarding Pretrained model #3

Closed prajjwal1 closed 4 years ago

prajjwal1 commented 4 years ago

Hi,

In the Pre-training VL-BERT section, you've highlighted the tasks on which the model was trained:

  1. Pre-training on Conceptual Captions
  2. Masked Language Modeling with Visual Clues
  3. Masked RoI Classification with Linguistic Clues

I had some questions:

  1. Was the pretrained model you've provided trained on all 3 objectives? If yes, can it be used directly for fine-tuning? If not, do I need to perform pre-training myself?
  2. Can you please clarify which dataset was used for each of the three objectives?
  3. In the code, I see that the pretrained model's state dict only partially updates the weights of the instantiated model. Why is that? Are some parts of the model initialized differently, as mentioned in the paper?
jackroos commented 4 years ago
  1. Actually, there are only two pre-training tasks: Masked Language Modeling with Visual Clues and Masked RoI Classification with Linguistic Clues. The pre-trained models provided in this repo have been trained on both tasks, so you don't need to perform pre-training yourself.
  2. We use Conceptual Captions for both tasks, and English Wikipedia & BookCorpus only for the Masked Language Modeling task.
  3. Could you provide a link to the exact line of code? I'm not sure which one you mean.

Thanks!
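To make the two objectives concrete, here is a minimal, self-contained sketch of BERT-style masking as used in such pre-training. It is not the repo's actual code: the function name, the flat 15% replacement rate, and the `-100` ignore label are illustrative simplifications (real BERT-style masking also uses an 80/10/10 mask/random/keep split).

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", prob=0.15, rng=None):
    """Illustrative BERT-style masking: each position is replaced with the
    mask token with probability `prob`; the original token becomes the
    label at masked positions, and -100 marks positions that should not
    contribute to the loss. Not the repo's implementation."""
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < prob:
            masked.append(mask_token)
            labels.append(tok)   # predict the original token here
        else:
            masked.append(tok)
            labels.append(-100)  # ignored by the loss
    return masked, labels
```

The same masking idea applies to both objectives: for Masked Language Modeling with Visual Clues the masked items are text tokens, while for Masked RoI Classification with Linguistic Clues the masked items are region features and the labels are object categories.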

prajjwal1 commented 4 years ago

Thanks for the reply.

  1. For (3), I meant this line of code. Is it only partially loading the weights?

I have some more questions.

  1. Could you also share the weights for the Fast R-CNN module? It seems the pretrained model is meant for ResNetVLBert only.
  2. Do I need to train Fast R-CNN on Visual Genome for my task?
  3. I want to port the Fast R-CNN to torchvision's Faster R-CNN. What does obj_reps actually represent here? Are these the box predictions coming out of the RoI head? You also seem to use a 'cnn_regularization' loss, which is a cross-entropy loss over classes; what about an MSE loss for bounding-box regression? I think an MSE loss would be necessary for improving region proposals.
  4. If I were to port to torchvision's faster_rcnn implementation (please see here), would the detections from self.transform.postprocess be 'obj_reps', and would the cross-entropy loss be the 'cnn_regularization' loss?
jackroos commented 4 years ago

@prajjwal1

  1. The partial loading is because the downstream tasks and the pre-training task may have different prediction heads; only the common weights (including the Fast R-CNN, VL-BERT, and possibly some shared heads) are loaded.
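The partial-loading pattern described above can be sketched as a plain dictionary merge. This is a hypothetical minimal version, not the repo's code: the key names are made up, and real code would also check tensor shapes and typically use `load_state_dict(..., strict=False)`.

```python
def partial_load(model_state, pretrained_state):
    """Merge pretrained weights into a model's state dict, keeping only
    the keys the two share; task-specific heads retain their fresh
    (random) initialization. Sketch only, with illustrative key names."""
    merged = dict(model_state)
    loaded, skipped = [], []
    for key, value in pretrained_state.items():
        if key in merged:  # real code would also verify tensor shapes match
            merged[key] = value
            loaded.append(key)
        else:
            skipped.append(key)  # e.g. a pre-training-only prediction head
    return merged, loaded, skipped
```

Running it on two toy state dicts shows the effect: the shared backbone weight is overwritten, the pre-training-only head is skipped, and the downstream head keeps its initialization.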
  2. Actually, the weights of the Fast R-CNN are included: https://github.com/jackroos/VL-BERT/blob/28cbde93de9e9670fdcdf2b75af2855dafe1e280/pretrain/modules/resnet_vlbert_for_pretraining_multitask.py#L195-L197. I think you have some misunderstanding about the Fast R-CNN in our model: it's Fast R-CNN, not Faster R-CNN, so there is no RPN in it. Actually, our workflow is:

P.S. obj_reps refers to the visual feature of each RoI used in VL-BERT, obj_reps_raw means the RoI features coming out of the RoI head, and cnn_regularization is deprecated; we don't use it in either pre-training or fine-tuning.
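The Fast R-CNN distinction above can be summarized in a short sketch. This is not the repo's code; `backbone` and `roi_pool` are placeholder callables standing in for the shared convolutional trunk and an RoI pooling operator (e.g. torchvision's `roi_align`). The key point it illustrates is that Fast R-CNN consumes externally supplied boxes, so there is no RPN and no proposal-regression loss.

```python
def fast_rcnn_features(backbone, roi_pool, image, boxes):
    """Sketch of the Fast R-CNN feature path: boxes are given as input
    (from annotations or a precomputed proposal file), not predicted by
    an RPN. Returns one feature per RoI (the obj_reps_raw in VL-BERT's
    terminology). Placeholder callables, not the actual modules."""
    feature_map = backbone(image)               # shared conv features
    obj_reps_raw = roi_pool(feature_map, boxes)  # pool one vector per given RoI
    return obj_reps_raw
```

With stub callables you can see the shape of the contract: the output has one entry per input box, regardless of how the boxes were obtained.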