microsoft / Oscar

Oscar and VinVL

Error while running Image/Text retrieval inference task #104

Open · Prat1510 opened this issue 3 years ago

Prat1510 commented 3 years ago

I am trying to run oscar/run_retrieval.py using the base-vg-labels Oscar model and the 1k COCO test set.

```
python oscar/run_retrieval.py \
    --do_test \
    --do_eval \
    --test_split test \
    --num_captions_per_img_val 5 \
    --eval_img_keys_file datasets/coco_ir/test_img_keys_1k.tsv \
    --cross_image_eval \
    --per_gpu_eval_batch_size 64 \
    --eval_model_dir pretrained_models/base-vg-labels/ep_67_588997
```

The following error is raised (terminal output):

```
2021-06-02 09:08:31,344 vlpretrain WARNING: Device: cpu, n_gpu: 0
2021-06-02 09:08:31,344 vlpretrain INFO: output_mode: classification, #Labels: 2
2021-06-02 09:08:31,370 vlpretrain INFO: Evaluate the following checkpoint: pretrained_models/base-vg-labels/ep_67_588997
Traceback (most recent call last):
  File "oscar/run_retrieval.py", line 664, in <module>
    main()
  File "oscar/run_retrieval.py", line 620, in main
    model = model_class.from_pretrained(checkpoint, config=config)
  File "/export/home/chaudhury/Oscar/transformers/pytorch_transformers/modeling_utils.py", line 450, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/export/home/chaudhury/Oscar/oscar/modeling/modeling_bert.py", line 297, in __init__
    self.loss_type = config.loss_type
AttributeError: 'BertConfig' object has no attribute 'loss_type'
```

I checked oscar/modeling/modeling_bert.py, where the `loss_type` attribute of the BertConfig object is accessed. That file imports BertConfig from transformers.pytorch_transformers.modeling_bert, so I looked at the BertConfig class in transformers/pytorch_transformers/modeling_bert.py, and it does not define a `loss_type` attribute. Since the attribute never exists on the config, the AttributeError is bound to arise.
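For reference, a minimal sketch that reproduces the failure (the import path follows the submodule layout described above; depending on how the pinned fork was installed it may be `pytorch_transformers.modeling_bert` instead, and the checkpoint path is the one from the command above):

```python
# Minimal reproduction sketch: load the checkpoint's config the same way
# run_retrieval.py does, then touch the attribute that modeling_bert.py reads.
from transformers.pytorch_transformers.modeling_bert import BertConfig

config = BertConfig.from_pretrained("pretrained_models/base-vg-labels/ep_67_588997")
# The checkpoint's config.json carries no "loss_type" entry, so the
# attribute was never set on the config object:
print(config.loss_type)  # AttributeError: 'BertConfig' object has no attribute 'loss_type'
```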

How can I solve this problem? Please help.

- Oscar/oscar/modeling/modeling_bert.py: https://github.com/microsoft/Oscar/blob/master/oscar/modeling/modeling_bert.py
- transformers/pytorch_transformers/modeling_bert.py: https://github.com/huggingface/transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/modeling_bert.py

cppntn commented 3 years ago

Could you try `pip install --upgrade transformers`?

SwatiTiwarii commented 3 years ago

> Could you try `pip install --upgrade transformers`?

@antocapp I am also facing a similar issue. As I understand it, https://github.com/microsoft/Oscar pins a specific transformers branch; if we upgrade to the latest transformers, it will break other parts of the code.

Prat1510 commented 3 years ago

@SwatiTiwarii @antocapp Yes, the latest transformers repo is quite different. One clear observation is that the specific transformers branch that Oscar points to has a pytorch_transformers subfolder, which many Python files in the Oscar repo import from, whereas the latest transformers has no pytorch_transformers subfolder at all. So the Oscar code is not compatible with the latest transformers branch.
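To illustrate the incompatibility (a sketch; the exact import path depends on how the pinned submodule was installed):

```python
# On Oscar's pinned transformers branch, this package layout exists:
from transformers.pytorch_transformers.modeling_bert import BertConfig  # OK on the pinned branch

# Current transformers releases have no pytorch_transformers subpackage,
# so after `pip install --upgrade transformers` the same import fails:
#   ModuleNotFoundError: No module named 'transformers.pytorch_transformers'
```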

Prat1510 commented 3 years ago

> Could you try `pip install --upgrade transformers`?
>
> @antocapp I am also facing a similar issue. As I understand it, https://github.com/microsoft/Oscar pins a specific transformers branch; if we upgrade to the latest transformers, it will break other parts of the code.

Please let me know if you have found a workaround, or when you find one. Also, if possible, could you describe the task in which you encountered the similar error?

iacercalixto commented 3 years ago

Same problem here. Any news on the matter?

CQUTWangHong commented 2 years ago

I see why.

If you trace the source code of run_retrieval.py, line 600 reads `if args.do_train:`.

That branch assigns

`config.loss_type = args.loss_type`

but we are running with `--do_test`, and the following `else` branch never assigns `config.loss_type`.

So we just add the line

`config.loss_type = args.loss_type`

after

`config = config_class.from_pretrained(checkpoint)`
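A sketch of the patched evaluation path (line numbers are approximate; the variable names follow the traceback above, and the added line mirrors the assignment the `do_train` branch already performs):

```python
# oscar/run_retrieval.py, main(), evaluation path (approximate location):
config = config_class.from_pretrained(checkpoint)
config.loss_type = args.loss_type  # mirror the do_train branch so the
                                   # model's __init__ can read config.loss_type
model = model_class.from_pretrained(checkpoint, config=config)
```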

@iacercalixto @Prat1510 @SwatiTiwarii