dandelin / ViLT

Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"
Apache License 2.0

AttributeError: module 'vilt' has no attribute 'modules' #20

Closed: leonodelee closed this issue 2 years ago

leonodelee commented 3 years ago

I ran into an error:

File "run.py",line 6, in from vilt.modules import ViLTransformerSS File "ViLT/vilt/moudles/vilt/moudules/init.py",line 1, in form .vilt_module import ViLTransformerSS File "ViLT/vilt/moudles/vilt_moudule.py",line 4, in import vilt.module.vision_transformer as vit AttributeError: module 'vilt' has no attribute 'modules'

when I run the "Evaluate VQAv2" command

dandelin commented 3 years ago

@leonodelee

Hi! Please try this import statement instead: from vilt.modules import vision_transformer as vit
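For context, here is a sketch of what that change would look like in vilt/modules/vilt_module.py, assuming the failing line is the one shown in the traceback above. On Python 3.6, `import a.b as c` binds `c` via an attribute lookup on `a`, which fails while the `vilt.modules` package is still mid-initialization; the `from ... import` form avoids that.

```python
# vilt/modules/vilt_module.py (sketch of the suggested change)

# Before: fails on Python 3.6 because `vilt.modules` is still being
# initialized when this line runs, so the attribute lookup on `vilt` raises
# AttributeError: module 'vilt' has no attribute 'modules'.
# import vilt.modules.vision_transformer as vit

# After: the from-import resolves the submodule directly, so it works even
# while the parent package is only partially initialized.
from vilt.modules import vision_transformer as vit
```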

leonodelee commented 3 years ago

Thank you for your kind help, but I ran into another problem when running the evaluation command:

python run.py with data_root=/home/ViLT/data/VQAv2/vqa_final_data num_gpus=2 num_nodes=1 per_gpu_batchsize=8 task_finetune_vqa_randaug test_only=True load_path="weights/vilt_vqa.ckpt"

The output is listed below:

WARNING - root - Changed type of config entry "max_steps" from int to NoneType
WARNING - ViLT - No observers have been added to this run
INFO - ViLT - Running command 'main'
INFO - ViLT - Started
Global seed set to 0
INFO - lightning - Global seed set to 0
GPU available: True, used: True
INFO - lightning - GPU available: True, used: True
TPU available: None, using: 0 TPU cores
INFO - lightning - TPU available: None, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
INFO - lightning - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
Using native 16bit precision.
INFO - lightning - Using native 16bit precision.
Missing logger folder: result/finetune_vqa_randaug_seed0_from_vilt_vqa
WARNING - lightning - Missing logger folder: result/finetune_vqa_randaug_seed0_from_vilt_vqa
Global seed set to 0
INFO - lightning - Global seed set to 0
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
INFO - lightning - initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
WARNING - root - Changed type of config entry "max_steps" from int to NoneType
WARNING - ViLT - No observers have been added to this run
INFO - ViLT - Running command 'main'
INFO - ViLT - Started
Global seed set to 0
INFO - lightning - Global seed set to 0
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
INFO - lightning - LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
Using native 16bit precision.
INFO - lightning - Using native 16bit precision.
Global seed set to 0
INFO - lightning - Global seed set to 0
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
INFO - lightning - initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
INFO - torch.distributed.distributed_c10d - Added key: store_based_barrier_key:1 to store for rank: 1
INFO - torch.distributed.distributed_c10d - Rank 1: Completed store-based barrier for 2 nodes.
INFO - torch.distributed.distributed_c10d - Added key: store_based_barrier_key:1 to store for rank: 0
INFO - torch.distributed.distributed_c10d - Rank 0: Completed store-based barrier for 2 nodes.
ERROR - ViLT - Failed after 0:00:21!
ERROR - ViLT - Failed after 0:00:07!
Traceback (most recent calls WITHOUT Sacred internals): File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/wrapt/wrappers.py", line 567, in call args, kwargs) File "run.py", line 73, in main trainer.test(model, datamodule=dm) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 755, in test results = self.test_given_model(model, test_dataloaders) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 820, in test_given_model results = self.fit(model) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit results = self.accelerator_backend.train() File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 268, in ddp_train self.trainer.call_setup_hook(model) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in call_setup_hook self.datamodule.setup(stage_name) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*args, *kwargs) File "/home/liqiang/ViLT/vilt/datamodules/multitask_datamodule.py", line 34, in setup dm.setup(stage) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(args, *kwargs) File "/home/liqiang/ViLT/vilt/datamodules/vqav2_datamodule.py", line 19, in setup super().setup(stage) File "/home/liqiang/ViLT/vilt/datamodules/datamodule_base.py", line 137, in setup self.set_train_dataset() File "/home/liqiang/ViLT/vilt/datamodules/datamodule_base.py", line 84, in set_train_dataset image_only=self.image_only, File "/home/liqiang/ViLT/vilt/datasets/vqav2_dataset.py", line 21, in init remove_duplicate=False, File "/home/liqiang/ViLT/vilt/datasets/base_dataset.py", line 53, in init self.table_names += [name] len(tables[i]) IndexError: list index out of range

Traceback (most recent calls WITHOUT Sacred internals): File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/wrapt/wrappers.py", line 567, in call args, kwargs) File "/home/liqiang/ViLT/run.py", line 73, in main trainer.test(model, datamodule=dm) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 755, in test results = self.test_given_model(model, test_dataloaders) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 820, in test_given_model results = self.fit(model) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit results = self.accelerator_backend.train() File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 268, in ddp_train self.trainer.call_setup_hook(model) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in call_setup_hook self.datamodule.setup(stage_name) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*args, *kwargs) File "/home/liqiang/ViLT/vilt/datamodules/multitask_datamodule.py", line 34, in setup dm.setup(stage) File "/root/anaconda3/envs/VILT/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(args, *kwargs) File "/home/liqiang/ViLT/vilt/datamodules/vqav2_datamodule.py", line 19, in setup super().setup(stage) File "/home/liqiang/ViLT/vilt/datamodules/datamodule_base.py", line 137, in setup self.set_train_dataset() File "/home/liqiang/ViLT/vilt/datamodules/datamodule_base.py", line 84, in set_train_dataset image_only=self.image_only, File "/home/liqiang/ViLT/vilt/datasets/vqav2_dataset.py", line 21, in init remove_duplicate=False, File "/home/liqiang/ViLT/vilt/datasets/base_dataset.py", line 53, in init self.table_names += [name] len(tables[i]) IndexError: list index out of range

dandelin commented 3 years ago

It seems the datasets are not in /home/ViLT/data/VQAv2/vqa_final_data. The error occurs when the arrow files are not in the data_root.
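A quick way to confirm is to check whether the expected arrow files are actually inside data_root. Below is a minimal sketch; the file names are an assumption based on the repo's VQAv2 preparation scripts, so verify them against vilt/datasets/vqav2_dataset.py in your checkout.

```python
import os

# Hypothetical sanity check: print which of the VQAv2 arrow files exist in
# data_root. The expected names below are assumptions taken from the repo's
# data-preparation docs, not an authoritative list.
data_root = "/home/ViLT/data/VQAv2/vqa_final_data"
expected = [
    "vqav2_train.arrow",
    "vqav2_trainable_val.arrow",
    "vqav2_rest_val.arrow",
    "vqav2_test.arrow",
    "vqav2_test-dev.arrow",
]
for name in expected:
    path = os.path.join(data_root, name)
    status = "OK     " if os.path.isfile(path) else "MISSING"
    print(f"{status} {path}")
```

If any of these are missing, rerun the VQAv2 data preparation step so that the arrow files end up directly under the directory passed as data_root.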

leonodelee commented 2 years ago

Thank you very much! My VQA data preparation was wrong.