liyucheng09 / MetaphorFrame

FrameBERT: Conceptual Metaphor Detection with Frame Embedding Learning. Presented at EACL 2023.

Unable to run any of the model_type in ["MELBERT_MIP", "MELBERT", "FrameMelbert"] (main.py, line 322) #7


saishivakalyan commented 10 months ago

```
bash-4.4$ python main.py --data_dir data/VUA20 --task_name vua --model_type MELBERT --class_weight 3 --bert_model roberta-base --num_train_epoch 3 --train_batch_size 32 --learning_rate 3e-5 --warmup_epoch 2
/scratch/schalla7/Project/project_env/lib/python3.9/site-packages/torch/cuda/__init__.py:740: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11050). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
['--data_dir', 'data/VUA20', '--task_name', 'vua', '--model_type', 'MELBERT', '--class_weight', '3', '--bert_model', 'roberta-base', '--num_train_epoch', '3', '--train_batch_size', '32', '--learning_rate', '3e-5', '--warmup_epoch', '2']
main config saved in /scratch/schalla7/Project/MetaphorFrame-main/checkpoints/framebert/roberta-base/9_20231121-0935
device: cpu  n_gpu: 0
Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/scratch/schalla7/Project/project_env/lib/python3.9/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
  warnings.warn(
Running training
  Batch size = 32
  Num steps = 15015
Epoch: 0%|
You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
/scratch/schalla7/Project/project_env/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2624: UserWarning: max_length is ignored when padding=True and there is no truncation strategy. To pad to max length, use padding='max_length'.
  warnings.warn(
Iteration: 0it [00:00, ?it/s]
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/scratch/schalla7/Project/MetaphorFrame-main/main.py", line 602, in <module>
    main()
  File "/scratch/schalla7/Project/MetaphorFrame-main/main.py", line 129, in main
    model, best_result = run_train(
  File "/scratch/schalla7/Project/MetaphorFrame-main/main.py", line 324, in run_train
    logits, frame_loss = model(
  File "/scratch/schalla7/Project/project_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/scratch/schalla7/Project/project_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'frame_input_ids'
```

liyucheng09 commented 10 months ago

Looking into it now.

liyucheng09 commented 10 months ago

I cannot reproduce your error. Trying to figure out how this happens now.

liyucheng09 commented 10 months ago

This error is due to the default config multitask = True in main_config.cfg.

Multitask is for FrameBERT only. Since you were trying to use MelBERT with multitask = True, the training loop passed frame inputs to a model that does not accept them, which caused this error. I have now set multitask to False by default.
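The mechanism behind the traceback can be sketched in plain Python (class names and signatures here are illustrative stand-ins, not the repo's actual code): with multitask = True the training loop forwards frame_input_ids to every model, but only the FrameBERT variant's forward() declares that parameter.

```python
class MelBERT:
    """Stand-in for the MELBERT model: forward() takes no frame inputs."""
    def forward(self, input_ids):
        return input_ids

class FrameBERT:
    """Stand-in for FrameBERT: the multitask model accepts frame_input_ids."""
    def forward(self, input_ids, frame_input_ids):
        return input_ids, frame_input_ids

# With multitask = True, the training loop passes frame inputs to every model:
batch = {"input_ids": [1, 2, 3], "frame_input_ids": [4, 5, 6]}

FrameBERT().forward(**batch)   # fine
try:
    MelBERT().forward(**batch) # raises TypeError, as in the traceback above
except TypeError as e:
    print(e)
```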

Try pulling the latest version of the repo and running it again.
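Alternatively, if you want to keep your current checkout, you can flip the flag yourself in main_config.cfg. A sketch of what to look for (the section name here is an assumption; keep whatever section the file already uses):

```ini
; main_config.cfg — disable the FrameBERT-only multitask objective
; when running MELBERT / MELBERT_MIP / FrameMelbert
[main]
multitask = False
```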