azamatolegen opened this issue 4 years ago
The command I am running is:
CUDA_VISIBLE_DEVICES=0,1,2,3 python run_classifier_TABSA.py --task_name semeval_NLI_M --data_dir data/semeval2014/bert-pair/ --vocab_file mBERT/vocab.txt --bert_config_file mBERT/config.json --init_checkpoint mBERT/pytorch_model.bin --eval_test --do_lower_case --max_seq_length 128 --train_batch_size 4 --learning_rate 5e-5 --num_train_epochs 2.0 --output_dir results/semeavl2014/NLI_M --seed 42
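A quick consistency check of the files passed above looks like this (just a sketch assuming the usual BERT state-dict layout; the embedding key may be named differently depending on how pytorch_model.bin was converted):

```python
# Sketch: check that vocab.txt, config.json and pytorch_model.bin agree with each other.
# Multilingual BERT has a much larger vocabulary than BERT-base (30522), so a mismatch
# here usually means the config or checkpoint is the wrong one for the model being loaded.
import json
import torch

config = json.load(open("mBERT/config.json"))
num_vocab_lines = sum(1 for _ in open("mBERT/vocab.txt", encoding="utf-8"))
state_dict = torch.load("mBERT/pytorch_model.bin", map_location="cpu")
# Key name assumed; it may carry a different prefix depending on how the checkpoint was converted.
emb = state_dict["bert.embeddings.word_embeddings.weight"]

print("config vocab_size:", config["vocab_size"])
print("vocab.txt lines  :", num_vocab_lines)
print("embedding shape  :", tuple(emb.shape))  # expect (vocab_size, hidden_size)
```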
Maybe you should check the differences between mBERT and BERT-base and modify the 'convert_tf_checkpoint_to_pytorch.py' file if necessary.
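For example, a minimal conversion could look like the sketch below. It uses the Hugging Face `transformers` helpers rather than the repo's `convert_tf_checkpoint_to_pytorch.py`, and the TensorFlow checkpoint path is a placeholder for wherever the downloaded mBERT files live:

```python
# Sketch: convert a TensorFlow mBERT checkpoint to pytorch_model.bin with
# Hugging Face transformers (requires both torch and tensorflow installed).
import torch
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

config = BertConfig.from_json_file("mBERT/config.json")           # the multilingual config
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, "mBERT/bert_model.ckpt")   # TF checkpoint prefix (placeholder)
torch.save(model.state_dict(), "mBERT/pytorch_model.bin")
```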
For GPU-memory-related issues, see https://github.com/HSLCY/ABSA-BERT-pair/issues/1
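If the error is a CUDA out-of-memory failure, the usual workaround besides lowering --train_batch_size is gradient accumulation. A generic PyTorch sketch (not the repo's training loop; `model`, `criterion`, `optimizer`, and `loader` are placeholders):

```python
# Generic gradient-accumulation sketch: keep the per-step batch small and
# accumulate gradients over several steps to reach the intended effective batch size.
def train_one_epoch(model, criterion, optimizer, loader, accumulation_steps=8):
    model.train()
    optimizer.zero_grad()
    for step, (inputs, labels) in enumerate(loader):
        loss = criterion(model(inputs), labels) / accumulation_steps  # average over micro-batches
        loss.backward()                                               # gradients accumulate in-place
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```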
Great article! Thank you for publishing the code!
I am following your steps, and on step 3, when I run the code for semeval_NLI_M, I get this error:
Could you please help me figure this out? I am using the mBERT model.