LinyangLee / Token-Aware-VAT

Code for our AAAI2021 paper: Token-Aware Virtual Adversarial Training For Language Understanding.

Model name 'outputs/rte_tavat' not found #2

Closed Alicebash closed 2 years ago

Alicebash commented 3 years ago

Hello, I ran into a problem when running the following command: `CUDA_VISIBLE_DEVICES=5 python token_vat.py --model_type bert --model_name_or_path bert-base-uncased --do_lower_case --learning_rate 2e-5 --do_train --task_name rte --data_dir data/RTE/ --output_dir outputs/rte_tavat --overwrite_output_dir --max_seq_length 512 --save_steps 50 --logging_steps 50 --evaluate_during_training --per_gpu_train_batch_size 8 --warmup_steps 30 --num_train_epochs 9 --adv_lr 2e-2 --adv_init_mag 1.6e-1 --adv_max_norm 1.4e-1 --adv_steps 2 --vocab_size 30522 --hidden_size 768 --adv_train 1 --gradient_accumulation_steps 1`

The error message is as follows. Could you tell me if there is a problem with the configuration or the steps? How can I solve it?

06/26/2021 16:47:53 - INFO - transformers.tokenization_utils - Model name 'outputs/rte_tavat' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming 'outputs/rte_tavat' is a path, a model identifier, or url to a directory containing tokenizer files.
06/26/2021 16:47:53 - INFO - transformers.tokenization_utils - Didn't find file outputs/rte_tavat/added_tokens.json. We won't load it.
06/26/2021 16:47:53 - INFO - transformers.tokenization_utils - loading file outputs/rte_tavat/vocab.txt
06/26/2021 16:47:53 - INFO - transformers.tokenization_utils - loading file None
06/26/2021 16:47:53 - INFO - transformers.tokenization_utils - loading file outputs/rte_tavat/special_tokens_map.json
06/26/2021 16:47:53 - INFO - transformers.tokenization_utils - loading file outputs/rte_tavat/tokenizer_config.json
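For context, the quoted messages are INFO-level log lines, not fatal errors: when loading a tokenizer, transformers first checks whether the string matches a known shortcut name (like `bert-base-uncased`) and otherwise assumes it is a local path or URL, which is exactly what the "not found in model shortcut name list … Assuming 'outputs/rte_tavat' is a path" message reports. A minimal sketch of that resolution step (`resolve_tokenizer_source` is a hypothetical helper for illustration, not an actual transformers API, and the shortcut list below is abbreviated):

```python
# Hypothetical sketch of how the tokenizer loader decides between a
# shortcut model name and a local directory. Not the real library code;
# the actual list in transformers contains all BERT variants.
SHORTCUT_NAMES = {
    "bert-base-uncased",
    "bert-large-uncased",
    "bert-base-cased",
    # ... remaining shortcut names elided
}

def resolve_tokenizer_source(name_or_path: str) -> str:
    """Return 'shortcut' for a known model name; otherwise assume the
    string is a path/identifier/URL, which triggers the INFO message
    seen in the log above."""
    if name_or_path in SHORTCUT_NAMES:
        return "shortcut"
    return "path"

print(resolve_tokenizer_source("bert-base-uncased"))  # shortcut
print(resolve_tokenizer_source("outputs/rte_tavat"))  # path
```

So `outputs/rte_tavat` being absent from the shortcut list is expected here: during evaluation the script reloads the tokenizer from the training `--output_dir`, and the subsequent "loading file outputs/rte_tavat/vocab.txt" lines show that the local files were found.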

LinyangLee commented 3 years ago

Hi, sorry about the late response! It should work, since it is based on the huggingface transformers run_glue.py script. Do you have any problems running the run_glue script? Feel free to contact me!