PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis, etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

[Question]: Errors when running the zero-shot classification fine-tuning script #7140

Closed: Dhaizei closed this issue 9 months ago

Dhaizei commented 9 months ago

Please describe your question

Script executed:

python run_train.py \
    --device gpu \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --seed 1000 \
    --model_name_or_path utc-base \
    --output_dir ./checkpoint/model_best \
    --dataset_path ./data/ \
    --max_seq_length 512 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --num_train_epochs 20 \
    --learning_rate 1e-5 \
    --do_train \
    --do_eval \
    --do_export \
    --export_model_dir ./checkpoint/model_best \
    --overwrite_output_dir \
    --disable_tqdm True \
    --metric_for_best_model macro_f1 \
    --load_best_model_at_end True \
    --save_total_limit 1 \
    --save_plm

metrics: {'eval_runtime': 0.019, 'eval_samples_per_second': 0.0, 'eval_steps_per_second': 0.0, 'epoch': 0.6144}
metric_to_check: eval_macro_f1
Traceback (most recent call last):
  File "D:\work\test\PaddleNLP\applications\zero_shot_text_classification\run_train.py", line 154, in <module>
    main()
  File "D:\work\test\PaddleNLP\applications\zero_shot_text_classification\run_train.py", line 133, in main
    train_results = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "D:\software\anaconda3\envs\nlp\lib\site-packages\paddlenlp\trainer\trainer.py", line 888, in train
    self._maybe_log_save_evaluate(tr_loss, model, epoch, ignore_keys_for_eval, inputs=inputs)
  File "D:\software\anaconda3\envs\nlp\lib\site-packages\paddlenlp\trainer\trainer.py", line 1065, in _maybe_log_save_evaluate
    self._save_checkpoint(model, metrics=metrics)
  File "D:\software\anaconda3\envs\nlp\lib\site-packages\paddlenlp\trainer\trainer.py", line 1842, in _save_checkpoint
    metric_value = metrics[metric_to_check]
KeyError: 'eval_macro_f1'
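For context: the trainer prefixes --metric_for_best_model with "eval_" and then indexes the evaluation metrics dict, and the dict logged above contains only timing keys (note eval_samples_per_second: 0.0, which suggests the eval split was empty, so no task metric was ever computed), hence the KeyError. As a hedged illustration of what has to end up in that dict, here is a minimal, dependency-free macro F1 in a compute_metrics-style function. The function names are illustrative, not PaddleNLP internals:

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1, averaged with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # true class t was missed
    f1s = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

def compute_metrics(y_true, y_pred):
    # The trainer prefixes every key with "eval_", so "macro_f1" here
    # becomes "eval_macro_f1", matching --metric_for_best_model macro_f1.
    return {"macro_f1": macro_f1(y_true, y_pred)}
```

If evaluation never produces such a dict (for example, an empty ./data/ eval split), the lookup in _save_checkpoint has nothing to find, regardless of the training arguments.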

Dhaizei commented 9 months ago

Anyway, I just hard-coded metric_value = 0.9, and in the end it did not affect the model training. I just don't know where this metrics dict is supposed to compute the macro F1; it's simply not there, and I couldn't find it. Infuriating.
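Rather than hard-coding metric_value = 0.9 inside trainer.py, a gentler local workaround is to fail soft when the key is missing. This is only a sketch of the idea, not an actual PaddleNLP patch; the real fix is ensuring the eval set is non-empty so eval_macro_f1 is computed at all:

```python
def pick_best_metric(metrics, metric_to_check):
    """Return the metric used for best-model selection, tolerating a missing key.

    metrics: dict returned by evaluation, e.g. {"eval_macro_f1": 0.91, ...}
    metric_to_check: key the trainer wants, e.g. "eval_macro_f1"
    """
    if metric_to_check in metrics:
        return metrics[metric_to_check]
    # Some setups log the metric without the "eval_" prefix; try that too.
    if metric_to_check.startswith("eval_"):
        bare = metric_to_check[len("eval_"):]
    else:
        bare = metric_to_check
    # None signals "no usable metric this step"; the caller can then skip
    # the best-model comparison instead of crashing with a KeyError.
    return metrics.get(bare)
```
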

luoruijie commented 7 months ago

> Anyway, I just hard-coded metric_value = 0.9, and in the end it did not affect the model training. I just don't know where this metrics dict is supposed to compute the macro F1; it's simply not there, and I couldn't find it. Infuriating.

Give this a try, brother: I added --fp16 to train the model in fp16 precision:

python run_train.py \
    --device gpu \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --seed 1000 \
    --model_name_or_path utc-xbase \
    --output_dir ./checkpoint/model_best_12 \
    --dataset_path ./data/data_12 \
    --max_seq_length 512 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --num_train_epochs 20 \
    --learning_rate 1e-5 \
    --do_train \
    --do_eval \
    --do_export \
    --export_model_dir ./checkpoint/model_best_12 \
    --overwrite_output_dir \
    --disable_tqdm True \
    --metric_for_best_model macro_f1 \
    --load_best_model_at_end True \
    --save_total_limit 1 \
    --save_plm \
    --fp16 True