Please describe your question
I tried running predict with LLaMA myself and ran into a problem. Launch command:

python -u -m paddle.distributed.launch \
    --gpus "6,7" \
    --log_dir "output/$task_name""_log" \
    run_pretrain.py \
    --model_type "llama" \
    --model_name_or_path "facebook/llama-13b" \
    --tokenizer_name_or_path "facebook/llama-13b" \
    --input_dir "./data" \
    --output_dir "output/$task_name" \
    --split 949,50,1 \
    --max_seq_length 2048 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --per_device_eval_batch_size 4 \
    --scale_loss 512 \
    --tensor_parallel_degree 1 \
    --pipeline_parallel_degree 2 \
    --virtual_pp_degree 1 \
    --sequence_parallel 0 \
    --learning_rate 0.00001 \
    --min_learning_rate 0.000001 \
    --max_steps 10000 \
    --save_steps 5000 \
    --weight_decay 0.01 \
    --warmup_ratio 0.01 \
    --max_grad_norm 1.0 \
    --logging_steps 10 \
    --dataloader_num_workers 1 \
    --eval_steps 1000 \
    --report_to "visualdl" \
    --sharding "stage1" \
    --disable_tqdm true \
    --continue_training 1 \
    --recompute 1 \
    --do_predict \
    --device "gpu"

It fails with the following traceback:
Traceback (most recent call last):
  File "/home/PaddleNLP/llm/llama/run_pretrain_fc.py", line 549, in <module>
    main()
  File "/home/PaddleNLP/llm/llama/run_pretrain_fc.py", line 544, in main
    test_ret = trainer.predict(test_dataset)
  File "/home/PaddleNLP/paddlenlp/trainer/trainer.py", line 2139, in predict
    output = eval_loop(
  File "/home/PaddleNLP/paddlenlp/trainer/trainer.py", line 2011, in evaluation_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/home/PaddleNLP/paddlenlp/trainer/trainer.py", line 2229, in prediction_step
    return self.prediction_pipeline_step(model, inputs, prediction_loss_only, ignore_keys)
  File "/home/PaddleNLP/paddlenlp/trainer/trainer.py", line 2188, in prediction_pipeline_step
    loss = model.eval_batch([inputs, labels], compute_loss=True)
  File "/root/anaconda3/lib/python3.9/site-packages/paddle/nn/layer/layers.py", line 1474, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'LlamaForCausalLMPipe' object has no attribute 'eval_batch'
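For reference, my current reading (an assumption on my part, not confirmed by the maintainers): eval_batch is not defined on the raw PipelineLayer subclass LlamaForCausalLMPipe itself; in Paddle it is provided by the PipelineParallel wrapper that fleet.distributed_model() returns. The traceback therefore suggests trainer.predict is reaching prediction_pipeline_step with an unwrapped model. A minimal sketch of the distinction, assuming fleet hybrid parallelism with pp_degree matching --pipeline_parallel_degree 2 (the config values below are illustrative, not taken from the Trainer internals):

import paddle.distributed.fleet as fleet
from paddlenlp.transformers import LlamaConfig, LlamaForCausalLMPipe

# Set up hybrid parallelism with pipeline parallel degree 2,
# mirroring --pipeline_parallel_degree 2 from the launch command above.
strategy = fleet.DistributedStrategy()
strategy.hybrid_configs = {
    "dp_degree": 1,
    "mp_degree": 1,
    "pp_degree": 2,
}
fleet.init(is_collective=True, strategy=strategy)

config = LlamaConfig.from_pretrained("facebook/llama-13b")
model = LlamaForCausalLMPipe(config)    # raw PipelineLayer: no eval_batch attribute yet

# fleet.distributed_model wraps the PipelineLayer in PipelineParallel,
# which is where train_batch/eval_batch are defined.
model = fleet.distributed_model(model)
# loss = model.eval_batch([inputs, labels], compute_loss=True)

If that reading is right, running with --do_predict alone (no --do_train) may skip the wrapping step that normally happens before training, leaving the model as a bare LlamaForCausalLMPipe when eval_batch is called.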