Reminder
[X] I have read the README and searched the existing issues.
Reproduction
An error occurs when running the web API test: it reports that jieba is not defined, even though jieba is already installed. The base model is llama2 7b-hf, and the adapter was fine-tuned on alpaca 52k.
[INFO|modeling_utils.py:4000] 2024-03-16 21:14:53,310 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Llama-2-7b-hf.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:800] 2024-03-16 21:14:53,348 >> loading configuration file generation_config.json from cache at /home/ubuntu/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf/snapshots/8cca527612d856d7d32bd94f8103728d614eb852/generation_config.json
[INFO|configuration_utils.py:845] 2024-03-16 21:14:53,348 >> Generate config GenerationConfig {
"bos_token_id": 1,
"do_sample": true,
"eos_token_id": 2,
"max_length": 4096,
"pad_token_id": 0,
"temperature": 0.6,
"top_p": 0.9
}
03/16/2024 21:14:53 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
03/16/2024 21:14:53 - INFO - llmtuner.model.adapter - Merged 1 adapter(s).
03/16/2024 21:14:53 - INFO - llmtuner.model.adapter - Loaded adapter(s): saves/LLaMA2-7B/lora/train_lora_52k
03/16/2024 21:14:53 - INFO - llmtuner.model.loader - all params: 6738415616
[INFO|trainer.py:3376] 2024-03-16 21:14:53,515 >> Running Evaluation
[INFO|trainer.py:3378] 2024-03-16 21:14:53,515 >> Num examples = 10
[INFO|trainer.py:3381] 2024-03-16 21:14:53,515 >> Batch size = 8
100%|████████████████████████████████████████| 2/2 [00:03<00:00, 1.62s/it]
Exception in thread Thread-27 (run_exp):
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/llama_factory/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/ubuntu/anaconda3/envs/llama_factory/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/LLaMA-Factory/src/llmtuner/train/tuner.py", line 32, in run_exp
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/home/ubuntu/LLaMA-Factory/src/llmtuner/train/sft/workflow.py", line 83, in run_sft
metrics = trainer.evaluate(metric_key_prefix="eval", **gen_kwargs)
File "/home/ubuntu/anaconda3/envs/llama_factory/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 166, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/anaconda3/envs/llama_factory/lib/python3.10/site-packages/transformers/trainer.py", line 3229, in evaluate
output = eval_loop(
File "/home/ubuntu/anaconda3/envs/llama_factory/lib/python3.10/site-packages/transformers/trainer.py", line 3520, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/home/ubuntu/LLaMA-Factory/src/llmtuner/train/sft/metric.py", line 45, in __call__
hypothesis = list(jieba.cut(pred))
NameError: name 'jieba' is not defined
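The traceback shows the NameError is raised inside compute_metrics during evaluation, not at startup. A common cause of this pattern is that jieba was installed into a different Python environment than the one actually launching the web UI, so a guarded import fails silently. A minimal diagnostic sketch to verify which interpreter is running and whether the metric dependencies resolve in it (the module list and helper name here are illustrative, not taken from LLaMA-Factory):

```python
# Diagnostic sketch: check that the metric dependencies are importable
# by the *same* interpreter that runs the web UI. Run this with the
# exact `python` used to start the server.
import importlib.util
import sys


def check_module(name: str) -> bool:
    """Return True if `name` can be found by the current interpreter."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    print(f"interpreter: {sys.executable}")
    # jieba, nltk and rouge_chinese are assumed candidates here, based
    # on the jieba.cut call in the traceback above.
    for mod in ("jieba", "nltk", "rouge_chinese"):
        status = "OK" if check_module(mod) else f"MISSING -> python -m pip install {mod}"
        print(f"  {mod}: {status}")
```

If a module shows MISSING, installing it with that same interpreter (`python -m pip install jieba`) rather than a bare `pip` should resolve the NameError, since `python -m pip` guarantees the package lands in the environment the script runs under.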
Expected behavior
No response
System Info
transformers version: 4.38.2

Others
No response