echonoshy / cgft-llm

Practice to LLM.
MIT License
456 stars 74 forks

Error when chatting with the model, please advise #5

Closed xiang-hui744 closed 4 months ago

xiang-hui744 commented 4 months ago

Running `llamafactory-cli webchat cust/train_llama3_lora_sft.yaml` or `llamafactory-cli chat cust/train_llama3_lora_sft.yaml` fails with `ValueError: Some keys are not used by the HfArgumentParser: ['do_train', 'fp16', 'gradient_accumulation_steps', 'learning_rate', 'logging_steps', 'lr_scheduler_type', 'max_grad_norm', 'num_train_epochs', 'optim', 'output_dir', 'per_device_train_batch_size', 'report_to', 'save_steps', 'warmup_steps']`.
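The error occurs because the yaml is parsed into inference-time argument dataclasses, and any key that matches no dataclass field is rejected. Below is a minimal, dependency-free sketch of that behavior (the `ChatArguments` fields and the `parse_dict` helper are illustrative stand-ins, modeled on how transformers' `HfArgumentParser.parse_dict` reports unused keys; they are not the actual LLaMA-Factory classes):

```python
from dataclasses import dataclass, fields

@dataclass
class ChatArguments:
    # Illustrative subset of inference-time arguments.
    model_name_or_path: str = ""
    adapter_name_or_path: str = ""
    template: str = "default"
    finetuning_type: str = "lora"

def parse_dict(cfg: dict, allow_extra_keys: bool = False) -> ChatArguments:
    """Reject any yaml key that matches no field of the target dataclass."""
    known = {f.name for f in fields(ChatArguments)}
    extra = sorted(k for k in cfg if k not in known)
    if extra and not allow_extra_keys:
        raise ValueError(f"Some keys are not used by the HfArgumentParser: {extra}")
    return ChatArguments(**{k: v for k, v in cfg.items() if k in known})

# Feeding a training yaml to the chat parser trips the check:
train_cfg = {
    "model_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
    "do_train": True,
    "learning_rate": 1e-4,
    "num_train_epochs": 3,
}
try:
    parse_dict(train_cfg)
except ValueError as e:
    print(e)  # lists do_train, learning_rate, num_train_epochs
```

Training-only keys like `do_train` and `learning_rate` simply have no home in the chat arguments, which is exactly the key list shown in the traceback above.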

echonoshy commented 4 months ago

`train_llama3_lora_sft.yaml` is a training (fine-tuning) config, not a chat config.

Use this as a reference (create a new yaml config file and change the values to your actual paths):

```yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora
```
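Putting it together, a possible workflow sketch (the file name `cust/infer_llama3_lora_sft.yaml` is illustrative, and the paths must match your own model and adapter locations):

```shell
# Hypothetical file name and paths; adjust to your setup.
mkdir -p cust
cat > cust/infer_llama3_lora_sft.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora
EOF

# Then chat with the fine-tuned adapter (starts an interactive session):
# llamafactory-cli chat cust/infer_llama3_lora_sft.yaml
```

The key point is that the training yaml and the inference yaml are separate files: the chat commands only accept inference-time keys.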