Closed. wphtrying closed this issue 2 days ago.
Reply: Yes, this works.
System Info
Can I use this YAML? What should `export_device` and `template` be set to?
```yaml
### Note: DO NOT use quantized model or quantization_bit when merging lora adapters

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora

### export
export_dir: models/llama3_lora_sft
export_size: 2
export_device: cpu
export_legacy_format: false
```
Reproduction
```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
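For intuition about what this export step does: merging a LoRA adapter folds the low-rank update `B @ A` (scaled by `lora_alpha / r`) into the frozen base weight, which is also why the note above forbids a quantized base, quantized weights cannot absorb a dense update without extra error. A minimal numpy sketch of the merge math (dimensions and variable names are illustrative, not LLaMA-Factory internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 6x6 base weight with a rank-2 LoRA adapter.
d, r, alpha = 6, 2, 4
W = rng.standard_normal((d, d))  # frozen base weight
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection
scaling = alpha / r              # lora_alpha / lora_rank

# With an unmerged adapter the forward pass is:
#   y = x @ W.T + scaling * (x @ A.T @ B.T)
# Merging folds the update into a single dense weight:
W_merged = W + scaling * (B @ A)

# The merged weight reproduces the adapted forward pass exactly,
# since (B @ A).T == A.T @ B.T.
x = rng.standard_normal((3, d))
y_adapter = x @ W.T + scaling * (x @ A.T @ B.T)
y_merged = x @ W_merged.T
print(np.allclose(y_adapter, y_merged))  # True
```

After the merge, the adapter branch disappears entirely, which is why the exported model in `export_dir` can be loaded as a plain model with no `adapter_name_or_path`.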
Expected behavior
No response
Others
No response