Closed zlh-source closed 1 week ago
Use `llamafactory-cli api`.
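The suggestion above points at serving the model with `llamafactory-cli api`, which exposes an OpenAI-compatible endpoint. With such an endpoint, multiple completions per prompt can be requested through the `n` field of a chat-completions payload. A minimal sketch, assuming a locally served model; the model name, host, and port are placeholders, and whether the deployed backend actually honors `n` depends on the inference engine:

```python
import json

def chat_payload(prompt: str, n: int,
                 temperature: float = 0.7, top_p: float = 0.9) -> dict:
    """Build an OpenAI-style chat-completions payload requesting `n` samples."""
    return {
        "model": "local-model",  # placeholder: whatever name the API server reports
        "messages": [{"role": "user", "content": prompt}],
        "n": n,                  # number of completions to return per prompt
        "temperature": temperature,
        "top_p": top_p,
    }

payload = chat_payload("Hello", n=3)
print(json.dumps(payload, indent=2))

# To send it (assuming the API server's default local address):
# import requests
# resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
# texts = [c["message"]["content"] for c in resp.json()["choices"]]
```

Each returned completion arrives as a separate entry in the response's `choices` list.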
Reminder
System Info
```shell
llamafactory-cli train \
    --stage sft \
    --do_predict \
    --model_name_or_path ${save_model} \
    --eval_dataset ${eval_dataset} \
    --dataset_dir ./data \
    --template empty \
    --finetuning_type full \
    --output_dir ${pred_path} \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 2048 \
    --preprocessing_num_workers 16 \
    --per_device_eval_batch_size 16 \
    --predict_with_generate \
    --do_sample \
    --top_k 50 \
    --top_p ${top_p} \
    --temperature ${temperature}
```
Reproduction
None
Expected behavior
No response
Others
After SFT, can the model return multiple responses per prompt at inference time? I am looking for functionality similar to the `num_return_sequences` parameter of the Hugging Face `generate` function.
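For offline inference with Transformers directly, several samples per prompt come from `num_return_sequences`. A minimal sketch mirroring the sampling flags in the command above; the helper name is mine, and the `model.generate` call itself is left in comments because it requires a loaded checkpoint:

```python
def multi_sample_generate_kwargs(num_responses: int,
                                 top_p: float,
                                 temperature: float,
                                 top_k: int = 50,
                                 max_new_tokens: int = 512) -> dict:
    """Keyword arguments for `model.generate` that mirror the CLI sampling
    flags above and additionally request several sequences per prompt."""
    return {
        "do_sample": True,
        "top_k": top_k,
        "top_p": top_p,
        "temperature": temperature,
        "num_return_sequences": num_responses,  # the HF parameter in question
        "max_new_tokens": max_new_tokens,
    }

kwargs = multi_sample_generate_kwargs(num_responses=4, top_p=0.9, temperature=0.8)

# With a loaded model and tokenizer (omitted here):
# inputs = tokenizer(prompt, return_tensors="pt")
# outputs = model.generate(**inputs, **kwargs)  # returns 4 sequences
# texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

Note that `num_return_sequences` requires sampling (or beam search with enough beams); with greedy decoding all returned sequences would be identical.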