Hello, I have noticed that there is currently no 'local_inference' folder. How can I run inference with this model?
Hi! An interactive way to do local inference is:
python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model [PEFT_MODEL_FOLDER]
For more details, please check out the local inference recipe README or the model servers README.
We just had an update to our folder structure; the path has now been changed to recipes/quickstart/inference/local_inference/inference.py. Sorry for the trouble.
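So, assuming the script's flags are unchanged under the new layout, the command above would become:
python recipes/quickstart/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model [PEFT_MODEL_FOLDER]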
Yeah, I get it. Thanks.
🚀 The feature, motivation and pitch
I am new to llama-recipes. I have fine-tuned a Llama 3 model on the "openbookqa" dataset, and it stored the model for me at this path:
/research/cbim/medical/lh599/research/ruijiang/Dong/llama-recipes/PATH/to/save/PEFT/model
In this model folder, there are three files: adapter_config.json, adapter_model.safetensors, and README.md. My question is: how can I test this fine-tuned model? For example, I want to pass a question like "The sun is responsible for?" and have my model give me an answer.
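One way to sanity-check the adapter outside of the llama-recipes script is to load it directly with transformers and peft. This is a minimal sketch, not the repo's own method: the base model name and the adapter path below are assumptions taken from this thread, so substitute your own values.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model: must match the model used during fine-tuning.
base_model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
# Assumed adapter path: the folder containing adapter_config.json and
# adapter_model.safetensors produced by fine-tuning.
peft_model_path = "PATH/to/save/PEFT/model"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the trained PEFT/LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, peft_model_path)
model.eval()

prompt = "The sun is responsible for?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

This mirrors what passing --peft_model to the inference script above is meant to do, just without the surrounding tooling.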