jayantkhannadocplix1 opened this issue 1 year ago (status: Open)
When using `PeftModel.from_pretrained`, you need to specify two things: the base model onto which you want to load the adapters, and the adapter weights themselves. In this case (assuming you fine-tuned the model with LoRA), you are only passing the adapter weights, not the base model. See the documentation here: https://huggingface.co/docs/peft/package_reference/peft_model#peft.PeftModel.from_pretrained
Hello @hkonsti @justusmattern27
I've successfully fine-tuned the Llama2-13b-chat-hf model using Llamatune. The fine-tuning process went well, and I was able to fine-tune my model. However, when attempting to run the fine-tuned model using PEFT, I encountered the following error:
Here is the code I used to run the model:
It appears that the `from_pretrained` function in the `PeftModel` class expects a `model_id` argument that I'm not providing, and I'm unsure how to resolve this. Any insights or suggestions would be greatly appreciated.