Loading error with AutoPeftModelForCausalLM after training and saving with trl's DPOTrainer
After training with trl's DPOTrainer, I saved the model locally as shown below and then tried to load it with AutoPeftModelForCausalLM, but I got an error. When I load a locally saved checkpoint from SFTTrainer in the same way, I don't get an error. I have passed my token (which is valid) and tried a few other things, but I keep getting the same error:
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
The above exception was the direct cause of the following exception:
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>
os.listdir("./model/dpo_results/final_checkpoint")
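From the failing URL (https://huggingface.co/None/resolve/main/config.json) I suspect that base_model_name_or_path inside the checkpoint's adapter_config.json was saved as null, since AutoPeftModelForCausalLM reads that field to locate the base model. Here is a minimal, self-contained sketch of how I checked and patched that field; the simulated checkpoint directory and the patch value "facebook/opt-350m" are assumptions from my setup, not the real files:

```python
import json
import os
import tempfile

# Simulated checkpoint (an assumption, not my actual files): an
# adapter_config.json whose base_model_name_or_path is null, which would
# make AutoPeftModelForCausalLM try to fetch the repo "None".
checkpoint_dir = tempfile.mkdtemp()
config_path = os.path.join(checkpoint_dir, "adapter_config.json")
with open(config_path, "w") as f:
    json.dump({"peft_type": "LORA", "base_model_name_or_path": None}, f)

# Diagnose and patch: point the adapter back at the base model I trained on.
with open(config_path) as f:
    adapter_config = json.load(f)

if adapter_config.get("base_model_name_or_path") is None:
    adapter_config["base_model_name_or_path"] = "facebook/opt-350m"
    with open(config_path, "w") as f:
        json.dump(adapter_config, f, indent=2)

with open(config_path) as f:
    print(json.load(f)["base_model_name_or_path"])  # facebook/opt-350m
```

If the field really is null in my checkpoint, patching it like this (or passing the base model explicitly when loading) might be the workaround, but I'd like to understand why DPOTrainer saves it that way.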
Please help me if you know how to resolve this error. Thank you.
※ PS: I used "facebook/opt-350m" as the base_model, and I followed this guide faithfully: https://github.com/mzbac/llama2-fine-tune