Closed: jwtowner closed this issue 6 months ago
I had the same problem, so I solved it as follows. In quantize_save, line 174:

```python
lora_model.base_model.peft_config[
    "default"
].base_model_name_or_path = args.model_name_or_path  # modified
```
Thanks for pointing it out and providing the solutions. I have changed the code. It should be fine now.
Thanks! Closing the issue.
Hi, when running quantize_save.py, where it attempts to call `lora_model.save_pretrained(lora_model_dir)`, an OSError is now being thrown saying that the config.json file for the base model doesn't exist. I believe it should be a simple fix: have the script unwrap and save the base model and tokenizer first, moving the call to `lora_model.save_pretrained()` to the end of `quantize_and_save()`. I assume the latest version of peft requires that the LoRA's base model exist on disk so it can look up the configuration. I'm just not sure if it's okay to save the LoRA after unwrapping the base model, as it kind of changes the flow of the script? Thoughts? Thanks.
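To illustrate the ordering issue without downloading a model, here is a minimal, self-contained sketch. The two functions below are hypothetical stand-ins (not the real peft or transformers APIs) for `model.save_pretrained()` and `lora_model.save_pretrained()`; they only mimic the assumed behavior that saving the adapter requires the base model's config.json to already exist on disk:

```python
import json
import os
import tempfile

def save_base_model(base_dir):
    # Stand-in for base_model.save_pretrained(base_dir):
    # writes the base model's config.json to disk.
    os.makedirs(base_dir, exist_ok=True)
    with open(os.path.join(base_dir, "config.json"), "w") as f:
        json.dump({"model_type": "example"}, f)

def save_lora_adapter(lora_dir, base_dir):
    # Stand-in for lora_model.save_pretrained(lora_dir): assumes peft
    # looks up the base model's config on disk and raises OSError when
    # config.json is missing, as in the reported error.
    if not os.path.exists(os.path.join(base_dir, "config.json")):
        raise OSError(f"config.json not found in {base_dir}")
    os.makedirs(lora_dir, exist_ok=True)
    with open(os.path.join(lora_dir, "adapter_config.json"), "w") as f:
        json.dump({"base_model_name_or_path": base_dir}, f)

with tempfile.TemporaryDirectory() as tmp:
    base_dir = os.path.join(tmp, "base")
    lora_dir = os.path.join(tmp, "lora")

    # Saving the adapter before the base model fails:
    try:
        save_lora_adapter(lora_dir, base_dir)
    except OSError as e:
        print("adapter-first fails:", e)

    # Saving the base model first, then the adapter, succeeds:
    save_base_model(base_dir)
    save_lora_adapter(lora_dir, base_dir)
    print("adapter saved:", sorted(os.listdir(lora_dir)))
```

This mirrors the proposed reordering: save the unwrapped base model (and tokenizer) first, then call `lora_model.save_pretrained()` at the end of `quantize_and_save()`.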
Package Versions: