wuguangshuo opened this issue 1 year ago (Open)
Has anyone else run into this error: `ValueError: paged_adamw_32bit is not a valid OptimizerNames`?
This is a package-version problem; upgrade to the latest versions:
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
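If upgrading is not an option, you can gate the optimizer choice on the installed transformers version before training. A minimal sketch, assuming the paged optimizers landed around transformers 4.30.0 (the exact cutoff is an assumption, and `supports_paged_optim` is a hypothetical helper, not a transformers API):

```python
def supports_paged_optim(installed: str, minimum: str = "4.30.0") -> bool:
    """Return True if the installed transformers version (assumed cutoff
    4.30.0) is new enough to accept optim="paged_adamw_32bit"."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(installed) >= parse(minimum)

# Fall back to plain AdamW on older installs instead of crashing.
optim = "paged_adamw_32bit" if supports_paged_optim("4.29.2") else "adamw_torch"
print(optim)  # adamw_torch
```

In practice you would pass the real version string (e.g. `transformers.__version__`) instead of the hard-coded one here.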
A follow-up on the issues I raised earlier in the qlora repo. To summarize all the issues:
- lora weights are not saved correctly: comment out the following code

  ```python
  # if args.bits < 16:
  #     old_state_dict = model.state_dict
  #     model.state_dict = (
  #         lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
  #     ).__get__(model, type(model))
  ```
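For context, the commented-out lines monkey-patch the instance's `state_dict` so that saving only writes the adapter weights; newer peft versions make this unnecessary. The same pattern can be sketched with plain Python (`Model` and `get_peft_state_dict` are toy stand-ins, not the real qlora/peft code):

```python
class Model:
    def state_dict(self):
        # Full checkpoint: base weights plus adapter weights.
        return {"base.weight": 1, "lora.weight": 2}

def get_peft_state_dict(model, full):
    # Stand-in for peft's get_peft_model_state_dict: keep adapter keys only.
    return {k: v for k, v in full.items() if k.startswith("lora")}

m = Model()
old_state_dict = m.state_dict
# Rebind state_dict on this instance; __get__ turns the lambda into a
# bound method so callers see the filtered dict transparently.
m.state_dict = (
    lambda self, *_, **__: get_peft_state_dict(self, old_state_dict())
).__get__(m, Model)

print(m.state_dict())  # {'lora.weight': 2}
```

This shows why the patch caused trouble: every later consumer of `state_dict` silently gets only the adapter keys, which is exactly what the workaround above removes.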
- after upgrading peft (`pip install -U git+https://github.com/huggingface/peft.git`), the adapter can be merged into the base model with `model = model.merge_and_unload()`
- to load the trained adapter for inference:

  ```python
  from transformers import AutoModel
  from peft import PeftModel

  model = AutoModel.from_pretrained(args["model_dir"],
                                    trust_remote_code=True,
                                    load_in_4bit=True,
                                    device_map={"": 0})
  model = PeftModel.from_pretrained(model, args["save_dir"], trust_remote_code=True)
  # model.cuda().eval()  <- DO NOT ADD THIS: device_map already places the
  # 4-bit model on the GPU, and calling .cuda() on it causes problems
  ```
@wuguangshuo Did you manage to solve it?