Open · mj2688 opened this issue 12 months ago
I'm currently having the same problem. Are you using a well-known dataset (such as Alpaca) or a custom one? @mj2688 By the way, I noticed that this doesn't happen with few epochs.
I use the same dataset as in the example code (finetune.py):

`data_path: str = "yahma/alpaca-cleaned"`
I referred to this tutorial and deleted the torch.compile call in finetune.py, but it still doesn't work: https://github.com/huggingface/transformers/issues/27397
You can find the fix reported in this issue. This solved the InvalidHeaderDeserialization error for me.
Have you fixed this problem? I'm currently facing the same one.
Yes, I solved it. You have to comment out these lines in finetune.py. The reason is that there is currently an incompatibility between PyTorch and the PEFT library, as reported here.
Thanks, I also solved it!
Delete this code in finetune.py:

```python
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(
        self, old_state_dict()
    )
).__get__(model, type(model))

if torch.__version__ >= "2" and sys.platform != "win32":
    model = torch.compile(model)
```
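One likely reason commenting these lines out helps (an assumption on my part, not confirmed in this thread): torch.compile does not modify the model in place but wraps it in an OptimizedModule, so a state_dict override patched onto the original object can interact badly with the wrapper when the Trainer saves a checkpoint. A minimal sketch of the wrapping behavior:

```python
# Minimal sketch (not from finetune.py): torch.compile returns a wrapper
# module, and state_dict keys pick up a "_orig_mod." prefix.
import sys

import torch
import torch.nn as nn

model = nn.Linear(4, 4)
if torch.__version__ >= "2" and sys.platform != "win32":
    compiled = torch.compile(model)
    print(type(compiled).__name__)             # OptimizedModule
    print(compiled._orig_mod is model)         # True: the original module is inside
    print(next(iter(compiled.state_dict())))   # "_orig_mod.weight"
```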
But even after deleting those lines from finetune.py, I still get safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
Hi. I think the .safetensors file is not compatible with PEFT, so I deleted the xx.safetensors file and now it works.
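Before deleting anything, it may be worth probing whether the header of a given checkpoint file is actually readable. A minimal sketch, with a hypothetical path:

```python
# Minimal sketch: check whether a safetensors file has a valid header.
# The path is hypothetical; point it at your own checkpoint.
from safetensors import safe_open

path = "lora-alpaca/checkpoint-1000/adapter_model.safetensors"
try:
    with safe_open(path, framework="pt", device="cpu") as f:
        print("header OK, tensors:", list(f.keys()))
except Exception as exc:
    print("bad header:", exc)  # e.g. InvalidHeaderDeserialization
```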
Do you mean deleting the file after fine-tuning and then running it? Is this file adapter_model.safetensors?
Before fine-tuning, I deleted the same lines shown above (the model.state_dict override and the torch.compile call), and then it works.
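If deleting those lines alone doesn't fix it, another workaround to try (an assumption, not confirmed in this thread) is asking the Trainer not to use safetensors serialization at all, so checkpoints are written as .bin files:

```python
# Hedged sketch: make the HF Trainer write .bin checkpoints instead of
# .safetensors. The output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./lora-alpaca",
    save_safetensors=False,
)
```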
I've tried deleting those lines before, but it still doesn't work.
I'm having this same issue (details here: https://github.com/huggingface/transformers/issues/28742). Could anyone please help?
@MING8276 Would you mind telling me what files you deleted?
Did you solve this problem? I'm running into the same one.
When I fine-tune Llama-2-7B with LoRA, the following error occurs:

```
Traceback (most recent call last):
  File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 290, in <module>
    fire.Fire(train)
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 280, in train
    trainer.train()
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 1555, in train
    return inner_training_loop(
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 1965, in _inner_training_loop
    self._load_best_model()
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 2184, in _load_best_model
    model.load_adapter(self.state.best_model_checkpoint, model.active_adapter)
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/peft/peft_model.py", line 629, in load_adapter
    adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/peft/utils/save_and_load.py", line 222, in load_peft_weights
    adapters_weights = safe_load_file(filename, device=device)
  File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/safetensors/torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
```
And in checkpoint-1000, adapter_model.safetensors is saved in the .safetensors format. I checked the official fine-tuned weights, and they are in the adapter_model.bin format. Why is that?
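As far as I know (a hedged note, not confirmed in this thread), newer transformers/PEFT releases simply default to safetensors serialization, which is why recent runs produce adapter_model.safetensors while older published weights ship adapter_model.bin. A readable safetensors adapter can be converted to the older format like this:

```python
# Hedged sketch with hypothetical paths: convert a valid
# adapter_model.safetensors into the older adapter_model.bin format.
import torch
from safetensors.torch import load_file

state_dict = load_file("checkpoint-1000/adapter_model.safetensors")
torch.save(state_dict, "checkpoint-1000/adapter_model.bin")
```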