haofanwang / Lora-for-Diffusers

The easiest-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers 🔥
MIT License
739 stars 46 forks

SafetensorError: Error while deserializing header: HeaderTooLarge #9

Open ShaunXZ opened 1 year ago

ShaunXZ commented 1 year ago

Hi,

I am trying to convert a LoRA from safetensors format to .bin using the script in format_convert.py. The .bin file was generated successfully, but loading it always throws a HeaderTooLarge error. Could you please help? Thanks in advance!

(screenshot: SafetensorError: Error while deserializing header: HeaderTooLarge)

Below is the script that gives the above error. Env: google colab.

import torch
from diffusers import StableDiffusionPipeline
from format_convert import safetensors_to_bin  # script from this repo

# load the base diffusers model
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)

# convert
# you have to download a suitable safetensors file; not all are supported!
# download example from https://huggingface.co/SenY/LoRA/tree/main
# wget https://huggingface.co/SenY/LoRA/resolve/main/CheapCotton.safetensors
safetensor_path = "CheapCotton.safetensors"
bin_path = "CheapCotton.bin"
safetensors_to_bin(safetensor_path, bin_path)

# load it into the UNet
# note that diffusers' load_attn_procs only supports adding LoRA to attention layers;
# LoRA weights inserted elsewhere are not supported yet
pipeline.unet.load_attn_procs(bin_path)
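For anyone debugging this: a .safetensors file starts with an 8-byte little-endian length of its JSON header, so a corrupted or mislabeled download (e.g. an HTML error page saved by wget) declares an implausibly large header, which is exactly what triggers HeaderTooLarge. A minimal diagnostic sketch, assuming only that on-disk layout (the helper name and the size threshold are illustrative):

```python
import json
import struct

def check_safetensors_header(path):
    """Illustrative check: a valid .safetensors file begins with an 8-byte
    little-endian length of its JSON header. A corrupted or mislabeled file
    yields an implausible length, which safetensors reports as HeaderTooLarge."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        if header_len > 100 * 1024 * 1024:  # no real header is this large
            print(f"declared header length {header_len} is implausible; "
                  "the file is corrupt or not a safetensors file")
            return False
        header = json.loads(f.read(header_len))
        print(f"ok: {len(header)} entries in header")
        return True
```

Running this on the downloaded file shows quickly whether the file itself, rather than the conversion script, is the problem.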
Pirog17000 commented 1 year ago

Got the same issue

haofanwang commented 1 year ago

@ShaunXZ @Pirog17000 This issue may help: I ran into this problem before and solved it by re-downloading the model. Please check whether the base model and the safetensors file were downloaded correctly; if the file size is too small, the download is likely corrupted. By the way, can you share your Colab link so that I can take a look for you?

ShaunXZ commented 1 year ago

@haofanwang Thank you for your quick response. I double-checked the downloaded safetensors file and it seems to have the right size (over 100 MB). Here is the Colab used to test this script: https://colab.research.google.com/drive/12wFobWFL_NZ64fOV0gEYXePzZMZlRpr_?usp=sharing

Thanks,

Pirog17000 commented 1 year ago

My issue was resolved by updating diffusers. Since I run it locally, my steps were:

pip uninstall diffusers
pip install git+https://github.com/huggingface/diffusers.git

No reinstall or update flags helped; only a straightforward uninstall then install. No more issues, works well.

ShaunXZ commented 1 year ago

@Pirog17000 Hi, I tried your method in colab and it still didn't work... Could you take a look at the colab link above? Thank you!

tchanxx commented 1 year ago

+1 I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path)

sanbuphy commented 1 year ago

+1 +1 +1 I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path)

ksai2324 commented 1 year ago

According to issue3367, pipeline.unet.load_attn_procs() takes the path of the directory where the .bin file is stored, not the path to the .bin file itself. Changing the input from "CheapCotton.bin" to "/PathToWhereItsStored" solved this error for me.

JaosonMa commented 1 year ago

I hit this issue too. Here is how I worked around it, though I don't think my way is right.

First, this raised HeaderTooLarge, like yours:

lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/pytorch_model.bin"
pipe.unet.load_attn_procs(lora_model_path)

Then I passed the directory instead:

lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
pipe.unet.load_attn_procs(lora_model_path)

which failed with: no file pytorch_lora_weights.bin. So I copied the file:

cp pytorch_model.bin pytorch_lora_weights.bin

and ran the directory version again. Success! But why? why? why?

FrancisDacian commented 1 year ago

same issue here, any solution yet?

FrancisDacian commented 1 year ago

I think your solution is right:

  1. make a new folder, named whatever you want
  2. rename the new .bin file to pytorch_lora_weights.bin
  3. put pytorch_lora_weights.bin into the folder you just created
  4. call pipe.unet.load_attn_procs(new_folder_path)

and it will work.
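The steps above can be sketched in Python; the folder name and the staging helper below are examples, not part of the repo:

```python
import os
import shutil

def stage_lora_for_diffusers(bin_path, lora_dir="my_lora"):
    """Stage a converted LoRA .bin so diffusers can find it: load_attn_procs
    accepts a directory, in which case it looks for a file named
    pytorch_lora_weights.bin inside it (helper name is illustrative)."""
    os.makedirs(lora_dir, exist_ok=True)              # 1. make a new folder
    shutil.copy(bin_path,                             # 2./3. rename and move the .bin
                os.path.join(lora_dir, "pytorch_lora_weights.bin"))
    return lora_dir

# 4. then: pipeline.unet.load_attn_procs(stage_lora_for_diffusers("CheapCotton.bin"))
```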

kkwhale7 commented 10 months ago

however, I met this error:

File "/workspace/demo/Diffusion/models.py", line 301, in get_model
    return basic_unet.load_attn_procs(self.lora)
File "/usr/local/lib/python3.8/dist-packages/diffusers/loaders.py", line 234, in load_attn_procs
    rank = value_dict["to_k_lora.down.weight"].shape[0]
KeyError: 'to_k_lora.down.weight'
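That KeyError suggests the checkpoint's state-dict keys do not follow the attention-processor naming that load_attn_procs indexes into: the traceback shows it looking up "to_k_lora.down.weight", and the companion to_q/to_v/to_out names below are my assumption from diffusers' conventions. A quick check, with a hypothetical helper:

```python
def uses_attn_proc_naming(keys):
    """Hypothetical helper: report whether any state-dict key contains the
    to_*_lora substrings that diffusers' load_attn_procs looks up."""
    markers = ("to_q_lora", "to_k_lora", "to_v_lora", "to_out_lora")
    return any(m in key for key in keys for m in markers)

# usage sketch: keys = torch.load(lora_path, map_location="cpu").keys()
# if uses_attn_proc_naming(keys) is False, the file likely needs conversion
# (or a newer diffusers release that understands its key format)
```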