ShaunXZ opened 1 year ago
Got the same issue
@ShaunXZ @Pirog17000 This issue may help. I ran into the same problem before and solved it by re-downloading the model (please check whether the base model and the safetensors file were downloaded correctly; if the file size is too small, something went wrong). By the way, can you share your Colab link so that I can take a look for you?
@haofanwang Thank you for your quick response. I double-checked the downloaded safetensors file and it seems to have the right size (over 100 MB). Below is the Colab used to test this script: https://colab.research.google.com/drive/12wFobWFL_NZ64fOV0gEYXePzZMZlRpr_?usp=sharing
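For anyone else checking their download: HeaderTooLarge typically means the file on disk is not actually a safetensors file (for example, an HTML error page or a Git LFS pointer saved by a broken download). A minimal sanity check, as a sketch (the function name is mine; it just follows the safetensors layout of an 8-byte little-endian header length followed by a JSON header):

```python
import json
import os
import struct

def inspect_safetensors(path):
    """Rough sanity check for a .safetensors download: the format starts
    with an 8-byte little-endian header length, followed by that many
    bytes of JSON. A failed download (HTML page, Git LFS pointer file)
    breaks this layout and produces HeaderTooLarge on load."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        if header_len > size:
            return "bad: header length %d exceeds file size %d" % (header_len, size)
        try:
            json.loads(f.read(header_len))
        except ValueError:
            return "bad: header is not valid JSON"
    return "looks ok"
```

If this reports "bad", re-download the file before debugging anything else.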
Thanks,
My issue was resolved by updating diffusers. Since I run it locally, my steps were:
pip uninstall diffusers
pip install git+https://github.com/huggingface/diffusers.git
No reinstall or update flags helped; a straightforward uninstall and reinstall did. No more issues, works well.
@Pirog17000 Hi, I tried your method in colab and it still didn't work... Could you take a look at the colab link above? Thank you!
+1 I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path)
According to issue 3367, pipeline.unet.load_attn_procs() takes the path of the directory where the .bin file is stored, not the path to the .bin file itself. Changing the input from "CheapCotton.bin" to "/PathToWhereItsStored" solved this error for me.
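To make that concrete, a minimal sketch (the bin path is a placeholder; adjust to your setup): derive the containing directory with os.path.dirname and pass that instead of the file.

```python
import os

# Placeholder path to the converted LoRA file; adjust to your setup.
bin_path = "/content/CheapCotton.bin"

# load_attn_procs wants the directory that contains the .bin,
# not the .bin file itself.
lora_dir = os.path.dirname(bin_path)
print(lora_dir)  # → /content

# pipeline.unet.load_attn_procs(lora_dir)
```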
I met this issue too. Here is how I worked around it, though I don't think my way is necessarily right.

This raised the error:
lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/pytorch_model.bin"
pipe.unet.load_attn_procs(lora_model_path)
The error was HeaderTooLarge, like yours.

Then I changed it to the directory:
lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
pipe.unet.load_attn_procs(lora_model_path)
and got a new error: no file pytorch_lora_weights.bin.

So I copied the file:
cp pytorch_model.bin pytorch_lora_weights.bin
then ran the directory version again:
lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
pipe.unet.load_attn_procs(lora_model_path)
Success! But why? Why does it work this way?
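The copy step above can be scripted. A minimal sketch, assuming (as the comment above found) that load_attn_procs looks for pytorch_lora_weights.bin inside the checkpoint directory; the helper name is mine, not a diffusers API:

```python
import os
import shutil

def ensure_lora_weights_name(ckpt_dir):
    """If the checkpoint directory only has pytorch_model.bin, copy it
    to the pytorch_lora_weights.bin name that load_attn_procs expects."""
    src = os.path.join(ckpt_dir, "pytorch_model.bin")
    dst = os.path.join(ckpt_dir, "pytorch_lora_weights.bin")
    if os.path.exists(src) and not os.path.exists(dst):
        shutil.copy(src, dst)
    return dst

# Usage (path from the comment above):
# ckpt_dir = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
# ensure_lora_weights_name(ckpt_dir)
# pipe.unet.load_attn_procs(ckpt_dir)
```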
same issue here, any solution yet?
I think your solution is right and it will work. However, I met this error:
File "/workspace/demo/Diffusion/models.py", line 301, in get_model
    return basic_unet.load_attn_procs(self.lora)
File "/usr/local/lib/python3.8/dist-packages/diffusers/loaders.py", line 234, in load_attn_procs
    rank = value_dict["to_k_lora.down.weight"].shape[0]
KeyError: 'to_k_lora.down.weight'
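That KeyError suggests the keys in the saved state dict don't follow the naming the loader in the traceback indexes (it looks tensors up under names ending in "to_k_lora.down.weight", e.g. weights exported by a different LoRA trainer). A quick diagnostic, as a sketch (the check function is mine, not a diffusers API):

```python
def looks_like_attn_procs_dict(state_dict):
    """Heuristic: the diffusers loader in the traceback above indexes keys
    ending in 'to_k_lora.down.weight'; if no key contains that suffix,
    load_attn_procs will raise the same KeyError."""
    return any("to_k_lora.down.weight" in key for key in state_dict)

# To inspect a real file (requires torch):
# state_dict = torch.load("pytorch_lora_weights.bin", map_location="cpu")
# print(sorted(state_dict)[:10], looks_like_attn_procs_dict(state_dict))
```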
Hi,
I am trying to convert a LoRA from safetensors format to .bin using the script in format_convert.py. The .bin file is generated successfully, but it always throws a HeaderTooLarge error when loading it. Could you please help? Thanks in advance!
Below is the script that gives the above error. Env: Google Colab.