Closed: athenawisdoms closed this issue 1 year ago
Tested on Colab.
diffusers - installed from git+
transformers - installed from git+
safetensors==0.3.1
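For reference, a Colab setup cell matching that environment might look like the following. The repo URLs are my assumption; the post only says the libraries were installed "from git+".

!pip install git+https://github.com/huggingface/diffusers.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install safetensors==0.3.1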
import torch
import safetensors.torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5',
    torch_dtype=torch.float16
).to("cuda")

pt_state_dict = safetensors.torch.load_file(
    "/content/epi_noiseoffset2.safetensors", device="cpu"
)
torch.save(pt_state_dict, "pt_state_dict.bin")  # OK

pipe.unet.load_attn_procs("/content/pt_state_dict.bin")  # NOT OK, raises the error below
---------------------------------------------------------------------------
SafetensorError                           Traceback (most recent call last)
<ipython-input-25-6d3e73336a70> in <cell line: 1>()
----> 1 pipe.unet.load_attn_procs("/content/pt_state_dict.bin")

/usr/local/lib/python3.10/dist-packages/safetensors/torch.py in load_file(filename, device)
    257     """
    258     result = {}
--> 259     with safe_open(filename, framework="pt", device=device) as f:
    260         for k in f.keys():
    261             result[k] = f.get_tensor(k)

SafetensorError: Error while deserializing header: HeaderTooLarge
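For what it's worth, the HeaderTooLarge message itself is easy to explain (my reading, not stated in the thread): a safetensors file starts with an 8-byte little-endian header length followed by a JSON header, while a file written by torch.save is a zip/pickle archive, so parsing it as safetensors yields a nonsensical header length. A minimal sketch to see this:

import struct

def claimed_header_length(path):
    # First 8 bytes of a safetensors file: u64 little-endian length of the JSON header.
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
    return n

print(claimed_header_length("/content/epi_noiseoffset2.safetensors"))  # small, plausible number
print(claimed_header_length("pt_state_dict.bin"))                      # huge bogus value -> HeaderTooLarge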
Maybe you can try this script, though I'm not sure it does exactly what you want: convert_lora_safetensor_to_diffusers.py
!python /content/diffusers/scripts/convert_lora_safetensor_to_diffusers.py \
--base_model_path 'runwayml/stable-diffusion-v1-5' \
--checkpoint_path /content/epi_noiseoffset2.safetensors \
--dump_path epi_noiseoffset_diffusers
# Pipeline built from the converted (LoRA-merged) weights.
pipe1 = StableDiffusionPipeline.from_pretrained(
    '/content/epi_noiseoffset_diffusers',
    torch_dtype=torch.float16
).to("cuda")

# Base pipeline for comparison.
pipe2 = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5',
    torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Re-seed before each call so both pipelines start from the same noise.
generator = torch.Generator(device="cuda").manual_seed(0)
image1 = pipe1(prompt, generator=generator).images[0]  # OK

generator = torch.Generator(device="cuda").manual_seed(0)
image2 = pipe2(prompt, generator=generator).images[0]  # OK
If you're simply trying to use these LoRAs with diffusers, you can repurpose the convert function in convert_lora_safetensor_to_diffusers.py to apply the .safetensors LoRA at runtime instead of converting it up front. That lets you use the LoRA with any base model, with the alpha set at runtime; see the sketch below.
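A rough sketch of that idea, assuming the convert() helper in the script takes the base model path, the checkpoint path, the two LoRA key prefixes, and an alpha, and returns the merged pipeline. The prefixes and alpha shown are what I believe are the script's defaults, so double-check against your copy of the script:

from convert_lora_safetensor_to_diffusers import convert  # assumes the script is on sys.path

pipe = convert(
    "runwayml/stable-diffusion-v1-5",         # base model
    "/content/epi_noiseoffset2.safetensors",  # LoRA checkpoint
    "lora_unet",                              # UNet key prefix (assumed default)
    "lora_te",                                # text-encoder key prefix (assumed default)
    0.75,                                     # alpha, chosen at runtime
)
pipe = pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]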
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
import safetensors
import torch
pt_state_dict = safetensors.torch.load_file(lora_model_path, device="gpu")
torch.save(pt_state_dict, "pt_state_dict.bin")
AttributeError                            Traceback (most recent call last)
<ipython-input-3-a1c5b44cd53b> in <cell line: 4>()
      2 import torch
      3
----> 4 pt_state_dict = safetensors.torch.load_file(lora_model_path, device="gpu")
      5 torch.save(pt_state_dict, "pt_state_dict.bin")

AttributeError: module 'safetensors' has no attribute 'torch'
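That AttributeError is just a missing submodule import: `import safetensors` does not automatically import `safetensors.torch`. A small fix sketch (the path is a placeholder; also note that "gpu" is not a valid device string, use "cpu" or "cuda"):

import safetensors.torch  # importing the submodule explicitly avoids the AttributeError
import torch

lora_model_path = "/content/epi_noiseoffset2.safetensors"  # placeholder path for illustration
pt_state_dict = safetensors.torch.load_file(lora_model_path, device="cpu")
torch.save(pt_state_dict, "pt_state_dict.bin")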
Hey athenawisdoms! I can't tell if that's your real name, but I wanted to thank you. You posting your problem helped me solve an issue I had been facing for the past two weeks, no exaggeration. I was sleepless and can now finally get some well-deserved z's. Jesus Christ, what a roller coaster these past two weeks were. Thanks!
I ran into the same problem; here is the solution (ref: https://huggingface.co/docs/safetensors/api/torch):
from safetensors.torch import load_file
import torch

lora_model_path = 'adapter_model.safetensors'
bin_model_path = 'adapter_model.bin'

# Load the safetensors state dict and re-save it in PyTorch's .bin format.
torch.save(load_file(lora_model_path), bin_model_path)
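As a quick sanity check (my addition, not from the thread), the resulting .bin should load back with torch.load and contain the same tensor keys:

import torch

state_dict = torch.load(bin_model_path, map_location="cpu")
print(len(state_dict), "tensors; first key:", next(iter(state_dict)))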
I am trying to convert a LoRA .safetensors file into a .bin so it can be used in the StableDiffusionPipeline, as in the example at the top of this issue. I tried converting the safetensors file to .bin by loading it with safetensors.torch.load_file and re-saving it with torch.save, but it throws an error. What did I do wrong? Is there a better way of doing this conversion, or a Hugging Face Space that provides this function?