Closed rupeshs closed 9 months ago
Please download the weights from here first: latent-consistency/lcm-lora-sdv1-5. Put that in a directory and then run your code:
```python
from diffusers import DiffusionPipeline, LCMScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    local_files_only=True,
)
pipeline.load_lora_weights("your-dir-where-the-weights-are-located")
```
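As a toy illustration (my own sketch, not diffusers internals), loading from a local directory only requires the weight file to be present in the folder you point at; no network is involved:

```python
import os
import tempfile

def find_lora_weight(directory, weight_name="pytorch_lora_weights.safetensors"):
    """Toy sketch: a local-directory load just needs the weight file
    to exist inside the given folder; nothing is fetched remotely."""
    candidate = os.path.join(directory, weight_name)
    if not os.path.isfile(candidate):
        raise FileNotFoundError(f"{weight_name} not found in {directory}")
    return candidate

# Demo with a temporary directory standing in for the local weights folder.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "pytorch_lora_weights.safetensors"), "wb").close()
    print(find_lora_weight(d))
```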
@sayakpaul This model is already cached but it's still not working. It seems like lora_state_dict is not honoring the local_files_only argument.
Tried this one also: pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", local_files_only=True)
Is the model cached in latent-consistency/lcm-lora-sdv1-5?
yes
When you specify latent-consistency/lcm-lora-sdv1-5, the internal norm is to look for that location on the Hugging Face Hub.
This is why I think you should:
1. Download latent-consistency/lcm-lora-sdv1-5 to a local directory, e.g. lora-lcm.
2. Run pipeline.load_lora_weights("lora-lcm").
@sayakpaul Then I'm wondering: what is the purpose of the local_files_only argument in the load_lora_weights function? It should load the weights from the cache folder (not ones downloaded manually), right?
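For clarity, here is a toy sketch of the resolution order local_files_only implies as I understand it (my own illustration, not the diffusers implementation): a real local path wins, then the cache is consulted, and only after that would a download be attempted:

```python
import os

def resolve_weights(path_or_repo_id, cache, local_files_only=False):
    """Toy sketch of offline-aware weight resolution."""
    # A real local directory or file takes precedence.
    if os.path.exists(path_or_repo_id):
        return path_or_repo_id
    # Otherwise consult the (toy) cache, keyed by repo id.
    if path_or_repo_id in cache:
        return cache[path_or_repo_id]
    # Only if nothing is found locally may a network download happen.
    if local_files_only:
        raise FileNotFoundError(f"{path_or_repo_id} not found locally")
    return "download://" + path_or_repo_id  # stand-in for a Hub download
```

Under these semantics, a repo id that is already in the cache should resolve fine with local_files_only=True, which is what the reported bug is about.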
Please note that DiffusionPipeline is working fine with the local_files_only argument and it can load already-cached weights. It seems like there is some problem with load_lora_weights.
Hmm, I am unable to reproduce this.
This is what I did.
I downloaded a LoRA checkpoint like so (with internet turned on):
```python
from huggingface_hub import hf_hub_download

repo_id = "sayakpaul/new-lora-check-v15"
lora_id = "pytorch_lora_weights.safetensors"
ckpt_path = hf_hub_download(repo_id=repo_id, filename=lora_id)
```
I then turned my internet off and ran:
```python
from huggingface_hub import hf_hub_download

repo_id = "sayakpaul/new-lora-check-v15"
lora_id = "pytorch_lora_weights.safetensors"
ckpt_path = hf_hub_download(repo_id=repo_id, filename=lora_id, local_files_only=True)
```
It worked fine.
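As a side note on where the cached checkpoint lives: below is the default Hub cache path as I understand it (the environment-variable overrides are my assumption, not something verified in this thread):

```python
import os

# Hedged note (my understanding): by default hf_hub_download places files
# under this cache directory, overridable via HF_HOME / HF_HUB_CACHE.
default_cache = os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")
print(default_cache)
```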
I am showing hf_hub_download because that is what we use inside of load_lora_weights(). Relevant call sites (ordered):
- https://github.com/huggingface/diffusers/blob/2a111bc9febb6121bc270830c0afa302b3337490/src/diffusers/loaders/lora.py#L105
- https://github.com/huggingface/diffusers/blob/2a111bc9febb6121bc270830c0afa302b3337490/src/diffusers/loaders/lora.py#L234
- https://github.com/huggingface/diffusers/blob/2a111bc9febb6121bc270830c0afa302b3337490/src/diffusers/utils/hub_utils.py#L283

Cc: @Wauplin. Anything I am missing here?
@sayakpaul I just integrated diffusers with FastSD CPU; while testing, I found this issue.
Current status of offline workflows with FastSD CPU:
- LCM - Working (Diffusion pipeline)
- LCM LoRA - Not working (Diffusion pipeline + load LoRA)
- LCM OpenVINO - Working (OV pipeline)
Refer: https://github.com/rupeshs/fastsdcpu/blob/main/src/backend/pipelines/lcm_lora.py
Can you try with exact code I have attached with issue?
> Can you try with exact code I have attached with issue?
Yeah, tried with local_files_only set to True for load_lora_weights(). Didn't work without an internet connection.
I haven't investigated more, but it looks like a duplicate of #6089, no?
Maybe try:

```python
kwargs = {"local_files_only": True, "weight_name": "pytorch_lora_weights.safetensors"}
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", **kwargs)
```
This should pick up a previously downloaded LCM-LoRA from the local disk Hub cache while being offline, i.e. with HF_HUB_OFFLINE set or guarded sockets (HF_HUB_OFFLINE is ignored here and there, ahem). At least it works here. Note the argument name is weight_name, not filename.
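A toy sketch of why weight_name matters (my own illustration; the helper is hypothetical, not diffusers code): when a repo snapshot holds several weight files, an explicit name removes the ambiguity that offline resolution cannot otherwise settle:

```python
def pick_weight(files, weight_name=None):
    """Choose which weight file to load from a list of candidates."""
    if weight_name is not None:
        if weight_name not in files:
            raise FileNotFoundError(weight_name)
        return weight_name
    safetensors = [f for f in files if f.endswith(".safetensors")]
    if len(safetensors) == 1:
        return safetensors[0]
    raise ValueError("ambiguous checkpoint; pass weight_name explicitly")

# With an explicit weight_name the choice is deterministic even offline.
print(pick_weight(
    ["pytorch_lora_weights.safetensors", "pytorch_lora_weights.bin"],
    weight_name="pytorch_lora_weights.safetensors",
))
```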
> HF_HUB_OFFLINE is ignored here and there, ahem
Yes indeed. A PR to fix this is in progress: https://github.com/huggingface/huggingface_hub/pull/1899. This way it ensures any calls are explicitly blocked.
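The idea of explicitly blocking calls can be sketched as a decorator (a toy illustration of the concept, not the actual huggingface_hub patch):

```python
import functools
import os

def raise_if_offline(fn):
    """Block a network-bound call outright when HF_HUB_OFFLINE is set,
    instead of letting it fail later with a confusing connection error."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if os.environ.get("HF_HUB_OFFLINE") == "1":
            raise OSError(
                f"Offline mode is enabled (HF_HUB_OFFLINE=1); "
                f"blocked call to {fn.__name__}"
            )
        return fn(*args, **kwargs)
    return wrapper

@raise_if_offline
def fake_download(repo_id):
    # Hypothetical stand-in for a real Hub download.
    return f"downloaded {repo_id}"
```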
@rupeshs could you update your installations of huggingface_hub and diffusers to be from source (i.e., install them from the main branch) and see if the messages you're seeing are better and help you resolve the problem?
@sayakpaul yes tried.
That is exactly expected here. You must specify the weight name as indicated in the error message.
Thanks for the proper error message.
Feel free to close the issue if you feel like so.
Thanks @sayakpaul for the great support.
Describe the bug

The load_lora_weights is not working offline.

Reproduction

Sample code to reproduce this issue. (Turn off the internet)

Logs

System Info

diffusers version: 0.23.0

Who can help?

@sayakpaul @patrickvonplaten