Closed 1blackbar closed 1 year ago
You can run it locally.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('path to weights on your computer')
pipe = pipe.to("cuda")
Hey @1blackbar,
Thanks for the issue - it shows that we're not doing a good job at advertising that people can load weights locally. Is there a place where you think we should have it mentioned better?
Also cc @osanseviero @pcuenca @patil-suraj @anton-l
I mean, you should be able to just type in your tokenized access code, if you really want people to use it. It just doesn't work under conda; it works in the Windows terminal, but I don't want to run it under that. A local script should not use external weights as the default, let alone with access codes. Yes, sometimes devs do that to help people obtain weights faster, but come on: I can download the file and put it in a folder, but I can't fix your code for getting tokenized access codes, therefore I'm stuck unless I know some AI basics, and I think I'm being forced to... With that being said, the code is still wonderful. We have a revolution here in the art/visual world because of SD and I'm grateful, don't get me wrong.
You can run it locally.
pipe = StableDiffusionPipeline.from_pretrained('path to weights on your computer')
pipe = pipe.to("cuda")
this in readme would save A LOT of time for a lot of people
We need to make this indeed even clearer in the README then! @pcuenca @anton-l @patil-suraj - ideas what we can do here?
It's written in Quickstart: https://huggingface.co/docs/diffusers/quicktour and some other places - but I'm wondering if we should make it the "default" way of using diffusers, by putting it first in most readmes. Wdyt?
I think it's a bit too much to put it first, as most people that visit the github project won't have the weights. I'd maybe do the following in the README file:
mps), as there has been some confusion about whether or not to use autocast in that case. If this sounds ok I can open a PR to propose some specific wording and examples.
I am also stuck on maybe a related issue running locally... is it the case that only local paths in the huggingface cache are valid for from_pretrained?
pretrained_model_name_or_path = "/home/ubuntu/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(
    pretrained_model_name_or_path,
    local_files_only=True,
)
generates
No such file or directory: '/root/.cache/huggingface/diffusers/models----home--ubuntu--stable-diffusion-v1-4/refs/main'
Maybe I am missing something simple - the documentation has
Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a firewalled environment.
but the link is not working. Any help appreciated.
Edit: this is more related to another thread (https://github.com/huggingface/diffusers/issues/150) - it appears this is impossible unless the model was saved using the official method.
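For what it's worth, the mangled folder name in the error above is recognisable: the local path is being flattened into a hub-style cache key, i.e. it is being treated as a repo id rather than a directory. A rough illustrative sketch of that flattening pattern (an assumption about the scheme, not the actual huggingface_hub code):

```python
def repo_id_to_cache_dirname(repo_id: str) -> str:
    # Illustrative only: hub repo ids are flattened into cache folder
    # names by replacing "/" with "--" and prefixing "models".
    # Feeding a local filesystem path through the same scheme yields
    # exactly the odd folder name from the error message above.
    return "models--" + repo_id.replace("/", "--")

print(repo_id_to_cache_dirname("/home/ubuntu/stable-diffusion-v1-4"))
# -> models----home--ubuntu--stable-diffusion-v1-4
```

The quadruple dash after "models" is the giveaway: the leading "/" of the absolute path became an extra "--".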
Thanks for the issue @sjkoelle! I think the problem is the following: your directory "/home/ubuntu/stable-diffusion-v1-4" doesn't exist.
What does this line give you:
import os
print(os.path.isfile("/home/ubuntu/stable-diffusion-v1-4"))
If you pass a path to a local checkpoint, you don't need to set local_files_only=True, because this if statement should be True:
https://github.com/huggingface/diffusers/blob/83a7bb2aba2d897ab95d96fb03eda10a624080e7/src/diffusers/pipeline_utils.py#L287
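The linked check boils down to an isdir test before any hub lookup. A simplified stand-in for that branching (a sketch of the logic, not the real diffusers code):

```python
import os

def resolve_checkpoint_source(pretrained_model_name_or_path: str) -> str:
    # Sketch of the decision in pipeline_utils: an existing directory is
    # loaded straight from disk (local_files_only is irrelevant there);
    # anything else is treated as a hub repo id and resolved through the
    # cache. Note that a single .ckpt FILE is not a directory, so it
    # falls into the hub branch -- matching the error reported above.
    if os.path.isdir(pretrained_model_name_or_path):
        return "local"
    return "hub"
```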
To prevent running around in circles on this issue: this problem will remain unresolved until there is an option to load a custom model / path (ckpt, pt, etc.) into the Diffusers pipeline.
As it stands currently, if you don't download it the HF way, you'll have to do some digging to get it working. You can run it locally, but it involves messing with blobs in the .cache directory. However, this may introduce edge cases due to how Diffusers works, so it may be best to use the original implementation until this is considered.
In my opinion, while the Diffusers pipeline is great, the heavy streamlining creates too much friction. There should be more than one way to load things into the pipeline, rather than waiting for a community-based implementation from the original maintainers (which may or may not happen).
Hmmm, yes, it is a file, not a folder, so false. Thanks for the quick response, and also to @ExponentialML, whose comment addresses the question behind the question.
Hey @ExponentialML,
Thanks a lot for the feedback here - I think we haven't done a great job at showing how to easily download this model and run it locally. It's literally as easy as doing:
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
Followed by:
generator = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
-> no need for an authentication token or cache whatsoever. It's also explained here: https://huggingface.co/docs/diffusers/quicktour
Given that you're not the first to mention this problem, I'll open an issue now about providing better documentation.
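Before calling from_pretrained on the cloned folder, a quick sanity check helps distinguish a diffusers-format checkpoint from a stray .ckpt file or an empty clone. A stdlib-only sketch (assuming the model_index.json marker that diffusers pipeline folders ship with):

```python
import os

def looks_like_diffusers_checkpoint(path: str) -> bool:
    # A diffusers pipeline folder carries a model_index.json at its
    # root; a raw .ckpt file (or an unrelated directory) fails this
    # check, which is the usual cause of "works from the hub, fails
    # locally" reports like the ones in this thread.
    return os.path.isdir(path) and os.path.isfile(
        os.path.join(path, "model_index.json")
    )
```

If this returns True for "./stable-diffusion-v1-4", the from_pretrained call above should load it without any token.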
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Describe the bug
this crap https://discuss.huggingface.co/t/how-to-login-to-huggingface-hub-with-access-token/22498/5
So... just let us use our own path to our folders and ckpt file, ok? Like a local thing, you know?
Reproduction
try it on miniconda, good luck !!!
Logs
System Info
win10 miniconda ldm env