AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Automatic1111 downloading a 4 GB file everytime I try to load the SD2.1 model #7190

Open z10t10 opened 1 year ago

z10t10 commented 1 year ago

Is there an existing issue for this?

What happened?

Every time I try to load the Stable Diffusion 2.1 ema-pruned model, a 4 GB file is downloaded, and I can't run A1111 offline. I've already moved the .cache folder to C:\ but nothing changed.

Steps to reproduce the problem

  1. Go to the A1111 models tab
  2. Load the SD 2.1 ema-pruned model
  3. The A1111 launcher starts downloading a 3.94 GB file
  4. I have to wait for the download to finish before I can start running SD

What should have happened?

Since all required files, modules, and dependencies are already downloaded, the model should load without fetching any extra files.

Commit where the problem happens

webui-user

What platforms do you use to access UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

venv "C:\Users\Tariq\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 4af3ca5393151d61363c30eef4965e694eeac15e
Installing requirements for Web UI
loading Smart Crop reqs from C:\Users\Tariq\stable-diffusion-webui\extensions\sd_smartprocess\requirements.txt
Checking Smart Crop requirements.

Launching Web UI with arguments: --xformers
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [625a2ba2] from C:\Users\Tariq\stable-diffusion-webui\models\Stable-diffusion\Anything-V3.0-pruned-fp32.safetensors
Applying xformers cross attention optimization.
Model loaded.
Loaded a total of 26 textual inversion embeddings.
Embeddings: AnalogFilm768, CharTurner, Cinema768-Digital, dblx, EMB_sksmakimatest, FloralMarble, InkPunk768, learned_embeds, mid512, midjourney, midjourney2, midjourney768, mj768, Neg_Facelift768, picoftaraj, pixelart-1, PlanIt, SCG768-Euphoria, SDA768, TungstenDispo, WEBUI
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Loading config from: C:\Users\Tariq\stable-diffusion-webui\models\Stable-diffusion\v2-1_768-ema-pruned.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Downloading: 100%|████████████████████████████████████████████████████████████████| 3.94G/3.94G [04:56<00:00, 13.3MB/s]
Loading weights [4bdfc29c] from C:\Users\Tariq\stable-diffusion-webui\models\Stable-diffusion\v2-1_768-ema-pruned.ckpt
Applying xformers cross attention optimization.
Model loaded.

Additional information, context and logs

The model also takes a long time to load even after the 3.94 GB file is downloaded (about 5 extra minutes). I'm running it on Windows 11, RTX 2060, Intel Core i7-10750H, Samsung MZVLB512HBJQ-00000 (supposed to be a fast SSD).

I've looked online for solutions; the only thing I found was to move the .cache folder to C:\, and that didn't work for me.
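For reference, the cache that matters for this download is huggingface_hub's hub cache, which honors the `HF_HOME` environment variable (the default is `~/.cache/huggingface`). A minimal sketch of where it resolves, assuming default huggingface_hub behavior; `hf_hub_cache_dir` is a hypothetical helper, not a library function:

```python
import os
from pathlib import Path

def hf_hub_cache_dir() -> Path:
    """Hypothetical helper: where huggingface_hub caches downloads.
    Mirrors the documented default of ~/.cache/huggingface/hub,
    overridable via the real HF_HOME environment variable."""
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home) / "hub"
    return Path.home() / ".cache" / "huggingface" / "hub"

# Example: point the cache at a drive with free space *before* launching
# the web UI, so downloads land (and persist) where you expect:
# os.environ["HF_HOME"] = r"D:\hf-cache"
```

Moving the folder after the fact doesn't help unless the environment variable also points at the new location, which may be why the C:\ move changed nothing.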

unheilbargut commented 1 year ago

Same problem here. But it extends to all 2.x models.

FatherGeoffHorton commented 1 year ago

Me three. I looked through launch.py and webui.py, but I can't find where it's doing any version checks.

z10t10 commented 1 year ago

> Me three. I looked through launch.py and webui.py, but I can't find where it's doing any version checks.

> Same problem here. But it extends to all 2.x models.

Are you also running it on Windows 11?

FatherGeoffHorton commented 1 year ago

Yes.

econundrum commented 1 year ago

I'm running Windows 11 and it doesn't happen for me, although I start CPU-only for 2.1 because my GPU has only 4 GB of VRAM (fine for 1.5, but not for 2.1 models).

Try opening on CPU only and see if it still does it.

muerrilla commented 1 year ago

> Me three. I looked through launch.py and webui.py, but I can't find where it's doing any version checks.

> Same problem here. But it extends to all 2.x models.

> Are you also running it on Windows 11?

Having the same problem with the v2 depth model, on Windows 10.

aboodvan commented 1 year ago

Same here.

It'll download from the internet every time.

I believe any model with a yaml file present triggers the download; at least it does for me.

muerrilla commented 1 year ago

OK, so after going insane searching my computer for a 3.94 GB file and not finding one, I realized that if I broke the code execution mid-download I might get a hint. The download is triggered upon the creation of open-clip-based models, which I think is what makes it specific to v2 models, and the file in question is an open_clip_pytorch_model.bin, though I still can't find such a file anywhere even after it is downloaded every time. Here's the full error log if anyone wants to examine this further:

File "e:\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
File "e:\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
File "e:\stable-diffusion-webui\webui.py", line 133, in webui
    initialize()
File "e:\stable-diffusion-webui\webui.py", line 63, in initialize
    modules.sd_models.load_model()
File "E:\stable-diffusion-webui\modules\sd_models.py", line 312, in load_model
    sd_model = instantiate_from_config(sd_config.model)
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1688, in __init__
    super().__init__(concat_keys=concat_keys, *args, **kwargs)
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1509, in __init__
    super().__init__(*args, **kwargs)
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 147, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
File "e:\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 201, in create_model_and_transforms
    model = create_model(
File "e:\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 159, in create_model
    checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
File "e:\stable-diffusion-webui\venv\lib\site-packages\open_clip\pretrained.py", line 312, in download_pretrained
    target = download_pretrained_from_hf(model_id, cache_dir=cache_dir)
File "e:\stable-diffusion-webui\venv\lib\site-packages\open_clip\pretrained.py", line 282, in download_pretrained_from_hf
    cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
File "e:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1226, in hf_hub_download
    http_get(
File "e:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 490, in http_get
    for chunk in r.iter_content(chunk_size=1024):
File "e:\stable-diffusion-webui\venv\lib\site-packages\requests\models.py", line 753, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
File "e:\stable-diffusion-webui\venv\lib\site-packages\urllib3\response.py", line 627, in stream
    data = self.read(amt=amt, decode_content=decode_content)
File "e:\stable-diffusion-webui\venv\lib\site-packages\urllib3\response.py", line 566, in read
    data = self._fp_read(amt) if not fp_closed else b""
File "e:\stable-diffusion-webui\venv\lib\site-packages\urllib3\response.py", line 532, in _fp_read
    return self._fp.read(amt) if amt is not None else self._fp.read()
File "C:\Users\XXXX\anaconda3\envs\SDwebui\lib\http\client.py", line 465, in read
    s = self.fp.read(amt)
File "C:\Users\XXXX\anaconda3\envs\SDwebui\lib\socket.py", line 705, in readinto
    return self._sock.recv_into(b)
File "C:\Users\XXXX\anaconda3\envs\SDwebui\lib\ssl.py", line 1274, in recv_into
    return self.read(nbytes, buffer)
File "C:\Users\XXXX\anaconda3\envs\SDwebui\lib\ssl.py", line 1130, in read
    return self._sslobj.read(len, buffer)
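To make the traceback concrete: `open_clip.create_model_and_transforms` treats its `pretrained` argument either as a path to a local checkpoint or as a named tag to fetch from the Hugging Face hub. A stdlib-only sketch of that dispatch (illustrative, not open_clip's actual code; the tag name is the one SD 2.x configs typically pass):

```python
from pathlib import Path

def resolve_pretrained(pretrained: str) -> str:
    """Illustrative stand-in for open_clip's pretrained handling:
    a path to an existing file is loaded locally; anything else is
    treated as a hosted tag and triggers a hub download."""
    if Path(pretrained).is_file():
        return "load local checkpoint"
    return "download from Hugging Face hub"

# The v2 yaml configs hand open_clip a tag like "laion2b_s32b_b79k",
# so a cache miss means a fresh ~3.94 GB download at model-load time.
```

That is why the download recurs whenever the cache ends up somewhere the hub library doesn't look.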

muerrilla commented 1 year ago

OK, so I think this is the exact same issue as #5108.
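If it is the same root cause, one workaround sketch is to pre-download the weights once with the real `huggingface_hub` API and then force offline mode so later launches can only hit the cache. Hedged assumptions: the repo and file names below are the OpenCLIP ViT-H/14 checkpoint that SD 2.x configs usually reference; verify them against your own yaml before relying on this.

```python
import os

def seed_openclip_cache() -> str:
    # Deferred import so this sketch loads without huggingface_hub installed.
    from huggingface_hub import hf_hub_download  # real API
    # Assumed repo/file names (check your v2-1_768-ema-pruned.yaml):
    return hf_hub_download(
        "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
        "open_clip_pytorch_model.bin",
    )

# After a successful seed, forbid further network fetches; huggingface_hub
# honors this environment variable and raises instead of re-downloading.
os.environ["HF_HUB_OFFLINE"] = "1"
```

With the cache seeded and `HF_HUB_OFFLINE=1` set, a repeat download would surface as an immediate error instead of a silent 4 GB fetch, which also makes the bug easier to pin down.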

ssokolow commented 6 months ago

Should I open a separate issue for the following?

I have to wait a literal minute for webui.sh to re-download isnetis.onnx over my 25 Mbit connection every time I start it (and that's just that one step). Given there's also some kind of CUDA memory leak that forces me to restart automatic1111 every few dozen images (is there a way to flush CUDA memory without doing that?) on my 3060 12GB, that's a lot of wasted time and bandwidth.
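On the side question about flushing CUDA memory: torch does expose `torch.cuda.empty_cache()` and `torch.cuda.ipc_collect()`, which return cached allocations to the driver. They won't fix a true leak (tensors still referenced from Python stay allocated), but they often recover fragmented VRAM between generations. A minimal sketch:

```python
import gc

def flush_cuda_memory() -> None:
    """Best-effort VRAM flush between generations (a sketch, not a leak
    fix: tensors still referenced by Python objects cannot be released)."""
    gc.collect()  # drop unreferenced Python-side tensors first
    import torch  # deferred so the sketch imports without torch installed
    if torch.cuda.is_available():
        torch.cuda.empty_cache()   # return cached blocks to the driver
        torch.cuda.ipc_collect()   # reclaim IPC-shared memory, if any
```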

Launching Web UI with arguments:
WARNING:matplotlib.font_manager:Matplotlib is building the font cache; this may take a moment.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
isnetis.onnx: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 176M/176M [01:01<00:00, 2.85MB/s]
Loading weights [5dd07a46d7] from [REDACTED]
Creating model from config: [REDACTED]
vocab.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 961k/961k [00:00<00:00, 3.02MB/s]
merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 3.02MB/s]
special_tokens_map.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 389/389 [00:00<00:00, 1.56MB/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 4.45MB/s]
config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.52k/4.52k [00:00<00:00, 5.59MB/s]
Loading VAE weights specified in settings: [REDACTED]

(I know it's something to do with automatic1111 because, when I don't need LoCon support, I much prefer easy-diffusion's UI design and it doesn't suffer from CUDA memory leaks.)

At the moment, I'm trying to figure out why Firejail's documentation has no simple answer for firewalling an application off from everything except 127.0.0.1, so I can work around the problem.

Never mind. After waiting for grep to run through the entirety of my automatic1111 install, I traced it to an extension.