Open Secrios opened 2 years ago
Same issue, much slower than the old version.
Same issue here, much slower than the older version.
Same issue, much slower to load. Intel 6700K, 32GB DDR4 RAM, RTX 3080 10GB.
It used to take 15-30 seconds to load; now it takes 2+ minutes.
It takes 1 minute 18 seconds to load from start... it used to take about 15 seconds... What is happening?
Solved it! At least on my machine. Hope it works for the rest of you.
My fix was to delete the venv folder and let the launch script automatically rebuild it. The rebuild is slow since it has to download and process a few gigabytes of files. You may want to rename the old venv folder instead of deleting it, just in case something goes wrong with the rebuild.
I ran a profiler and found it was spending a couple of minutes on socket connections. Before the rebuild, some code in the venv was reaching out to the internet on every launch to check a few files, including the 1.6GB https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin.
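If you want to check whether those online lookups are the slow part without rebuilding the venv, here is a minimal diagnostic sketch, assuming transformers is installed in the venv and the CLIP files are already in the local Hugging Face cache. HF_HUB_OFFLINE and TRANSFORMERS_OFFLINE are standard Hugging Face environment switches; everything else is just timing code.

```python
# Sketch: time the CLIP text encoder load with Hugging Face's offline mode
# enabled, so no network checks are made. If this is fast while a normal
# launch stalls, the delay is likely the online file checks, not the disk.
import os
import time

os.environ["HF_HUB_OFFLINE"] = "1"        # skip hub connectivity checks
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # use only locally cached files

from transformers import CLIPTextModel    # import after setting the env vars

start = time.time()
model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
print(f"CLIP text encoder loaded in {time.time() - start:.1f}s")
```

If the offline load is quick while a normal launch is slow, the time is going to the network checks rather than disk I/O.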
I'm seeing the same issue. I tried deleting the venv folder as suggested by @GatesDA, but it made no difference. It's stuck on "Loading weights" for about 2-3 minutes on every launch.
Any ideas or pointers?
@andypotato: Do you see the slowdown with both .ckpt and .safetensors models? I'm seeing slower loading times for .safetensors for some reason.
Hi,
Are your models located on an SSD? If they are on a conventional HDD, that could be the culprit.
Try installing Automatic1111. You can then add the --xformers option in the startup batch file (webui-user.bat), which will speed up image generation by around 40% (see the sketch below).
Hope this helps.
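For reference, the change would look something like this in webui-user.bat (a sketch of the stock launcher file; COMMANDLINE_ARGS is the variable the launch script reads, and any flags you already pass would share that line):

```bat
rem webui-user.bat (sketch) -- pass --xformers through to the launcher
set COMMANDLINE_ARGS=--xformers
call webui.bat
```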
> @andypotato: Do you see the slowdown with both .ckpt and .safetensors models? I'm seeing slower loading times for .safetensors for some reason.
Same here! Only .safetensors checkpoints are slow to load.
I did some testing on Windows with models on an HDD, and the issue seems to be in how the models are read. If I cache the model first by reading it with cat model.safetensors > nul, the file is read at the maximum my HDD can manage, around ~175 MB/s, and the WebUI then loads the model instantly from the Windows cache. But if the WebUI reads the model directly from the HDD, it is read at only ~12 MB/s.
Edit: I also confirm that this affects .safetensors files and not .ckpt checkpoints.
So the WebUI reads safetensors models in an inefficient way (maybe in small chunks), even when the file is defragmented and could be read sequentially MUCH faster.
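If you want to script that warm-up instead of running cat by hand, here is a minimal Python sketch (the path is hypothetical); it just reads the file sequentially in large blocks so the OS page cache is hot before the WebUI opens it:

```python
# Sketch: sequentially read a model file in large blocks to warm the OS cache,
# mirroring the `cat model.safetensors > nul` trick above. The subsequent load
# by the WebUI should then come from RAM instead of slow small reads on the HDD.
def warm_cache(path: str, block_size: int = 64 * 1024 * 1024) -> None:
    with open(path, "rb") as f:
        while f.read(block_size):  # discard the data; we only want it cached
            pass

if __name__ == "__main__":
    # Hypothetical path -- point this at the checkpoint you are about to load.
    warm_cache(r"D:\models\Stable-diffusion\example_model.safetensors")
```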
Anyway, using an SSD is way better. I switched and don't have problems with slow loading times anymore. I use a 2TB 980 Pro NVMe.
This is still an issue. Just starting up on Ubuntu on AWS, it takes 264 seconds to load the first time, then 5 seconds the next time. This is horrible when I'm just trying to autoscale; it's really annoying and I can't seem to fix it.
I'm at the point of pulling out my hair and just giving up on my project, to be honest.
My workaround was to convert all .safetensors models to .ckpt checkpoints.
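For anyone who wants to try the same workaround, here is a rough sketch of the conversion, assuming torch and safetensors are installed and using hypothetical paths; the output is written in the plain {"state_dict": ...} layout that .ckpt checkpoints normally carry:

```python
# Sketch: repack a .safetensors model as a classic .ckpt checkpoint.
import torch
from safetensors.torch import load_file

src = "models/Stable-diffusion/example_model.safetensors"  # hypothetical input path
dst = src.rsplit(".", 1)[0] + ".ckpt"

state_dict = load_file(src, device="cpu")    # read all tensors into memory
torch.save({"state_dict": state_dict}, dst)  # write the conventional ckpt layout
print(f"wrote {dst}")
```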
My workaround was a new PC with an RTX 4090.
Same problem on an i7 with an RTX 4090. If I kill the WebUI and restart it, the model loads, so I think it's a WebUI problem, which makes no sense to me.
I have moved all my models to a fast M.2 NVMe drive. Problem solved! :D
This is still an issue. It takes 600+ seconds to load a model:
Applying optimization: xformers... done. Weights loaded in 319.5s (load weights from disk: 60.1s, apply weights to model: 258.8s, move model to device: 0.5s).
Loading weights [8e7ac9aa89] from /content/stable-diffusion-webui/models/Stable-diffusion/dalcefo_painting_v4-fp32-no-ema.safetensors
Applying optimization: xformers... done. Weights loaded in 614.1s (load weights from disk: 62.7s, apply weights to model: 550.8s, move model to device: 0.5s).
Hi, do you have your models on a fast SSD?
@Pauweltje Yes, I use an AWS gp3 SSD; it can provide 3000 IOPS.
Bump. I also have this issue. The start of the webui is quite fast but changing models takes forever.
Is there an existing issue for this?
What happened?
When I run the batch file I get to about here on the command line:
Already up to date.
venv "C:\Users\ProSmg\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Commit hash: 9b384dfb5c05129f50cc3f0262f89e8b788e5cf3
Installing requirements for Web UI
Launching Web UI with arguments: --vae-path models\Stable-diffusion\vae-ft-mse-840000-ema-pruned.pt
LatentInpaintDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.54 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
After that, the "Loading weights" line takes 5 minutes.
Steps to reproduce the problem
1. Start the batch file.
2. Wait for it to load, which takes much longer than usual.
3. If I choose to switch models, it takes another long time.
What should have happened?
It should load moderately quickly.
Commit where the problem happens
I think it's the latest.
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
No response
Additional information, context and logs
No response