lllyasviel / stable-diffusion-webui-forge


Cannot run SD Forge anymore #2153

Closed by cuom1705 1 month ago

cuom1705 commented 1 month ago

After updating today, I cannot generate images anymore.


```
Already up to date.
venv "./venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-676-gffdf3a65
Commit hash: ffdf3a65efb98d2ec76be7c4ec49472ed42977a7
CUDA 12.1
Launching Web UI with arguments: --cuda-malloc --listen --enable-insecure-extension-access --ckpt-dir 'G:\My Drive\SD-Data\Model' --hypernetwork-dir 'G:\My Drive\SD-Data\Hypernetworks' --embeddings-dir 'G:\My Drive\SD-Data\Embeddings' --lora-dir 'G:\My Drive\SD-Data\Lora' --lyco-dir 'G:\My Drive\SD-Data\Lyco' --vae-dir 'G:\My Drive\SD-Data\VAE'
Using cudaMallocAsync backend.
Total VRAM 6144 MB, total RAM 16168 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : cudaMallocAsync
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Users\cuom\Development\Projects\sd-webui\sd-forge\models\ControlNetPreprocessor
CHv1.8.11: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.9.0, num models: 22
Using sqlite file: C:\Users\cuom\Development\Projects\sd-webui\sd-forge\extensions\sd-webui-agent-scheduler\task_scheduler.sqlite3
09:22:11 - ReActor - STATUS - Running v0.7.1-b1 on Device: CUDA
Loading additional modules ... done.
CHv1.8.11: Set Proxy:
2024-10-23 09:22:23,379 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'G:\\My Drive\\SD-Data\\Model\\pony\\foxaiPONYFantastic_v1.safetensors', 'hash': '24cc95e7'}, 'additional_modules': ['G:\\My Drive\\SD-Data\\VAE\\sdxl_vae.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 141.6s (initial startup: 0.1s, prepare environment: 23.4s, import torch: 47.8s, initialize shared: 1.2s, other imports: 3.4s, setup gfpgan: 0.2s, list SD models: 1.3s, load scripts: 24.1s, initialize extra networks: 1.1s, initialize google blockly: 5.0s, create ui: 15.9s, gradio launch: 8.9s, app_started_callback: 9.5s).
Environment vars changed: {'stream': False, 'inference_memory': 2047.0, 'pin_shared_memory': False}
[GPU Setting] You will use 66.68% GPU memory (4096.00 MB) to load weights, and use 33.32% GPU memory (2047.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 83.33% GPU memory (5119.00 MB) to load weights, and use 16.67% GPU memory (1024.00 MB) to do matrix computation.
Model selected: {'checkpoint_info': {'filename': 'G:\\My Drive\\SD-Data\\Model\\pony\\foxaiPONYFantastic_v1.safetensors', 'hash': '24cc95e7'}, 'additional_modules': ['G:\\My Drive\\SD-Data\\VAE\\sdxl_vae.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Environment vars changed: {'stream': False, 'inference_memory': 2047.0, 'pin_shared_memory': False}
[GPU Setting] You will use 66.68% GPU memory (4096.00 MB) to load weights, and use 33.32% GPU memory (2047.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 83.33% GPU memory (5119.00 MB) to load weights, and use 16.67% GPU memory (1024.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 6103.0, 'pin_shared_memory': False}
[GPU Setting] You will use 0.65% GPU memory (40.00 MB) to load weights, and use 99.35% GPU memory (6103.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 2047.0, 'pin_shared_memory': False}
[GPU Setting] You will use 66.68% GPU memory (4096.00 MB) to load weights, and use 33.32% GPU memory (2047.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'G:\\My Drive\\SD-Data\\Model\\pony\\foxaiPONYFantastic_v1.safetensors', 'hash': '24cc95e7'}, 'additional_modules': ['G:\\My Drive\\SD-Data\\VAE\\sdxl_vae.safetensors'], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 1680, 'vae': 250, 'text_encoder': 197, 'text_encoder_2': 518, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Press any key to continue . . .
```

cuom1705 commented 1 month ago

My models (checkpoints, LoRAs, etc.) are stored in a folder synced with Google Drive, and I keep them online-only. Could that have started causing problems recently?
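One way to rule out the Drive sync is to read the checkpoint directly, outside the WebUI. Below is a minimal, hypothetical check (standard library only, path taken from the log above; adjust as needed). Online-only Drive files are streamed on first access, so a long stall or an error here would point at the sync rather than at Forge:

```python
# Minimal sketch (not part of Forge): verify a Drive-synced checkpoint is
# actually readable locally. Online-only files are downloaded on first read,
# which can be very slow or fail partway through a model load.
import os
import struct

CKPT = r"G:\My Drive\SD-Data\Model\pony\foxaiPONYFantastic_v1.safetensors"

size_gib = os.path.getsize(CKPT) / 1024**3
with open(CKPT, "rb") as f:
    # A .safetensors file begins with an 8-byte little-endian header length.
    (header_len,) = struct.unpack("<Q", f.read(8))
print(f"{size_gib:.2f} GiB on disk, safetensors header length: {header_len}")
```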

DenOfEquity commented 1 month ago

Try setting GPU Weights to the default value (VRAM in MB minus 1024). The sudden 'Press any key to continue . . .' indicates a lack of memory available for processing. General guidance is in #1474.
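For the 6 GB card in the log, the default works out as follows (a worked example of the rule above, not code from the repository):

```python
# Illustrative arithmetic only: the recommended default for the GPU Weights
# slider is total VRAM in MB minus 1024 MB kept free for matrix computation.
TOTAL_VRAM_MB = 6144          # RTX 2060 6 GB, as reported in the log
INFERENCE_MB = 1024           # headroom reserved for computation
gpu_weights_mb = TOTAL_VRAM_MB - INFERENCE_MB
print(f"GPU Weights: {gpu_weights_mb} MB, inference headroom: {INFERENCE_MB} MB")
# The log's "5119.00 MB / 1024.00 MB" split corresponds to this default,
# while the "40.00 MB / 6103.00 MB" split leaves almost no room for weights.
```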