Closed: Harbitos closed this issue 6 hours ago.
*** Error loading script: preprocessor_inpaint.py

This is the error I already described here: https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge/issues/13
It should work for you even with this error, but I see a mistake you made: you tried to use an SDXL model, which you can forget about with your GPU. That is why it is crashing. You need to use SD 1.5 models, because your GPU is too weak.
Adding --no-half --no-half-vae removed a lot of the error lines, but Stable Diffusion still does not generate; I also changed the model, and the same thing seems to be happening.
The most important thing is that until recently I was generating 730×730 images with an SDXL model, LoRAs, and the Forge Couple/ControlNet extensions, and everything worked!
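For reference, launch flags like these normally go into webui-user.bat rather than being typed each time. This is only a sketch assuming the default layout of the Forge launcher script; your existing file and install path may differ:

```shell
:: webui-user.bat -- sketch of where the flags discussed above are set
:: (assumes the stock Forge webui-user.bat layout; adjust to your install)
@echo off

set PYTHON=
set GIT=
set VENV_DIR=

:: --directml selects the DirectML backend for AMD GPUs on Windows.
:: --no-half / --no-half-vae force fp32, which avoids some DirectML dtype
:: errors at the cost of using more VRAM.
set COMMANDLINE_ARGS=--theme dark --directml --no-half --no-half-vae

call webui.bat
```

With only 1024 MB of VRAM reported, the fp32 flags make memory pressure worse, which is why they may reduce error spam without actually making generation succeed.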
venv "C:\Users\user\Desktop\Stable Diffusion Forge\stable-diffusion-webui-amdgpu-forge\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-1.10.1
Commit hash: 412c2d800dcae4cee8a7466a1e9128cfbbc5bf26
Using directml with device:
Total VRAM 1024 MB, total RAM 16328 MB
pytorch version: 2.3.1+cpu
Set vram state to: NORMAL_VRAM
Device: privateuseone
VAE dtype preferences: [torch.float32] -> torch.float32
Launching Web UI with arguments: --theme dark --directml --no-half --no-half-vae
CUDA Using Stream: False
Using sub quadratic optimization for cross attention
Using split attention for VAE
ONNX: version=1.19.2 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: C:\Users\user\Desktop\Stable Diffusion Forge\stable-diffusion-webui-amdgpu-forge\models\ControlNetPreprocessor
2024-09-18 13:37:57,611 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\\Users\\user\\Desktop\\Stable Diffusion Forge\\stable-diffusion-webui-amdgpu-forge\\models\\Stable-diffusion\\autismmixSDXL_autismmixConfetti.safetensors', 'hash': '10047b0e'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 17.9s (prepare environment: 4.6s, import torch: 6.7s, initialize shared: 1.1s, load scripts: 2.7s, create ui: 2.9s, gradio launch: 1.5s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 0.00% GPU memory (0.00 MB) to load weights, and use 100.00% GPU memory (1024.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'C:\\Users\\user\\Desktop\\Stable Diffusion Forge\\stable-diffusion-webui-amdgpu-forge\\models\\Stable-diffusion\\autismmixSDXL_autismmixConfetti.safetensors', 'hash': '10047b0e'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for privateuseone:0 with 0 models keep loaded ... Done.
Press any key to continue . . .
The most important thing is that until recently I was generating 730×730 images with an SDXL model, LoRAs, and the Forge Couple/ControlNet extensions, and everything worked!

I find that really hard to believe. And again you showed a console log with an SDXL model. Try an SD 1.5 model and tell me the result.
Have you ever tried this model before, and can you confirm it worked for you? If it worked before, then I guess the last patch changed something for SDXL models. It really looks like your model can't be loaded properly.
This is why I always make a backup before updating SD.
venv "C:\Users\user\Desktop\Stable Diffusion Forge\stable-diffusion-webui-amdgpu-forge\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-1.10.1
Commit hash: 412c2d800dcae4cee8a7466a1e9128cfbbc5bf26
Using directml with device:
Total VRAM 1024 MB, total RAM 16328 MB
pytorch version: 2.3.1+cpu
Set vram state to: NORMAL_VRAM
Device: privateuseone
VAE dtype preferences: [torch.float32] -> torch.float32
Launching Web UI with arguments: --theme dark --directml
CUDA Using Stream: False
Using sub quadratic optimization for cross attention
Using split attention for VAE
ONNX: version=1.19.2 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: C:\Users\user\Desktop\Stable Diffusion Forge\stable-diffusion-webui-amdgpu-forge\models\ControlNetPreprocessor
2024-09-18 15:41:20,647 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\\Users\\user\\Desktop\\Stable Diffusion Forge\\stable-diffusion-webui-amdgpu-forge\\models\\Stable-diffusion\\epicrealism_pureEvolutionV3.safetensors', 'hash': '42c8440c'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 18.2s (prepare environment: 4.7s, import torch: 6.7s, initialize shared: 1.1s, list SD models: 0.1s, load scripts: 2.7s, create ui: 3.0s, gradio launch: 1.5s).
Loading Model: {'checkpoint_info': {'filename': 'C:\\Users\\user\\Desktop\\Stable Diffusion Forge\\stable-diffusion-webui-amdgpu-forge\\models\\Stable-diffusion\\epicrealism_pureEvolutionV3.safetensors', 'hash': '42c8440c'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for privateuseone:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
C:\Users\user\Desktop\Stable Diffusion Forge\stable-diffusion-webui-amdgpu-forge\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Press any key to continue . . .
I would not have reinstalled SD if this error had not appeared on its own. As for SD 1.5 models: NOTHING works, not even this model. I think the problem is not in SD itself, but somewhere in the software environment.
The problem is solved! I enabled the page file (swap file) for the system disk.
How did it happen? When I wanted to format another disk, there were hidden files on it that could not be deleted (page files), so I went into the system performance settings and disabled the page file for that disk; at that point I may have accidentally disabled the page file for the system disk as well. The strangest part is that during all this time my PC did not lag, the screen did not go black, and everything seemed to be fine.
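For anyone else debugging a similar silent crash, one way to check whether Windows actually has a page file active is to query WMI from an elevated Command Prompt. This is a diagnostic sketch; the exact output varies by Windows version:

```shell
:: Is Windows managing the page file automatically? (TRUE/FALSE)
wmic computersystem get AutomaticManagedPagefile

:: List any manually configured page files. Empty output here, combined with
:: AutomaticManagedPagefile=FALSE, means no page file is active -- the state
:: that caused the crash described above.
wmic pagefileset list /format:list
```

With no page file, an out-of-memory condition can kill the Python process outright instead of failing gracefully, which would explain the bare "Press any key to continue . . ." with no traceback.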
The problem is this: I launch SD and the tab opens, but when I click Generate it immediately shows "Connection errored out", and the console prints nothing except "Press any key to continue...". A few days ago everything worked! Stable Diffusion Forge could generate 730×730 images with SDXL models, LoRAs, and the ControlNet/Forge Couple extensions. (I did not change the PC hardware.) The problem appeared on its own (not after reinstalling!)
PC specs: Windows 10; RX 580 8 GB; Intel Xeon 1270 v3; 80 GB of free space on an SSD; internet speed over 300 Mb/s; the latest version of Git and Python 3.10.6 installed.
Actions that didn't help me:
- Reinstalling (using git clone or the zip)
- Installing the original SD amd-gpu build
- Checking for viruses
- Searching the internet (I didn't find the information I needed)
- Changing the browser
- Changing the folder/directory of the SD install
- Creating a new user and trying from there
- A FULL reinstall, including Git and Python 3.10.6
- Lowering the resolution, shortening the prompts, changing the model
- Removing the built-in extensions
- Waiting a few days and reinstalling
- Deleting the venv folder
- Deleting the pip cache folder
- Using --use-directml instead of --directml
- Deleting all programs installed since SD last worked
- Reinstalling the AMD driver
- Installing the HIP SDK libraries and using ZLUDA
- Reinstalling the operating system: I haven't tried this
Facts: in launch.py I added a line of code to fix the "gguf" error:
I think some component has stopped working for me. The console output is attached.