Closed: neldot70 closed this issue 3 days ago.
Hi, I think the best option is to revert to the working version from June. Here’s the link to the discussion: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/849
Thank you KyokaNyx, how should I proceed? Should I simply download that old version and put it in the main folder? Thanks again for your help!
I installed another working version and transferred all the models and other files.
But there is a similar way without reinstalling. https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/e95333c5568b5039f7118d036c16afa9a2a8ad2c is the latest working version for me. To revert to that version: close Forge, go to your Forge installation directory, make sure you're in the webui folder, then hold Shift, right-click a blank part of the folder, and click "Open command window here". In the command prompt that opens, type:
git checkout e95333c5568b5039f7118d036c16afa9a2a8ad2c
and press Enter. Then, in your webui folder, edit webui-user.bat and add:
--no-gradio-queue
to your command line arguments (the set COMMANDLINE_ARGS= line). Save the file, then launch Forge again; everything should work now.
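For reference, the rollback step above can be sketched end to end. This demo uses a throwaway repository so it is safe to run anywhere; in the real webui folder you would only run the single git checkout command with the hash given above.

```shell
# Simulate "an update broke things, pin back to the last good commit".
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "working version"
good=$(git rev-parse HEAD)                  # the known-good hash (e95333c... in the real repo)
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "broken update"
git checkout -q "$good"                     # detached HEAD at the good commit
git log -1 --format=%s                      # shows: working version
```

Note that checking out a commit hash leaves the repo in a detached-HEAD state; running git checkout main (or git pull on the branch) later brings you back to the newest version.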
KyokaNyx, thank you again, and sorry to bother you with another question. I want to try the method you suggested, but I don't know which version number I should revert to. The last one that worked for me was the first release with the new Gradio UI, released two or three days ago. Then yesterday, when I updated to the newest release, nothing worked anymore and I got the aforementioned errors.
So is there a link to the very first release with the new UI that I can get with git checkout? Thanks a lot!
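If you don't know the hash of the release you were on before updating, git's local reflog usually has it. A sketch in a throwaway repository (in the real webui folder, only the git reflog and git checkout lines apply):

```shell
# Find and return to the commit HEAD was on before the last update.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "first release with new UI"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "newest release (broken)"
git reflog --format='%h %gs'    # every place HEAD has been, newest first
git checkout -q 'HEAD@{1}'      # go back to where HEAD was before the last move
git log -1 --format=%s          # shows: first release with new UI
```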
Wonderful, the update that was just released seems to have fixed things for me: Forge works again! In any case, thanks again for your patience and the knowledge you shared.
I rolled back Forge locally to commit c8156fcf413db47dcf4c51cf7562cb5e94482c91, pushed on 30 July. This fixed the error TypeError: 'NoneType' object is not iterable, which prevented it from generating with the default checkpoint. My guess is that the Gradio 4.40.0 update broke a lot of the built-in Web UI functions.
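For anyone curious what that error actually means: TypeError: 'NoneType' object is not iterable is Python's generic failure when code loops over a value that is unexpectedly None, e.g. a lookup that started returning nothing after a breaking API change. A minimal illustration with made-up names (not Forge's actual code):

```python
def load_components(registry):
    # After a breaking update, a lookup like this can start returning None.
    return registry.get("components")

def init(registry):
    for component in load_components(registry):  # TypeError if the lookup returned None
        print("initialising", component)

try:
    init({})  # registry is missing the key, so the loop receives None
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable

# Defensive variant: fall back to an empty list so the loop simply does nothing.
for component in load_components({}) or []:
    print("initialising", component)
```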
Same error here. I am on the latest commit [e81788e]:
To load target model JointCLIPTextEncoder
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "C:\DATA\SD\Data\Packages\Stable Diffusion WebUI Forge New\launch.py", line 51, in
Stable diffusion model failed to load
I am using Stability Matrix to install Forge UI.
I can confirm the previous comments, and I'll add a couple of details. In my case, the last working version turned out to be https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/af0b04cc16ce703b2f6b7a06edbb5f301a804a94, and on the very next one (https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/4d1be42975c20937b1cf7f0b6de47e1526cea62f) everything is broken. I'll attach the logs too. First, https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/af0b04cc16ce703b2f6b7a06edbb5f301a804a94:
D:\AI\NS02>git checkout af0b04c
M webui-user.bat
HEAD is now at af0b04cc store huggingface vars in VAE
venv "D:\AI\NS02\venv\Scripts\Python.exe"
initial startup: done in 0.023s
prepare environment:
checks: done in 0.007s
git version info: done in 0.107s
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f1.0.2v1.10.1-previous-52-gaf0b04cc
Commit hash: af0b04cc16ce703b2f6b7a06edbb5f301a804a94
torch GPU test: done in 2.369s
clone repositores: done in 0.180s
run extensions installers:
2024-08-07 15:00:44 DEBUG [root] Installing put extensions here.txt
run extensions_builtin installers:
2024-08-07 15:00:44 DEBUG [root] Installing extra-options-section
extra-options-section: done in 0.001s
2024-08-07 15:00:44 DEBUG [root] Installing forge_legacy_preprocessors
forge_legacy_preprocessors: done in 0.330s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_inpaint
forge_preprocessor_inpaint: done in 0.001s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_marigold
forge_preprocessor_marigold: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_normalbae
forge_preprocessor_normalbae: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_recolor
forge_preprocessor_recolor: done in 0.001s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_reference
forge_preprocessor_reference: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_revision
forge_preprocessor_revision: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing forge_preprocessor_tile
forge_preprocessor_tile: done in 0.001s
2024-08-07 15:00:44 DEBUG [root] Installing LDSR
LDSR: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing Lora
Lora: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing mobile
mobile: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing postprocessing-for-training
postprocessing-for-training: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing prompt-bracket-checker
prompt-bracket-checker: done in 0.001s
2024-08-07 15:00:44 DEBUG [root] Installing ScuNET
ScuNET: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing sd_forge_controlllite
sd_forge_controlllite: done in 0.000s
2024-08-07 15:00:44 DEBUG [root] Installing sd_forge_controlnet
sd_forge_controlnet: done in 0.336s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_dynamic_thresholding
sd_forge_dynamic_thresholding: done in 0.001s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_fooocus_inpaint
sd_forge_fooocus_inpaint: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_freeu
sd_forge_freeu: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_ipadapter
sd_forge_ipadapter: done in 0.001s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_kohya_hrfix
sd_forge_kohya_hrfix: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_latent_modifier
sd_forge_latent_modifier: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_multidiffusion
sd_forge_multidiffusion: done in 0.001s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_neveroom
sd_forge_neveroom: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_perturbed_attention
sd_forge_perturbed_attention: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_photomaker
sd_forge_photomaker: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_sag
sd_forge_sag: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing sd_forge_stylealign
sd_forge_stylealign: done in 0.001s
2024-08-07 15:00:45 DEBUG [root] Installing soft-inpainting
soft-inpainting: done in 0.000s
2024-08-07 15:00:45 DEBUG [root] Installing SwinIR
SwinIR: done in 0.000s
Launching Web UI with arguments: --log-startup --api-log --loglevel=DEBUG
2024-08-07 15:00:46 DEBUG [httpx] load_ssl_context verify=True cert=None trust_env=True http2=False
2024-08-07 15:00:46 DEBUG [httpx] load_verify_locations cafile='D:\\AI\\NS02\\venv\\lib\\site-packages\\certifi\\cacert.pem'
2024-08-07 15:00:46 DEBUG [httpx] load_ssl_context verify=True cert=None trust_env=True http2=False
2024-08-07 15:00:46 DEBUG [httpx] load_verify_locations cafile='D:\\AI\\NS02\\venv\\lib\\site-packages\\certifi\\cacert.pem'
2024-08-07 15:00:46 DEBUG [httpx] load_ssl_context verify=True cert=None trust_env=True http2=False
2024-08-07 15:00:46 DEBUG [httpx] load_verify_locations cafile='D:\\AI\\NS02\\venv\\lib\\site-packages\\certifi\\cacert.pem'
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing BlpImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing BmpImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing BufrStubImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing CurImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing DcxImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing DdsImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing EpsImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing FitsImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing FitsStubImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing FliImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing FpxImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Image: failed to import FpxImagePlugin: No module named 'olefile'
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing FtexImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing GbrImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing GifImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing GribStubImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing Hdf5StubImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing IcnsImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing IcoImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing ImImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing ImtImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing IptcImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing JpegImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing Jpeg2KImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing McIdasImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing MicImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Image: failed to import MicImagePlugin: No module named 'olefile'
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing MpegImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing MpoImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing MspImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PalmImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PcdImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PcxImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PdfImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PixarImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PngImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PpmImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing PsdImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing QoiImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing SgiImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing SpiderImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing SunImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing TgaImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing TiffImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing WebPImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing WmfImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing XbmImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing XpmImagePlugin
2024-08-07 15:00:46 DEBUG [PIL.Image] Importing XVThumbImagePlugin
launcher: done in 2.996s
Total VRAM 4096 MB, total RAM 32716 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 970 : native
VAE dtype: torch.float32
CUDA Stream Activated: False
import torch: done in 2.425s
2024-08-07 15:00:50 DEBUG [matplotlib] matplotlib data path: D:\AI\NS02\venv\lib\site-packages\matplotlib\mpl-data
2024-08-07 15:00:50 DEBUG [matplotlib] CONFIGDIR=C:\Users\mitia\.matplotlib
2024-08-07 15:00:50 DEBUG [matplotlib] interactive is False
2024-08-07 15:00:50 DEBUG [matplotlib] platform is win32
2024-08-07 15:00:50 DEBUG [matplotlib] CACHEDIR=C:\Users\mitia\.matplotlib
2024-08-07 15:00:50 DEBUG [matplotlib.font_manager] Using fontManager instance from C:\Users\mitia\.matplotlib\fontlist-v390.json
import torch: done in 1.680s
import gradio: done in 0.000s
2024-08-07 15:00:53 DEBUG [git.cmd] Popen(['git', 'version'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
2024-08-07 15:00:53 DEBUG [git.cmd] Popen(['git', 'version'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
initialize shared: done in 0.147s
Using pytorch cross attention
Using pytorch cross attention
other imports: done in 0.941s
opts onchange: done in 0.000s
setup SD model: done in 0.000s
setup codeformer: done in 0.002s
setup gfpgan: done in 0.011s
set samplers: done in 0.000s
list extensions: done in 0.006s
restore config state file: done in 0.000s
list SD models: done in 0.011s
list localizations: done in 0.001s
load scripts:
custom_code.py: done in 0.007s
img2imgalt.py: done in 0.000s
loopback.py: done in 0.001s
outpainting_mk_2.py: done in 0.000s
poor_mans_outpainting.py: done in 0.000s
postprocessing_codeformer.py: done in 0.001s
postprocessing_gfpgan.py: done in 0.000s
postprocessing_upscale.py: done in 0.000s
prompt_matrix.py: done in 0.001s
prompts_from_file.py: done in 0.000s
sd_upscale.py: done in 0.001s
xyz_grid.py: done in 0.001s
ldsr_model.py: done in 0.662s
lora_script.py: done in 0.585s
scunet_model.py: done in 0.117s
swinir_model.py: done in 0.109s
extra_options_section.py: done in 0.001s
legacy_preprocessors.py: done in 0.012s
preprocessor_inpaint.py: done in 0.013s
preprocessor_marigold.py: done in 0.011s
preprocessor_normalbae.py: done in 0.006s
preprocessor_recolor.py: done in 0.001s
forge_reference.py: done in 0.001s
preprocessor_revision.py: done in 0.001s
preprocessor_tile.py: done in 0.000s
postprocessing_autosized_crop.py: done in 0.000s
postprocessing_caption.py: done in 0.001s
postprocessing_create_flipped_copies.py: done in 0.000s
postprocessing_focal_crop.py: done in 0.004s
postprocessing_split_oversized.py: done in 0.000s
forge_controllllite.py: done in 0.011s
ControlNet preprocessor location: D:\AI\NS02\models\ControlNetPreprocessor
controlnet.py: done in 1.093s
xyz_grid_support.py: done in 0.000s
forge_dynamic_thresholding.py: done in 0.004s
forge_fooocus_inpaint.py: done in 0.001s
forge_freeu.py: done in 0.005s
forge_ipadapter.py: done in 0.009s
kohya_hrfix.py: done in 0.003s
forge_latent_modifier.py: done in 0.005s
forge_multidiffusion.py: done in 0.009s
forge_never_oom.py: done in 0.001s
forge_perturbed_attention.py: done in 0.000s
forge_photomaker.py: done in 0.004s
forge_sag.py: done in 0.003s
forge_stylealign.py: done in 0.001s
soft_inpainting.py: done in 0.001s
comments.py: done in 0.119s
refiner.py: done in 0.000s
sampler.py: done in 0.001s
seed.py: done in 0.000s
load upscalers: done in 0.006s
refresh VAE: done in 0.001s
refresh textual inversion templates: done in 0.001s
scripts list_optimizers: done in 0.090s
scripts list_unets: done in 0.000s
reload hypernetworks: done in 0.001s
initialize extra networks: done in 0.003s
scripts before_ui_callback: done in 0.002s
2024-08-07 15:00:57 DEBUG [asyncio] Using selector: SelectSelector
Loading weights [15012c538f] from D:\AI\NS02\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
model_type EPS
UNet ADM Dimension 0
2024-08-07 15:00:59,026 - ControlNet - INFO - ControlNet UI callback registered.
create ui: done in 3.731s
2024-08-07 15:01:01 DEBUG [asyncio] Using selector: SelectSelector
Running on local URL: http://127.0.0.1:7860
2024-08-07 15:01:01 DEBUG [httpx] load_ssl_context verify=None cert=None trust_env=True http2=False
2024-08-07 15:01:01 INFO [httpx] HTTP Request: GET http://127.0.0.1:7860/startup-events "HTTP/1.1 200 OK"
2024-08-07 15:01:01 DEBUG [httpx] load_ssl_context verify=False cert=None trust_env=True http2=False
2024-08-07 15:01:02 INFO [httpx] HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
To create a public link, set `share=True` in `launch()`.
gradio launch: done in 1.413s
add APIs: done in 0.013s
app_started_callback:
lora_script.py: done in 0.002s
controlnet.py: done in 0.006s
Startup time: 20.8s (prepare environment: 3.4s, launcher: 3.0s, import torch: 4.1s, setup paths: 1.2s, initialize shared: 0.1s, other imports: 0.9s, load scripts: 2.8s, create ui: 3.7s, gradio launch: 1.4s).
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
Model loaded in 6.4s (load weights from disk: 0.2s, forge load real models: 5.5s, calculate empty prompt: 0.6s).
2024-08-07 15:01:47 DEBUG [matplotlib.pyplot] Loaded backend tkagg version 8.6.
2024-08-07 15:01:47 DEBUG [matplotlib.pyplot] Loaded backend agg version v2.2.
2024-08-07 15:01:47 DEBUG [matplotlib.pyplot] Loaded backend tkagg version 8.6.
2024-08-07 15:01:47 DEBUG [matplotlib.pyplot] Loaded backend agg version v2.2.
2024-08-07 15:01:47 DEBUG [git.cmd] Popen(['git', 'remote', 'get-url', '--all', 'origin'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
2024-08-07 15:01:47 DEBUG [git.cmd] Popen(['git', 'cat-file', '--batch-check'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=<valid stream>)
2024-08-07 15:01:47 DEBUG [git.cmd] Popen(['git', 'cat-file', '--batch'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=<valid stream>)
2024-08-07 15:01:47 DEBUG [git.cmd] Popen(['git', 'remote', 'get-url', '--all', 'origin'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
2024-08-07 15:01:47 DEBUG [git.cmd] Popen(['git', 'cat-file', '--batch-check'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=<valid stream>)
2024-08-07 15:01:47 DEBUG [git.cmd] Popen(['git', 'cat-file', '--batch'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=<valid stream>)
2024-08-07 15:01:51 INFO [modules.shared_state] Starting job task(y922hy4h2xflinv)
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3205.4363288879395
[Memory Management] Model Memory (MB) = 3278.8199005126953
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = -1097.3835716247559
[Memory Management] Requested SYNC Preserved Memory (MB) = 1678.0279445648193
[Memory Management] Parameters Loaded to SYNC Stream (MB) = 1600.784927368164
[Memory Management] Parameters Loaded to GPU (MB) = 1678.02734375
Moving model(s) has taken 0.52 seconds
100%|█████████████████████████████████████████████████████████████████████████████████| 20/20 [00:19<00:00, 1.02it/s]
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3187.3035163879395
[Memory Management] Model Memory (MB) = 319.11416244506836
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 1844.189353942871
Moving model(s) has taken 0.75 seconds
Total progress: 100%|█████████████████████████████████████████████████████████████████| 20/20 [00:19<00:00, 1.00it/s]
{"prompt": "cat", "all_prompts": ["cat"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 476248000, "all_seeds": [476248000], "subseed": 15303986, "all_subseeds": [15303986], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "DPM++ 2M", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "realisticVisionV51_v51VAE", "sd_model_hash": "15012c538f", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0.7, "extra_generation_params": {"Schedule type": "Karras"}, "index_of_first_image": 0, "infotexts": ["cat\nSteps: 20, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 476248000, Size: 512x512, Model hash: 15012c538f, Model: realisticVisionV51_v51VAE, Version: f1.0.2v1.10.1-previous-52-gaf0b04cc"], "styles": [], "job_timestamp": "20240807150151", "clip_skip": 1, "is_using_inpainting_conditioning": false, "version": "f1.0.2v1.10.1-previous-52-gaf0b04cc"}
2024-08-07 15:02:13 INFO [modules.shared_state] Ending job task(y922hy4h2xflinv) (22.04 seconds)
D:\AI\NS02>git checkout 4d1be42
M webui-user.bat
HEAD is now at 4d1be429 Intergrate CLIP
venv "D:\AI\NS02\venv\Scripts\Python.exe"
initial startup: done in 0.022s
prepare environment:
checks: done in 0.008s
git version info: done in 0.107s
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f1.0.2v1.10.1-previous-53-g4d1be429
Commit hash: 4d1be42975c20937b1cf7f0b6de47e1526cea62f
torch GPU test: done in 2.400s
clone repositores: done in 0.181s
run extensions installers:
2024-08-07 15:06:26 DEBUG [root] Installing put extensions here.txt
run extensions_builtin installers:
2024-08-07 15:06:26 DEBUG [root] Installing extra-options-section
extra-options-section: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing forge_legacy_preprocessors
forge_legacy_preprocessors: done in 0.328s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_inpaint
forge_preprocessor_inpaint: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_marigold
forge_preprocessor_marigold: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_normalbae
forge_preprocessor_normalbae: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_recolor
forge_preprocessor_recolor: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_reference
forge_preprocessor_reference: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_revision
forge_preprocessor_revision: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing forge_preprocessor_tile
forge_preprocessor_tile: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing LDSR
LDSR: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing Lora
Lora: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing mobile
mobile: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing postprocessing-for-training
postprocessing-for-training: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing prompt-bracket-checker
prompt-bracket-checker: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing ScuNET
ScuNET: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_controlllite
sd_forge_controlllite: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_controlnet
sd_forge_controlnet: done in 0.326s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_dynamic_thresholding
sd_forge_dynamic_thresholding: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_fooocus_inpaint
sd_forge_fooocus_inpaint: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_freeu
sd_forge_freeu: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_ipadapter
sd_forge_ipadapter: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_kohya_hrfix
sd_forge_kohya_hrfix: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_latent_modifier
sd_forge_latent_modifier: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_multidiffusion
sd_forge_multidiffusion: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_neveroom
sd_forge_neveroom: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_perturbed_attention
sd_forge_perturbed_attention: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_photomaker
sd_forge_photomaker: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_sag
sd_forge_sag: done in 0.001s
2024-08-07 15:06:26 DEBUG [root] Installing sd_forge_stylealign
sd_forge_stylealign: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing soft-inpainting
soft-inpainting: done in 0.000s
2024-08-07 15:06:26 DEBUG [root] Installing SwinIR
SwinIR: done in 0.000s
Launching Web UI with arguments: --log-startup --api-log --loglevel=DEBUG
2024-08-07 15:06:28 DEBUG [httpx] load_ssl_context verify=True cert=None trust_env=True http2=False
2024-08-07 15:06:28 DEBUG [httpx] load_verify_locations cafile='D:\\AI\\NS02\\venv\\lib\\site-packages\\certifi\\cacert.pem'
2024-08-07 15:06:28 DEBUG [httpx] load_ssl_context verify=True cert=None trust_env=True http2=False
2024-08-07 15:06:28 DEBUG [httpx] load_verify_locations cafile='D:\\AI\\NS02\\venv\\lib\\site-packages\\certifi\\cacert.pem'
2024-08-07 15:06:28 DEBUG [httpx] load_ssl_context verify=True cert=None trust_env=True http2=False
2024-08-07 15:06:28 DEBUG [httpx] load_verify_locations cafile='D:\\AI\\NS02\\venv\\lib\\site-packages\\certifi\\cacert.pem'
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing BlpImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing BmpImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing BufrStubImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing CurImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing DcxImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing DdsImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing EpsImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing FitsImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing FitsStubImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing FliImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing FpxImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Image: failed to import FpxImagePlugin: No module named 'olefile'
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing FtexImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing GbrImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing GifImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing GribStubImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing Hdf5StubImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing IcnsImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing IcoImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing ImImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing ImtImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing IptcImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing JpegImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing Jpeg2KImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing McIdasImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing MicImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Image: failed to import MicImagePlugin: No module named 'olefile'
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing MpegImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing MpoImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing MspImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PalmImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PcdImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PcxImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PdfImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PixarImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PngImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PpmImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing PsdImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing QoiImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing SgiImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing SpiderImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing SunImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing TgaImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing TiffImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing WebPImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing WmfImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing XbmImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing XpmImagePlugin
2024-08-07 15:06:28 DEBUG [PIL.Image] Importing XVThumbImagePlugin
launcher: done in 2.909s
Total VRAM 4096 MB, total RAM 32716 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 970 : native
VAE dtype: torch.float32
CUDA Stream Activated: False
import torch: done in 2.217s
2024-08-07 15:06:32 DEBUG [matplotlib] matplotlib data path: D:\AI\NS02\venv\lib\site-packages\matplotlib\mpl-data
2024-08-07 15:06:32 DEBUG [matplotlib] CONFIGDIR=C:\Users\mitia\.matplotlib
2024-08-07 15:06:32 DEBUG [matplotlib] interactive is False
2024-08-07 15:06:32 DEBUG [matplotlib] platform is win32
2024-08-07 15:06:32 DEBUG [matplotlib] CACHEDIR=C:\Users\mitia\.matplotlib
2024-08-07 15:06:32 DEBUG [matplotlib.font_manager] Using fontManager instance from C:\Users\mitia\.matplotlib\fontlist-v390.json
import torch: done in 1.623s
import gradio: done in 0.001s
2024-08-07 15:06:34 DEBUG [git.cmd] Popen(['git', 'version'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
2024-08-07 15:06:34 DEBUG [git.cmd] Popen(['git', 'version'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
initialize shared: done in 0.245s
Using pytorch cross attention
Using pytorch cross attention
other imports: done in 1.275s
opts onchange: done in 0.000s
setup SD model: done in 0.001s
setup codeformer: done in 0.001s
setup gfpgan: done in 0.017s
set samplers: done in 0.001s
list extensions: done in 0.013s
restore config state file: done in 0.000s
list SD models: done in 0.016s
list localizations: done in 0.001s
load scripts:
custom_code.py: done in 0.009s
img2imgalt.py: done in 0.000s
loopback.py: done in 0.001s
outpainting_mk_2.py: done in 0.001s
poor_mans_outpainting.py: done in 0.000s
postprocessing_codeformer.py: done in 0.000s
postprocessing_gfpgan.py: done in 0.001s
postprocessing_upscale.py: done in 0.000s
prompt_matrix.py: done in 0.000s
prompts_from_file.py: done in 0.001s
sd_upscale.py: done in 0.000s
xyz_grid.py: done in 0.002s
ldsr_model.py: done in 0.867s
lora_script.py: done in 0.707s
scunet_model.py: done in 0.116s
swinir_model.py: done in 0.111s
extra_options_section.py: done in 0.000s
legacy_preprocessors.py: done in 0.013s
preprocessor_inpaint.py: done in 0.013s
preprocessor_marigold.py: done in 0.010s
preprocessor_normalbae.py: done in 0.007s
preprocessor_recolor.py: done in 0.000s
forge_reference.py: done in 0.001s
preprocessor_revision.py: done in 0.000s
preprocessor_tile.py: done in 0.001s
postprocessing_autosized_crop.py: done in 0.000s
postprocessing_caption.py: done in 0.000s
postprocessing_create_flipped_copies.py: done in 0.001s
postprocessing_focal_crop.py: done in 0.003s
postprocessing_split_oversized.py: done in 0.000s
forge_controllllite.py: done in 0.011s
ControlNet preprocessor location: D:\AI\NS02\models\ControlNetPreprocessor
controlnet.py: done in 1.220s
xyz_grid_support.py: done in 0.001s
forge_dynamic_thresholding.py: done in 0.005s
forge_fooocus_inpaint.py: done in 0.000s
forge_freeu.py: done in 0.004s
forge_ipadapter.py: done in 0.007s
kohya_hrfix.py: done in 0.003s
forge_latent_modifier.py: done in 0.005s
forge_multidiffusion.py: done in 0.010s
forge_never_oom.py: done in 0.000s
forge_perturbed_attention.py: done in 0.001s
forge_photomaker.py: done in 0.003s
forge_sag.py: done in 0.003s
forge_stylealign.py: done in 0.001s
soft_inpainting.py: done in 0.000s
comments.py: done in 0.147s
refiner.py: done in 0.002s
sampler.py: done in 0.001s
seed.py: done in 0.000s
load upscalers: done in 0.010s
refresh VAE: done in 0.004s
refresh textual inversion templates: done in 0.001s
scripts list_optimizers: done in 0.098s
scripts list_unets: done in 0.001s
reload hypernetworks: done in 0.000s
initialize extra networks: done in 0.003s
scripts before_ui_callback: done in 0.002s
Loading weights [15012c538f] from D:\AI\NS02\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
2024-08-07 15:06:39 DEBUG [asyncio] Using selector: SelectSelector
Skipped: unet = diffusers.UNet2DConditionModel
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
model_type EPS
UNet ADM Dimension 0
2024-08-07 15:06:41,537 - ControlNet - INFO - ControlNet UI callback registered.
create ui: done in 3.634s
2024-08-07 15:06:44 DEBUG [asyncio] Using selector: SelectSelector
Running on local URL: http://127.0.0.1:7860
2024-08-07 15:06:44 DEBUG [httpx] load_ssl_context verify=None cert=None trust_env=True http2=False
2024-08-07 15:06:44 INFO [httpx] HTTP Request: GET http://127.0.0.1:7860/startup-events "HTTP/1.1 200 OK"
2024-08-07 15:06:44 DEBUG [httpx] load_ssl_context verify=False cert=None trust_env=True http2=False
2024-08-07 15:06:44 INFO [httpx] HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
To create a public link, set `share=True` in `launch()`.
gradio launch: done in 1.483s
add APIs: done in 0.012s
app_started_callback:
lora_script.py: done in 0.001s
controlnet.py: done in 0.006s
Startup time: 21.4s (prepare environment: 3.4s, launcher: 2.9s, import torch: 3.8s, setup paths: 1.2s, initialize shared: 0.2s, other imports: 1.3s, load scripts: 3.3s, create ui: 3.6s, gradio launch: 1.5s).
To load target model JointCLIP
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "D:\AI\NS02\launch.py", line 51, in <module>
main()
File "D:\AI\NS02\launch.py", line 47, in main
start()
File "D:\AI\NS02\modules\launch_utils.py", line 549, in start
main_thread.loop()
File "D:\AI\NS02\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\AI\NS02\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\AI\NS02\modules\sd_models.py", line 569, in get_sd_model
load_model()
File "D:\AI\NS02\modules\sd_models.py", line 700, in load_model
sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
File "D:\AI\NS02\modules\sd_models.py", line 596, in get_empty_cond
d = sd_model.get_learned_conditioning([""])
File "D:\AI\NS02\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\modules\sd_hijack_clip.py", line 313, in forward
return super().forward(texts)
File "D:\AI\NS02\modules\sd_hijack_clip.py", line 227, in forward
z = self.process_tokens(tokens, multipliers)
File "D:\AI\NS02\modules\sd_hijack_clip.py", line 269, in process_tokens
z = self.encode_with_transformers(tokens)
File "D:\AI\NS02\modules_forge\forge_clip.py", line 24, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
return self.text_model(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
encoder_outputs = self.encoder(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
layer_outputs = encoder_layer(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
hidden_states = self.layer_norm1(hidden_states)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\backend\operations.py", line 132, in forward
return super().forward(x)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\normalization.py", line 196, in forward
return F.layer_norm(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\functional.py", line 2543, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
Stable diffusion model failed to load
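For anyone debugging the traceback above: the failure is the classic PyTorch device mismatch, where a `LayerNorm`'s weights are on one device while the activations are on another. The snippet below is a minimal illustration of the failure mode and the generic remedy (keep the module on the input's device); it is not Forge's actual code, and the variable names are made up:

```python
import torch
import torch.nn as nn

norm = nn.LayerNorm(8)   # weights live on CPU by default
x = torch.randn(2, 8)    # activations, also CPU here

# If the weights end up on CUDA while x stays on CPU (or vice versa),
# calling norm(x) raises the same RuntimeError seen in the log:
#   "Expected all tensors to be on the same device, but found at least
#    two devices, cuda:0 and cpu!"

# The generic remedy is to move the module to the input's device first:
norm = norm.to(x.device)
out = norm(x)
print(out.shape)  # torch.Size([2, 8])
```

In Forge's case the mismatch comes from its model-management code leaving the CLIP text encoder partially on CPU, which is why rolling back to an earlier commit (where that code behaved correctly) resolves it.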
2024-08-07 15:07:15 DEBUG [matplotlib.pyplot] Loaded backend tkagg version 8.6.
2024-08-07 15:07:15 DEBUG [matplotlib.pyplot] Loaded backend agg version v2.2.
2024-08-07 15:07:15 DEBUG [git.cmd] Popen(['git', 'remote', 'get-url', '--all', 'origin'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=None)
2024-08-07 15:07:15 DEBUG [git.cmd] Popen(['git', 'cat-file', '--batch-check'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=<valid stream>)
2024-08-07 15:07:15 DEBUG [git.cmd] Popen(['git', 'cat-file', '--batch'], cwd=D:\AI\NS02, universal_newlines=False, shell=None, istream=<valid stream>)
2024-08-07 15:07:26 INFO [modules.shared_state] Starting job task(4ipd2kef90vp67y)
Loading weights [15012c538f] from D:\AI\NS02\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
Skipped: unet = diffusers.UNet2DConditionModel
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
model_type EPS
UNet ADM Dimension 0
To load target model JointCLIP
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
Traceback (most recent call last):
File "D:\AI\NS02\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\AI\NS02\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\AI\NS02\modules\txt2img.py", line 110, in txt2img_function
processed = processing.process_images(p)
File "D:\AI\NS02\modules\processing.py", line 805, in process_images
sd_models.reload_model_weights()
File "D:\AI\NS02\modules\sd_models.py", line 714, in reload_model_weights
return load_model(info)
File "D:\AI\NS02\modules\sd_models.py", line 700, in load_model
sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
File "D:\AI\NS02\modules\sd_models.py", line 596, in get_empty_cond
d = sd_model.get_learned_conditioning([""])
File "D:\AI\NS02\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\modules\sd_hijack_clip.py", line 313, in forward
return super().forward(texts)
File "D:\AI\NS02\modules\sd_hijack_clip.py", line 227, in forward
z = self.process_tokens(tokens, multipliers)
File "D:\AI\NS02\modules\sd_hijack_clip.py", line 269, in process_tokens
z = self.encode_with_transformers(tokens)
File "D:\AI\NS02\modules_forge\forge_clip.py", line 24, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
return self.text_model(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
encoder_outputs = self.encoder(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
layer_outputs = encoder_layer(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
hidden_states = self.layer_norm1(hidden_states)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\NS02\backend\operations.py", line 132, in forward
return super().forward(x)
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\modules\normalization.py", line 196, in forward
return F.layer_norm(
File "D:\AI\NS02\venv\lib\site-packages\torch\nn\functional.py", line 2543, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
2024-08-07 15:07:28 INFO [modules.shared_state] Ending job task(4ipd2kef90vp67y) (2.29 seconds)
*** Error completing request
*** Arguments: ('task(4ipd2kef90vp67y)', <gradio.route_utils.Request object at 0x0000013088D03400>, 'cat', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', None, 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 
0.5, 2, False, 3, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', '', 0, '', '', 0, '', '', True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\AI\NS02\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
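The trailing `TypeError: 'NoneType' object is not iterable` is a secondary symptom, not the root cause: `call_queue.py` does `res = list(func(*args, **kwargs))`, and once the model has failed to load the wrapped function returns `None`, so `list(None)` raises. A defensive wrapper (hypothetical, not Forge's code) would surface the real error instead:

```python
def safe_call(func, *args, **kwargs):
    # Mirrors call_queue.py's `res = list(func(*args, **kwargs))`, but with an
    # explicit None check so the root cause is reported instead of a TypeError.
    res = func(*args, **kwargs)
    if res is None:
        raise RuntimeError(
            "generation returned no result; see the earlier model-load error"
        )
    return list(res)

print(safe_call(lambda: (1, 2)))  # [1, 2]
```

In practice this means the `TypeError` can be ignored: fixing the earlier device-mismatch (or rolling back) makes it disappear as well.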
And I also hope that it will be fixed.
An error occurs when using flux1-dev-fp8.safetensors or flux1-dev-bnb-nf4.safetensors:
[Memory Management] Current Free GPU Memory: 10827.56 MB
[Memory Management] Required Model Memory: 5154.62 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4648.94 MB
Moving model(s) has taken 1.51 seconds
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 37, in loop
task.work()
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/txt2img.py", line 110, in txt2img_function
processed = processing.process_images(p)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 809, in process_images
res = process_images_inner(p)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 922, in process_images_inner
p.setup_conds()
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 1507, in setup_conds
super().setup_conds()
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 494, in setup_conds
self.c = self.get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, total_steps, [self.cached_c], self.extra_network_data)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 463, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/prompt_parser.py", line 262, in get_multicond_learned_conditioning
learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/prompt_parser.py", line 189, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/diffusion_engine/flux.py", line 79, in get_learned_conditioning
cond_t5 = self.text_processing_engine_t5(prompt)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/text_processing/t5_engine.py", line 123, in __call__
z = self.process_tokens([tokens], [multipliers])[0]
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/text_processing/t5_engine.py", line 134, in process_tokens
z = self.encode_with_transformers(tokens)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/text_processing/t5_engine.py", line 60, in encode_with_transformers
z = self.text_encoder(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 205, in forward
return self.encoder(x, *args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 186, in forward
x, past_bias = l(x, mask, past_bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 162, in forward
x, past_bias = self.layer[0](x, mask, past_bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 149, in forward
output, past_bias = self.SelfAttention(self.layer_norm(x), mask=mask, past_bias=past_bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 138, in forward
out = attention_function(q, k * ((k.shape[-1] / self.num_heads) ** 0.5), v, self.num_heads, mask)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/attention.py", line 314, in attention_xformers
mask_out[:, :, :mask.shape[-1]] = mask
RuntimeError: The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256]. Tensor sizes: [64, 256, 256]
The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256]. Tensor sizes: [64, 256, 256]
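The shape clash in this traceback: `attention_xformers` allocates its `mask_out` buffer with batch size 1, but the T5 attention mask has already been expanded to batch × heads (64), so the broadcast assignment fails. The sketch below reproduces the mismatch with illustrative tensors and shows that sizing the buffer from the mask's leading dimension avoids it; this is an assumption about the fix, not Forge's actual patch:

```python
import torch

heads, seq = 64, 256
mask = torch.zeros(heads, seq, seq)     # [64, 256, 256], as in the error

bad = torch.empty(1, seq, seq)          # [1, 256, 256] buffer
# bad[:, :, :mask.shape[-1]] = mask     # RuntimeError: expanded size (1)
#                                       # must match existing size (64)

# Allocating the buffer with the mask's leading dimension works:
mask_out = torch.empty(mask.shape[0], seq, seq)
mask_out[:, :, :mask.shape[-1]] = mask
print(mask_out.shape)  # torch.Size([64, 256, 256])
```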
*** Error completing request
*** Arguments: ('task(nubn3pyouehe2d7)', <gradio.route_utils.Request object at 0x7f23b6288d90>, '1girl', '', [], 1, 1, 1, 3.5, 1152, 896, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', None, 0, 20, 'Euler', 'Simple', False, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 
3, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', '', 0, '', '', 0, '', '', True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
I run ComfyUI in the same Python environment with flux1-dev-fp8.safetensors or flux1-dev-bnb-nf4.safetensors, and everything works fine there, which indicates the issue is not caused by the environment itself.
/root/autodl-tmp/ComfyUI
Total VRAM 11004 MB, total RAM 384809 MB
pytorch version: 2.3.1+cu118
xformers version: 0.0.27+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
Using xformers cross attention
[Prompt Server] web root: /root/autodl-tmp/ComfyUI/web
Successfully imported spandrel_extra_arches: support for non commercial upscale models.
Import times for custom nodes:
0.0 seconds: /root/autodl-tmp/ComfyUI/custom_nodes/websocket_image_save.py
0.0 seconds: /root/autodl-tmp/ComfyUI/custom_nodes/AIGODLIKE-COMFYUI-TRANSLATION
0.0 seconds: /root/autodl-tmp/ComfyUI/custom_nodes/ComfyUI_bitsandbytes_NF4
Starting server
To see the GUI go to: http://127.0.0.1:6006/
got prompt
model weight dtype torch.bfloat16, manual cast: torch.float16
model_type FLUX
Using xformers attention in VAE
Using xformers attention in VAE
/root/miniconda3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Requested to load FluxClipModel_
Loading 1 new model
Requested to load Flux
Loading 1 new model
100%|███████████████████████████████████████████| 20/20 [00:37<00:00, 1.89s/it]
Requested to load AutoencodingEngine
Loading 1 new model
Prompt executed in 53.73 seconds
got prompt
Requested to load Flux
Loading 1 new model
100%|███████████████████████████████████████████| 20/20 [00:37<00:00, 1.89s/it]
Prompt executed in 44.13 seconds
got prompt
It worked perfectly with the previous update that introduced the new UI. But after yesterday's update I get this error on startup:
"RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)" followed by "Stable diffusion model failed to load".
And if I try to generate an image, I get: TypeError: 'NoneType' object is not iterable
Thank you for any help