lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0
7.58k stars 730 forks

TypeError: 'NoneType' object is not iterable #1015

Closed Set-PP closed 1 month ago

Set-PP commented 1 month ago

When I try to make an image using flux1-dev-bnb-nf4.safetensors, it shows TypeError: 'NoneType' object is not iterable.

I am using webui_forge_cu124_torch24 on a Core i9 with 32 GB RAM and an RTX 3070 Ti GPU.

Screenshot 2024-08-12 133924

protector131090 commented 1 month ago

same here

tritant commented 1 month ago

> When I try to make an image using flux1-dev-bnb-nf4.safetensors, it shows TypeError: 'NoneType' object is not iterable.
>
> I am using webui_forge_cu124_torch24 on a Core i9 with 32 GB RAM and an RTX 3070 Ti GPU.

Probably because you are using xformers; try removing it.

therootx commented 1 month ago

> When I try to make an image using flux1-dev-bnb-nf4.safetensors, it shows TypeError: 'NoneType' object is not iterable. I am using webui_forge_cu124_torch24 on a Core i9 with 32 GB RAM and an RTX 3070 Ti GPU.

> Probably because you are using xformers; try removing it.

I also have the same issue. I was running with xformers, but removing it did not solve the problem; still the same error.

Lenanetka commented 1 month ago

MacBook Air M1 with 16 GB RAM. The Auto1111 web UI works for me. I run the Forge web UI with the "./webui.sh" command; the web UI opens, but generation always fails with the same error. I'm too much of a noob to tell whether I did something wrong. Here is what I get with all default settings and just one word in the prompt.

(Screenshot_2024_08_12_10_09_29, Screenshot_2024_08_12_10_13_01)

```
Last login: Mon Aug 12 10:02:14 on ttys015
user@MacBook-Air stable-diffusion-webui-forge % ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on user user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.3.9.4)]
Version: f2.0.1v1.10.1-previous-247-g9f6d263c3
Commit hash: 9f6d263c3f3e07f2e7874263d36b528b887b3b26
Installing requirements
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run pip install insightface manually.
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
Total VRAM 16384 MB, total RAM 16384 MB
pytorch version: 2.3.1
Set vram state to: SHARED
Device: mps
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention
Using split attention for VAE
ControlNet preprocessor location: /Volumes/AlphaBoss/stable-diffusion-webui-forge/models/ControlNetPreprocessor
2024-08-12 10:08:42,459 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': '/Volumes/AlphaBoss/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'vae_filename': None, 'unet_storage_dtype': None}
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 10.6s (prepare environment: 2.3s, launcher: 1.9s, import torch: 2.0s, initialize shared: 0.2s, other imports: 1.1s, load scripts: 0.9s, create ui: 1.1s, gradio launch: 0.9s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Loading Model: {'checkpoint_info': {'filename': '/Volumes/AlphaBoss/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'vae_filename': None, 'unet_storage_dtype': None}
StateDict Keys: {'transformer': 2350, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.bfloat16}
Model loaded in 10.4s (unload existing model: 0.2s, load state dict: 0.2s, forge model load: 10.0s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model ModuleDict
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
Distilled CFG Scale: 3.5
To load target model KModel
Begin to load 1 model
Moving model(s) has taken 52.71 seconds
  0%|          | 0/20 [00:00<?, ?it/s]huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either:
```

HelloWarcraft commented 1 month ago

I solved the problem with the steps below.

Step 1: change torch

According to the README:

> Some other CUDA/Torch Versions:
>
> Forge with CUDA 12.1 + Pytorch 2.3.1 <- Recommended
>
> Forge with CUDA 12.4 + Pytorch 2.4 <- Fastest, but MSVC may be broken, xformers may not work
>
> Forge with CUDA 12.1 + Pytorch 2.1 <- the previously used old environments

pip install torch==2.3.1+cu121 torchvision==0.18.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121

Step 2: uninstall xformers

pip uninstall xformers

Step 3: launch the web UI

cd ./stable-diffusion-webui-forge
python launch.py

If you don't want to uninstall xformers, you only need two steps:

pip install torch==2.3.1+cu121 torchvision==0.18.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
cd ./stable-diffusion-webui-forge
python launch.py --disable-xformers

If you are a Windows user, you can edit run.bat, update.bat, or environment.bat in a text editor to change the torch version.


This command also works: pip install torch==2.4.0+cu121 torchvision==0.19.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
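After running the steps above, it can help to confirm what actually ended up in the venv. A minimal sketch using only the standard library; the package names are the real PyPI names mentioned in this thread, but the helper function itself is hypothetical, not part of Forge:

```python
# Sketch: report installed versions of the packages this thread keeps
# mentioning. Standard library only; run it inside the Forge venv.
from importlib import metadata


def installed_version(package: str):
    """Return the installed version string, or None if not installed."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


if __name__ == "__main__":
    for pkg in ("torch", "torchvision", "xformers", "bitsandbytes"):
        ver = installed_version(pkg)
        print(f"{pkg}: {ver if ver else 'not installed'}")
```

If xformers prints a version while torch reports a cu124 build, that matches the failing setups described above.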

jokero3answer commented 1 month ago

Doesn't --disable-xformers cause a serious drop in raw speed? @HelloWarcraft

HelloWarcraft commented 1 month ago

> Doesn't --disable-xformers cause a serious drop in raw speed? @HelloWarcraft

xformers really does accelerate image generation, but it triggers this error here. That is to say, you cannot generate images successfully with xformers right now.

I think the author will fix the problem soon.

therootx commented 1 month ago

I just changed the model I was using and updated Torch from 2.3.1+cu118 to 2.3.1+cu121, but didn't disable xformers, and it worked.

HelloWarcraft commented 1 month ago

> I just changed the model I was using and updated Torch from 2.3.1+cu118 to 2.3.1+cu121, but didn't disable xformers, and it worked.

Nice! Have you installed xformers in your environment? I tried python launch.py --xformers with torch==2.3.1+cu121 just now and still got the NoneType error. So I think changing torch from cu124 to cu121 and removing or disabling xformers may both be important.

therootx commented 1 month ago

> I just changed the model I was using and updated Torch from 2.3.1+cu118 to 2.3.1+cu121, but didn't disable xformers, and it worked.

> Nice! Have you installed xformers in your environment? I tried python launch.py --xformers with torch==2.3.1+cu121 just now and still got the NoneType error. So I think changing torch from cu124 to cu121 and removing or disabling xformers may both be important.

No, I didn't add --xformers to the launch arguments, but I didn't uninstall xformers either. Torch and, I think, using the correct model are important. My fp8 model was incorrect: I had the 11 GB one and changed it to the 17 GB one.

jokero3answer commented 1 month ago

The other three options work (everything except xformers), but they run super slow!

HelloWarcraft commented 1 month ago

> No, I didn't add --xformers to the launch arguments, but I didn't uninstall xformers either. Torch and, I think, using the correct model are important. My fp8 model was incorrect: I had the 11 GB one and changed it to the 17 GB one.

That's fine since it works. 😄 We can wait for Lvmin Zhang to optimize Forge 2.0.

HelloWarcraft commented 1 month ago

> If your device supports CUDA newer than 11.7, then you can use NF4. (Most RTX 3XXX/4XXX GPUs support this.) Congratulations. Enjoy the speed. In this case, you only need to download flux1-dev-bnb-nf4.safetensors.
>
> If your GPU is a GTX 10XX/20XX, your device may not support NF4; please download flux1-dev-fp8.safetensors instead.

info from https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981

(Bro, why not give Kaggle a try? Kaggle gives you two T4 graphics cards, each with 16 GB VRAM.)
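The quoted rule of thumb can be written down as a tiny decision function. This is only a sketch: the CUDA 11.7 threshold comes from the quoted discussion, the checkpoint filenames are the real ones from this thread, and the function itself is hypothetical, not Forge code:

```python
# Sketch encoding the rule quoted above: NF4 needs CUDA newer than 11.7
# (most RTX 30xx/40xx cards); older cards should fall back to the fp8
# checkpoint. Hypothetical helper, not part of Forge.
def pick_flux_checkpoint(cuda_version: tuple) -> str:
    """Choose a Flux checkpoint based on the supported CUDA version."""
    if cuda_version > (11, 7):
        return "flux1-dev-bnb-nf4.safetensors"
    return "flux1-dev-fp8.safetensors"


print(pick_flux_checkpoint((12, 1)))  # RTX 3070 Ti class -> flux1-dev-bnb-nf4.safetensors
print(pick_flux_checkpoint((11, 4)))  # older GTX class  -> flux1-dev-fp8.safetensors
```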

Set-PP commented 1 month ago

Solved the problem. I tried deleting flux1-dev-bnb-nf4.safetensors and downloading it again, and I reduced GPU Weights (MB) to 6000 MB. Now it runs smoothly.

xam0482 commented 1 month ago

I'm not sure, but after updating that Python package it works normally. Most likely it was diffusers-0.29.2.

(Screenshot 2024-08-12 200418)

xam0482 commented 1 month ago

The bnb NF4 model requires the dependency bitsandbytes>=0.43.3 to be installed.
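That minimum-version requirement is easy to check programmatically. A minimal sketch: the naive dotted-integer parsing is enough for bitsandbytes-style version strings, and the helper is hypothetical, not part of Forge or bitsandbytes:

```python
# Sketch of the version check implied above: bnb NF4 checkpoints need
# bitsandbytes >= 0.43.3. Naive dotted-integer comparison; does not
# handle pre-release suffixes.
MIN_BNB = (0, 43, 3)


def version_ok(version: str, minimum: tuple = MIN_BNB) -> bool:
    """True if a dotted version string meets the minimum version."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= minimum


print(version_ok("0.43.3"))  # True
print(version_ok("0.42.0"))  # False
```

If the check fails, pip install -U "bitsandbytes>=0.43.3" should satisfy the requirement.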

mhioi commented 3 weeks ago

> I solved the problem by the steps below: step1: change torch … (full solution quoted above)

worked for me! thanks!

DKPC69 commented 1 week ago

I think I found the problem most of you are having. Run update.bat, then download ae.safetensors and clip_l.safetensors and put those in your VAE folder, then download t5xxl_fp16.safetensors and put it in your text encoder folder. Make sure you have all three selected in the VAE / text encoder dropdown in the UI. This all worked for me; it's generating fine now.
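The checklist above can be sketched as a quick file-placement check. The directory layout here is an assumption based on a default Forge checkout (models/VAE and models/text_encoder under the repo root); adjust the paths to your own install:

```python
# Sketch: verify the three auxiliary files from the comment above are in
# place. The subfolder names are assumptions about a default Forge
# layout, not confirmed by this thread.
from pathlib import Path

# filename -> subfolder it should live in (per the comment above)
EXPECTED = {
    "ae.safetensors": "models/VAE",
    "clip_l.safetensors": "models/VAE",
    "t5xxl_fp16.safetensors": "models/text_encoder",
}


def missing_files(base_dir: str) -> list:
    """Return the expected files not present under base_dir."""
    base = Path(base_dir)
    return [name for name, sub in EXPECTED.items()
            if not (base / sub / name).is_file()]


if __name__ == "__main__":
    for name in missing_files("stable-diffusion-webui-forge"):
        print(f"missing: {name}")
```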