Closed Set-PP closed 1 month ago
same here
When I try to make an image using flux1-dev-bnb-nf4.safetensors it shows TypeError: 'NoneType' object is not iterable.
I am using webui_forge_cu124_torch24 on a Core i9 with 32 GB RAM and an RTX 3070 Ti graphics card.
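For context, this is the generic error Python raises when code iterates over a value that is None, typically because some loader returned None instead of a list. A minimal reproduction outside Forge (not Forge's actual code path):

```python
# Minimal reproduction of the error class: iterating over None
# raises exactly this TypeError.
def first_item(items):
    for item in items:  # fails here when items is None
        return item

try:
    first_item(None)
except TypeError as exc:
    print(exc)  # 'NoneType' object is not iterable
```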
Probably because you are using xformers; try removing it.
I also have the same issue. I was running with xformers, but removing it did not solve the problem; still the same error.
MacBook Air M1 with 16 GB RAM. Auto1111 web_UI is working for me. I run the Forge web UI with the "./webui.sh" command; the web UI opens, but generation always fails with the same error. I'm too much of a noob to understand if I did something wrong. Here is what I get with all default settings and just one word in the prompt. Last login: Mon Aug 12 10:02:14 on ttys015 user@MacBook-Air stable-diffusion-webui-forge % ./webui.sh
################################################################ Install script for stable-diffusion + Web UI Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer. ################################################################
################################################################ Running on user user ################################################################
################################################################ Repo already cloned, using it as install directory ################################################################
################################################################ Create and activate python venv ################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.3.9.4)]
Version: f2.0.1v1.10.1-previous-247-g9f6d263c3
Commit hash: 9f6d263c3f3e07f2e7874263d36b528b887b3b26
Installing requirements
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run pip install insightface
manually.
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
Total VRAM 16384 MB, total RAM 16384 MB
pytorch version: 2.3.1
Set vram state to: SHARED
Device: mps
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention
Using split attention for VAE
ControlNet preprocessor location: /Volumes/AlphaBoss/stable-diffusion-webui-forge/models/ControlNetPreprocessor
2024-08-12 10:08:42,459 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': '/Volumes/AlphaBoss/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'vae_filename': None, 'unet_storage_dtype': None}
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 10.6s (prepare environment: 2.3s, launcher: 1.9s, import torch: 2.0s, initialize shared: 0.2s, other imports: 1.1s, load scripts: 0.9s, create ui: 1.1s, gradio launch: 0.9s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Loading Model: {'checkpoint_info': {'filename': '/Volumes/AlphaBoss/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'vae_filename': None, 'unet_storage_dtype': None}
StateDict Keys: {'transformer': 2350, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.bfloat16}
Model loaded in 10.4s (unload existing model: 0.2s, load state dict: 0.2s, forge model load: 10.0s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model ModuleDict
Begin to load 1 model
Moving model(s) has taken 0.01 seconds
Distilled CFG Scale: 3.5
To load target model KModel
Begin to load 1 model
Moving model(s) has taken 52.71 seconds
0%| | 0/20 [00:00<?, ?it/s]huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)

I solved the problem by the steps below:
step1: change torch according to the Readme.
Some other CUDA/Torch Versions:
Forge with CUDA 12.1 + Pytorch 2.3.1 <- Recommended
Forge with CUDA 12.4 + Pytorch 2.4 <- Fastest, but MSVC may be broken, xformers may not work
Forge with CUDA 12.1 + Pytorch 2.1 <- the previously used old environments
pip install torch==2.3.1+cu121 torchvision==0.18.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
step2: uninstall xformers
pip uninstall xformers
step3: launch webUI
cd ./stable-diffusion-webui-forge
python launch.py
If you don't want to uninstall xformers, you should instead do these two steps:
pip install torch==2.3.1+cu121 torchvision==0.18.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
cd ./stable-diffusion-webui-forge
python launch.py --disable-xformers
If you are a Windows user, you can edit run.bat, update.bat or environment.bat in a text editor to change the torch version.
This command also works: pip install torch==2.4.0+cu121 torchvision==0.19.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
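The steps above pin torch to a cu121 build. A small helper to confirm what torch.__version__ actually reports after reinstalling (a sketch for checking the version string, not part of Forge):

```python
def cuda_tag(torch_version):
    """Return the CUDA build tag ('cu121', 'cu124', ...) from a torch
    version string, or None for builds without a local tag (CPU/MPS)."""
    _, _, local = torch_version.partition("+")
    return local or None
```

Usage: `print(cuda_tag(torch.__version__))` inside the Forge venv; after the reinstall above it should report 'cu121'.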
Doesn't --disable-xformers cause a serious drop in raw speed? @HelloWarcraft
Xformers really does well in accelerating image generation, but it causes this error here. That is to say, you cannot generate images successfully when using xformers right now.
I think the author will solve the problem soon.
I just changed the model I've used and updated Torch 2.3.1+cu118 to 2.3.1+cu121 but didn't disable xformers and it worked.
Nice. Have you installed xformers in your environment? I tried python launch.py --xformers with torch==2.3.1+cu121 just now, and got a NoneType error. So I thought changing torch from cu124 to cu121 and removing or disabling xformers might both be important.
No, I didn't add --xformers to the environment, but I didn't uninstall it either. Torch and, I think, correct model usage are important. My fp8 model was incorrect: I had the 11 GB one and changed it to the 17 GB one.
The other three work except for xformers, but they run super slow!
That's fine since it works. 😄 We can wait for Zhang lvmin to optimize Forge 2.0.
If your device supports CUDA newer than 11.7, then you can use NF4 (most RTX 3XXX/4XXX GPUs support this). Congratulations, enjoy the speed. In this case, you only need to download flux1-dev-bnb-nf4.safetensors.
If your GPU is a GTX 10XX/20XX, then your device may not support NF4; please download flux1-dev-fp8.safetensors instead.
info from https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
(Bro, why not give Kaggle a try? Kaggle gives you two T4 graphics cards, both with 16 GB VRAM.)
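The GPU-generation rule above can be sketched as a compute-capability check: RTX 3XXX/4XXX cards report capability major 8 or higher, while GTX 10XX/20XX report lower. pick_flux_checkpoint is a hypothetical helper, not Forge code:

```python
def pick_flux_checkpoint(capability):
    """Choose a Flux checkpoint from a (major, minor) CUDA compute
    capability tuple, e.g. torch.cuda.get_device_capability(0).
    RTX 3XXX/4XXX cards (major >= 8) handle NF4; older GTX 10XX/20XX
    cards should fall back to fp8."""
    major, _minor = capability
    if major >= 8:
        return "flux1-dev-bnb-nf4.safetensors"
    return "flux1-dev-fp8.safetensors"

print(pick_flux_checkpoint((8, 6)))  # RTX 3070 Ti -> flux1-dev-bnb-nf4.safetensors
print(pick_flux_checkpoint((6, 1)))  # GTX 1080   -> flux1-dev-fp8.safetensors
```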
Solved the problem: I deleted flux1-dev-bnb-nf4.safetensors and downloaded it again, and I reduced GPU Weight (MB) to 6000 MB. Now it runs smoothly.
I'm not sure, but after updating that Python package it works now; most likely it was diffusers-0.29.2.
The bnb NF4 model requires the dependency bitsandbytes>=0.43.3.
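A quick way to verify the bitsandbytes>=0.43.3 requirement from a Python prompt; version_ok is a hypothetical helper for plain numeric version strings, not Forge's actual dependency check:

```python
def version_ok(installed, required="0.43.3"):
    """Compare dotted numeric version strings component-wise,
    so that e.g. 0.43.10 correctly sorts above 0.43.3."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)
```

Usage: `import bitsandbytes; version_ok(bitsandbytes.__version__)` should return True.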
worked for me! thanks!
I think I found the problem most of you are having: run update.bat, then download ae.safetensors and clip_l.safetensors and put those in your VAE folder, then download t5xxl_fp16.safetensors and put it in your text encoder folder. Make sure you have all three selected in the VAE/text encoder dropdown in your UI. This all worked for me; generating fine now.
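The file placement above can be double-checked with a small script. The folder names below assume a standard Forge layout; adjust the root path to your own install (a sketch, not Forge code):

```python
from pathlib import Path

# Required files and their expected locations, per the post above.
REQUIRED = (
    "models/VAE/ae.safetensors",
    "models/VAE/clip_l.safetensors",
    "models/text_encoder/t5xxl_fp16.safetensors",
)

def missing_files(root):
    """Return the required VAE/text-encoder files absent under root."""
    root = Path(root)
    return sorted(rel for rel in REQUIRED if not (root / rel).exists())
```

Usage: `missing_files("./stable-diffusion-webui-forge")` should return an empty list once all three files are in place.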