lllyasviel / stable-diffusion-webui-forge


TypeError: 'NoneType' object is not iterable #950

Closed: Whitesilence1 closed this issue 1 month ago

Whitesilence1 commented 1 month ago

I can't run any generation after updating yesterday. Please help.

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f1.0.2v1.10.1-previous-168-gf743fbff
Commit hash: f743fbff83c7db0bf0957ab9718f8d42a47eb35e
Launching Web UI with arguments:
Total VRAM 4096 MB, total RAM 15741 MB
pytorch version: 2.1.2+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 Ti : native
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Stream Activated: False
E:\Forge\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: E:\Forge\webui\models\ControlNetPreprocessor
2024-08-07 13:38:15,629 - ControlNet - INFO - ControlNet UI callback registered.
Loading weights [4c276562ac] from E:\Forge\webui\models\Stable-diffusion\HassakuSnow.fp16.safetensors
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
StateDict Keys: {'unet': 1680, 'vae': 248, 'text_encoder': 197, 'text_encoder_2': 518, 'ignore': 0}
Startup time: 38.9s (prepare environment: 12.4s, launcher: 5.2s, import torch: 7.0s, initialize shared: 0.3s, other imports: 4.3s, list SD models: 0.1s, load scripts: 3.8s, create ui: 2.9s, gradio launch: 3.1s).
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float32, 'manual_cast': True}
Model loaded in 170.2s (calculate hash: 0.1s, load weights from disk: 3.0s, forge model load: 166.8s, load VAE: 0.2s).
To load target model ModuleDict
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
Traceback (most recent call last):
  File "E:\Forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\Forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\Forge\webui\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "E:\Forge\webui\modules\processing.py", line 790, in process_images
    res = process_images_inner(p)
  File "E:\Forge\webui\modules\processing.py", line 912, in process_images_inner
    p.setup_conds()
  File "E:\Forge\webui\modules\processing.py", line 1497, in setup_conds
    super().setup_conds()
  File "E:\Forge\webui\modules\processing.py", line 486, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "E:\Forge\webui\modules\processing.py", line 461, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "E:\Forge\webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "E:\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Forge\webui\backend\diffusion_engine\sdxl.py", line 85, in get_learned_conditioning
    cond_l = self.text_processing_engine_l(prompt)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 264, in __call__
    z = self.process_tokens(tokens, multipliers)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 297, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 127, in encode_with_transformers
    outputs = self.text_encoder.transformer(tokens, output_hidden_states=True)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 986, in forward
    return self.text_model(
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 877, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 225, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 33, in forward
    inputs_embeds = self.wrapped(input_ids)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui\backend\operations.py", line 273, in forward
    return super().forward(x)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "E:\Forge\system\python\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
*** Error completing request
*** Arguments: ('task(ynlwh0culivq5dx)', <gradio.route_utils.Request object at 0x0000017A3B77E500>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', None, 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1.0, image=None, image_fg=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1.0, image=None, image_fg=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 3, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', '', 0, '', '', 0, '', '', True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "E:\Forge\webui\modules\call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
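
Note that the headline TypeError at the bottom is secondary: the worker thread catches the real RuntimeError, the txt2img task therefore returns None, and list(None) inside call_queue.py is what raises 'NoneType' object is not iterable. The underlying device mismatch can be reproduced outside Forge with a few lines of plain PyTorch; the sketch below is an illustration under assumed CLIP-like sizes, not Forge's code:

import torch
import torch.nn as nn

# Embedding weights left on the CPU, like a text encoder that was never moved to the GPU.
token_embedding = nn.Embedding(num_embeddings=49408, embedding_dim=768)

if torch.cuda.is_available():
    # Token ids already on cuda:0, like the tokenized prompt in the traceback above.
    token_ids = torch.tensor([[49406, 320, 49407]], device="cuda:0")
    try:
        token_embedding(token_ids)  # F.embedding -> torch.embedding, as in the last frame
    except RuntimeError as e:
        print(e)  # Expected all tensors to be on the same device ... cpu and cuda:0!
    # Keeping weights and inputs on the same device is the general fix:
    token_embedding.to("cuda:0")
    print(token_embedding(token_ids).shape)  # torch.Size([1, 3, 768])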

Dadpoole commented 1 month ago

I've still been having the same issue through the past 3 or 4 updates.

mart-hill commented 1 month ago

I used git checkout cfe91791025511af46e8f5e08a3b98656f1a032d as a 'workaround' for now, until things settle. 🙂

atson100 commented 1 month ago

git checkout cfe91791025511af46e8f5e08a3b98656f1a032d

Dude, thanks a lot! I was getting depressed because the error hadn't been fixed by any update so far, and if it had persisted I would have had to reinstall everything. But your solution brought everything back to the pre-update state without any pain! I learned something more about GitHub too. Thanks a lot!

Dreamz-Dziner commented 1 month ago

I used git checkout cfe91791025511af46e8f5e08a3b98656f1a032d as a 'workaround' for now, until things settle. 🙂

Thanks man. I've been trying to update Forge to various commits for a week but was always getting this stupid error. The commit above worked, and my generation speed also seems super fast. :-)

mart-hill commented 1 month ago

There's a small bug with the image button in the cfe91791025511af46e8f5e08a3b98656f1a032d commit:

TypeError: tuple indices must be integers or slices, not str
tuple indices must be integers or slices, not str

...and then a wall of text; the result of the 'enhancing' is still saved, though, so it only spooks the user. 🙂

Once the recent Forge updates make things stable, you'll have to run git checkout main to return to the branch that can receive updates, because checking out [commit hash number] left our installations "frozen" at that commit (a detached HEAD, in git terms). 🙂

lllyasviel commented 1 month ago

update and test again

Whitesilence1 commented 1 month ago

update and test again

Nope, still getting the error:

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f1.0.2v1.10.1-previous-189-ge396307e
Commit hash: e396307e9dd2d6654dd1777a7296a3912722e807
Fetching updates for huggingface_guess...
Checking out commit for huggingface_guess with hash: 3f96b28763515dbe609792135df3615a440c66dc...
Previous HEAD position was aebabb9 Update __init__.py
HEAD is now at 3f96b28 Update __init__.py
Installing requirements
Launching Web UI with arguments:
Total VRAM 4096 MB, total RAM 15741 MB
pytorch version: 2.1.2+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 Ti : native
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Stream Activated: False
E:\Forge\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: E:\Forge\webui\models\ControlNetPreprocessor
Tag Autocomplete: Cannot reload embeddings instantly: module 'modules.sd_hijack' has no attribute 'model_hijack'
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 11
2024-08-08 12:41:12,532 - ControlNet - INFO - ControlNet UI callback registered.
Loading weights [671e388fb9] from E:\Forge\webui\models\Stable-diffusion\RainyDayMix.fp16.safetensors
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
StateDict Keys: {'unet': 1680, 'vae': 248, 'text_encoder': 197, 'text_encoder_2': 518, 'ignore': 0}
Startup time: 80.2s (prepare environment: 52.1s, launcher: 5.2s, import torch: 6.3s, initialize shared: 0.3s, other imports: 3.0s, list SD models: 0.1s, load scripts: 5.4s, create ui: 3.4s, gradio launch: 4.2s).
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float32, 'manual_cast': True}
Loading VAE weights specified in settings: E:\Forge\webui\models\VAE\sdxl_vae.safetensors
tag_autocomplete_helper: Old webui version or unrecognized model shape, using fallback for embedding completion.
Model loaded in 178.6s (calculate hash: 0.3s, load weights from disk: 3.9s, forge model load: 166.0s, load VAE: 8.5s).
To load target model ModuleDict
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
Traceback (most recent call last):
  File "E:\Forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\Forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\Forge\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "E:\Forge\webui\modules\processing.py", line 802, in process_images
    res = process_images_inner(p)
  File "E:\Forge\webui\modules\processing.py", line 924, in process_images_inner
    p.setup_conds()
  File "E:\Forge\webui\modules\processing.py", line 1509, in setup_conds
    super().setup_conds()
  File "E:\Forge\webui\modules\processing.py", line 491, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "E:\Forge\webui\modules\processing.py", line 462, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "E:\Forge\webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "E:\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Forge\webui\backend\diffusion_engine\sdxl.py", line 85, in get_learned_conditioning
    cond_l = self.text_processing_engine_l(prompt)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 264, in __call__
    z = self.process_tokens(tokens, multipliers)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 297, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "E:\Forge\webui\backend\text_processing\classic_engine.py", line 127, in encode_with_transformers
    outputs = self.text_encoder.transformer(tokens, output_hidden_states=True)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 986, in forward
    return self.text_model(
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 890, in forward encoder_outputs = self.encoder( File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 813, in forward layer_outputs = encoder_layer( File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, kwargs) File "E:\Forge\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 547, in forward hidden_states = self.layer_norm1(hidden_states) File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, kwargs) File "E:\Forge\webui\backend\operations.py", line 256, in forward return super().forward(x) File "E:\Forge\system\python\lib\site-packages\torch\nn\modules\normalization.py", line 196, in forward return F.layer_norm( File "E:\Forge\system\python\lib\site-packages\torch\nn\functional.py", line 2543, in layer_norm return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' "LayerNormKernelImpl" not implemented for 'Half' Error completing request Arguments: ('task(kfhrpnv0o7a3o9h)', <gradio.route_utils.Request object at 0x000001FC8CB6A9E0>, 'score_9, score_7_up, dutch angle, 1girl, medium hair, white hair, two side up, hair tie, hair over one eye, one eye covered, red eyes, white pupils, mole under eye, (bags under eyes:0.9), black hoodie, center opening, zipper pull tab, medium breasts, no bra, collarbone, black choker, cleavage, bottomless, (striped thighhighs:1.1), black thighhighs, white thighhighs, shiny clothes, thick thighs, standing, contrapposto, hands in pockets, expressionless, parted lips, sharp teeth, red background, gradient background,', '', [], 1, 1, 6, 3.5, 1216, 832, True, 0.3, 1.35, '4x_foolhardy_Remacri', 0, 0, 0, 'Use same checkpoint', 'DPM++ 2M', 'Use same scheduler', '', '', [], 0, 25, 'Euler SMEA Dy', 'Align Your Steps', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 
'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 
'is_api': ()}, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 3, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "E:\Forge\webui\modules\call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
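
This run fails differently but for a related placement reason: the float16 LayerNorm of the CLIP encoder ends up executing on the CPU, and the logged torch 2.1.2 ships no half-precision LayerNorm kernel for the CPU. A minimal sketch in plain PyTorch (an illustration, not Forge's operations.py):

import torch
import torch.nn as nn

layer_norm = nn.LayerNorm(768).half()           # fp16 module left on the CPU
hidden_states = torch.randn(1, 77, 768).half()  # fp16 activations, also on the CPU

try:
    layer_norm(hidden_states)
except RuntimeError as e:
    # On CPU builds like the logged torch 2.1.2:
    # "LayerNormKernelImpl" not implemented for 'Half'
    print(e)

# The usual ways out: run that layer in float32 on the CPU, or keep the fp16
# module on a CUDA device, which does have half-precision kernels.
print(layer_norm.float()(hidden_states.float()).dtype)  # torch.float32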

TheTechGuyStudio commented 1 month ago

update and test again

Thank you so much, lllyasviel. I tested it; I had the same issue, and it's solved on this (latest for now) commit: 20e1ba4a82529f71a6524b537806f84bc6195cee

It also got super fast: I used to average 1~2 s/it with SD1.5, but now it's 1.06 it/s 😃 [LOW_VRAM]:

[Memory Management] Model Memory (MB) =  319.11416244506836
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1909.1192359924316
Moving model(s) has taken 0.85 seconds
Total progress: 100%|███████████████████████████████████████████████████████| 20/20 [00:19<00:00,  1.03it/s]
Total progress: 100%|███████████████████████████████████████████████████████| 20/20 [00:19<00:00,  1.16it/s]

For SDXL I had 3~4 s/it; now I get 2.83 s/it (these tests were made with a 512x512 image):

[Memory Management] Current Free GPU Memory (MB) =  3244.02978515625
[Memory Management] Model Memory (MB) =  319.11416244506836
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1900.9156227111816
Moving model(s) has taken 0.72 seconds
Total progress: 100%|███████████████████████████████████████████████████████| 20/20 [00:55<00:00,  2.79s/it]
Total progress: 100%|███████████████████████████████████████████████████████| 20/20 [00:55<00:00,  2.83s/it]
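
For anyone comparing these numbers: tqdm reports it/s when a step takes under a second and s/it otherwise, and the two units are reciprocals, so the SD1.5 change from roughly 1.5 s/it to 1.06 it/s is about a 1.6x speedup. (The memory-management lines also balance: 3244.03 - 319.11 - 1024.0 is approximately the 1900.92 MB estimated remaining.) A quick sanity check; the helper name here is hypothetical:

# A tqdm rate in s/it is the reciprocal of one in it/s.
def to_it_per_s(rate: float, unit: str) -> float:
    return rate if unit == "it/s" else 1.0 / rate

before = to_it_per_s(1.5, "s/it")  # midpoint of the reported 1~2 s/it
after = to_it_per_s(1.06, "it/s")
print(f"SD1.5: {after / before:.2f}x faster")  # ~1.59x
print(f"SDXL: {to_it_per_s(2.83, 's/it') / to_it_per_s(3.5, 's/it'):.2f}x faster")  # ~1.24x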

BTW, I saw some Flux and SD3 files and settings in the new update, but there is no model (of SD3 and/or Flux, AFAIK) available for Stable Diffusion, so what is that about? Is it for future support?

Whitesilence1 commented 1 month ago

Yes, it's working for me too now, on commit 87b0205d87ecb0f338156ded4e5cfce127acffc1. Thank you for your work, lllyasviel. Closing the issue.

physeo commented 1 week ago

Running into this same issue as of the latest commit.

webui_forge_cu121_torch231\webui\modules\txt2img.py", line 95, in txt2img_upscale_function
    fake_image.already_saved_as = image["name"].rsplit('?', 1)[0]
TypeError: tuple indices must be integers or slices, not str
tuple indices must be integers or slices, not str
*** Error completing request
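
The quoted line shows why this happens: gallery items handed to txt2img_upscale_function can arrive as dicts with a 'name' key from some Gradio versions but as plain (path, caption) tuples from others, and indexing a tuple with the string "name" raises exactly this TypeError. A defensive sketch of the idea (an assumed illustration, not the actual Forge patch):

def gallery_item_path(image):
    # Dict-shaped item, e.g. {"name": "out.png?1694000000"}, from some Gradio payloads.
    if isinstance(image, dict):
        name = image["name"]
    # Tuple/list-shaped item, e.g. ("out.png?1694000000", caption), from others.
    elif isinstance(image, (tuple, list)):
        name = image[0]
    else:
        name = str(image)
    return name.rsplit('?', 1)[0]  # strip any cache-busting query suffix

print(gallery_item_path({"name": "out.png?1694000000"}))    # out.png
print(gallery_item_path(("out.png?1694000000", "seed 1")))  # out.png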