Open Lopunny1 opened 1 year ago
Giving you a bump. I've got the exact same problem, but it seems like other people may not be using MultiDiffusion to upscale, so they didn't run into the same issue.
Same issue here; I've been trying to solve it for days. Could the repo be using noise inversion internally without tiled diffusion?
The issue is that MultiDiffusion expects a list of images and takes the first one to process, but the variable is just an image, not a list. I have to check why that is happening.
@vladmandic https://github.com/vladmandic/automatic/commit/567faeb751e4657d7caa7643439247188566a432 seems to be the guilty commit. It changed `init_images` to a single image while looping over images at line 1055. Changing it to `self.init_images[0] = image` solves the issue; however, I don't understand the comment, so I'd leave it to @vladmandic :-P (Does the `init_images` list always have only one element?)
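To make the mismatch concrete, here is a minimal runnable sketch; the `P` class and the loop body are toy stand-ins for the real img2img processing object and its per-image preprocessing, not actual sd.next code:

```python
from PIL import Image

class P:
    # Toy stand-in for the img2img processing object; extensions such as
    # MultiDiffusion expect init_images to remain a list of PIL images.
    def __init__(self, images):
        self.init_images = images

def buggy(p):
    # Paraphrase of the loop around modules/processing.py:1055 after the commit:
    for image in p.init_images:
        image = image.convert('RGB')  # stand-in for the real per-image processing
        p.init_images = image         # bug: the list attribute becomes one PIL image

def fixed(p):
    for image in p.init_images:
        image = image.convert('RGB')
        p.init_images[0] = image      # keeps init_images a list (assumes one element)

p = P([Image.new('RGBA', (8, 8))])
buggy(p)
print(type(p.init_images))   # <class 'PIL.Image.Image'> -- indexing [0] later misbehaves
p = P([Image.new('RGBA', (8, 8))])
fixed(p)
print(type(p.init_images))   # <class 'list'> -- MultiDiffusion can take [0] again
```

With a single-element list this restores the type that the extension relies on; for batches it would presumably need the loop index instead, which is what the question above is getting at.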
this was actually fixed a couple of days ago. @Nuullll you were on the right track - it should be a list, but there were other issues as well.
I think when I tried it yesterday it showed another error. I'll update SD.Next later and try again.
Will post the error if I see any.
```
18:48:26-983156 ERROR gradio call: AttributeError
Traceback (most recent call last):
  E:\automatic\modules\call_queue.py:34 in f
      33 │ try:
    ❱ 34 │     res = func(*args, **kwargs)
      35 │     progress.record_results(id_task, res)
  E:\automatic\modules\img2img.py:189 in img2img
     188 │ if processed is None:
   ❱ 189 │     processed = processing.process_images(p)
     190 │ p.close()
  E:\automatic\modules\processing.py:564 in process_images
     563 │ else:
   ❱ 564 │     res = process_images_inner(p)
     565 │ finally:
  E:\automatic\extensions-builtin\sd-webui-controlnet\scripts\batch_hijack.py:42 in processing_process_images_hijack
      41 │ # we are not in batch mode, fallback to original function
    ❱ 42 │ return getattr(processing, '__controlnet_original_process_images_inner')(p,
      43 │
  E:\automatic\modules\processing.py:686 in process_images_inner
     685 │ with devices.without_autocast() if devices.unet_needs_upcast else device
   ❱ 686 │     samples_ddim = p.sample(conditioning=c, unconditional_conditioning=u
     687 │ x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to(
  E:\automatic\modules\processing.py:1140 in sample
    1139 │ x *= self.initial_noise_multiplier
  ❱ 1140 │ samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, u
    1141 │ if self.mask is not None:
  E:\automatic\extensions-builtin\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py:243 in wrapper
     242 │ def wrapper(*args, **kwargs):
   ❱ 243 │     return fn(*args, **kwargs)
     244 │ return wrapper
  E:\automatic\extensions-builtin\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py:243 in wrapper
     242 │ def wrapper(*args, **kwargs):
   ❱ 243 │     return fn(*args, **kwargs)
     244 │ return wrapper
  E:\automatic\extensions-builtin\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py:577 in sample_img2img
     576 │ # convert to grayscale with PIL
   ❱ 577 │ image = image.convert('L')
     578 │ np_mask = get_retouch_mask(np.asarray(image), self.noise_inverse_renoise_ker
AttributeError: 'numpy.ndarray' object has no attribute 'convert'
```
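For context on the trace above: the extension's noise-inversion retouch step calls PIL's `convert('L')` on the init image, which after the regression arrives as a NumPy array instead of a PIL image. A minimal reproduction, plus a purely illustrative guard (an assumption on my part, not the fix that actually landed):

```python
import numpy as np
from PIL import Image

image = np.zeros((8, 8, 3), dtype=np.uint8)  # what the extension now receives

try:
    image.convert('L')  # what abstractdiffusion.py's sample_img2img attempts
except AttributeError as e:
    print(e)            # 'numpy.ndarray' object has no attribute 'convert'

# Hypothetical defensive guard: coerce arrays back to PIL before converting.
if isinstance(image, np.ndarray):
    image = Image.fromarray(image)
print(image.convert('L').mode)  # 'L'
```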
```
11:32:21-810139 ERROR Exception: 'numpy.ndarray' object has no attribute 'convert'
11:32:21-812952 ERROR Arguments: args=('task(mr64fgasmidfxdh)', 0, '1girl,looking at viewer,a field of
daffodils,blooming flowers,chirping birds,\nupper body,blue sky,\nmedium
breasts,excessively frilled princess dress,draped clothes,jewelry,ornament,flower,lace
trim,long hair,white hair,flowy dress,braided hair,carefree,smiling,\nmasterpiece,best
quality,8k,detailed skin texture,detailed cloth texture,beautiful detailed
face,intricate details,ultra detailed,\nrim lighting,side lighting,cinematic light,ultra
high res,8k uhd,film grain,best shadow,delicate,RAW,', '', [], <PIL.Image.Image image
mode=RGBA size=512x768 at 0x2EEB67C10>, None, None, None, None, None, None, 30, 2, None,
4, 0, 1, True, False, False, 1, 1, 7.5, 1.5, 0.7, 5, 0, 1, 0.75, -1.0, -1.0, 0, 0, 0, 0,
768, 512, 1, 0, 0, 32, 0, None, '', '', '', [], 0, ' CFG Scale
should be 2 or
lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p
style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler:
Euler a, Denoising strength: 0.8
Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 0, '', [], 0, '', [], 0, '', [], False, True, False, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50, 'Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 16, 1, True, 0, False, 4, 0, 0, 2048, 2048, 2, 0, 4, 512, 512, True, 'None', 'None', 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x2f6f97ee0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x2eec372e0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x2f75fb100>, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'Lanczos', 2, True, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 512, 64, True, True, True, False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nONLYFACE:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nCLOTHES: 0,0,0,1,1,0,0.2,0.8,0.8,1,1,0.2,0,0,0,0,0\nDEFACE:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nCOLOR-STYLE: 1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nBACKGROUND: 1:1,1,1,1,1,1,1,1,0.2,0,0,0.8,1,1,1,0,0\nACT: 1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\n', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nONLYFACE:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nCLOTHES: 0,0,0,1,1,0,0.2,0.8,0.8,1,1,0.2,0,0,0,0,0\nDEFACE:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nCOLOR-STYLE: 1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nBACKGROUND: 1:1,1,1,1,1,1,1,1,0.2,0,0,0.8,1,1,1,0,0\nACT: 1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\n', False, False) kwargs={}
11:32:21-821073 ERROR gradio call: AttributeError
Traceback (most recent call last):
  /Volumes/ZY/AIdraw/automatic/modules/call_queue.py:34 in f
      33 │ try:
    ❱ 34 │     res = func(*args, **kwargs)
      35 │     progress.record_results(id_task, res)
  /Volumes/ZY/AIdraw/automatic/modules/img2img.py:222 in img2img
     221 │ if processed is None:
   ❱ 222 │     processed = processing.process_images(p)
     223 │ p.close()
  /Volumes/ZY/AIdraw/automatic/modules/processing.py:692 in process_images
     691 │ with context_hypertile_vae(p), context_hypertile_unet(p):
   ❱ 692 │     res = process_images_inner(p)
     693 │ finally:
  /Volumes/ZY/AIdraw/automatic/extensions-builtin/sd-webui-controlnet/scripts/batch_hijack.py:42 in processing_process_images_hijack
      41 │ # we are not in batch mode, fallback to original function
    ❱ 42 │ return getattr(processing, '__controlnet_original_process_images_inner')(p,
      43 │
  /Volumes/ZY/AIdraw/automatic/modules/processing.py:827 in process_images_inner
     826 │ with devices.without_autocast() if devices.unet_needs_upcast else device
   ❱ 827 │     samples_ddim = p.sample(conditioning=c, unconditional_conditioning=u
     828 │ x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to(
  /Volumes/ZY/AIdraw/automatic/modules/processing.py:1299 in sample
    1298 │ x *= self.initial_noise_multiplier
  ❱ 1299 │ samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, u
    1300 │ if self.mask is not None:
  /Volumes/ZY/AIdraw/automatic/extensions/multidiffusion-upscaler-for-automatic1111/tile_utils/utils.py:249 in wrapper
     248 │ def wrapper(*args, **kwargs):
   ❱ 249 │     return fn(*args, **kwargs)
     250 │ return wrapper
  /Volumes/ZY/AIdraw/automatic/extensions/multidiffusion-upscaler-for-automatic1111/tile_utils/utils.py:249 in wrapper
     248 │ def wrapper(*args, **kwargs):
   ❱ 249 │     return fn(*args, **kwargs)
     250 │ return wrapper
  /Volumes/ZY/AIdraw/automatic/extensions/multidiffusion-upscaler-for-automatic1111/tile_methods/abstractdiffusion.py:615 in sample_img2img
     614 │ # convert to grayscale with PIL
   ❱ 615 │ image = image.convert('L')
     616 │ np_mask = get_retouch_mask(np.asarray(image), self.noise_inverse_renoise_ker
AttributeError: 'numpy.ndarray' object has no attribute 'convert'
```

Not even remotely the same issue. Please create a new issue instead of adding comments to issues closed months ago.
@edifierx666
> AttributeError: 'numpy.ndarray' object has no attribute 'convert'
I am not a Python programmer, but after some code exploration I managed to solve the issue: open the file "extensions/multidiffusion-upscaler-for-automatic1111/tile_methods/abstractdiffusion.py", go to line 613, and replace `image = p.init_images[0]` with `image = p.init_images_original_md[0]`.
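To see why that swap works, here is a runnable toy version; the `P` class is a stand-in for the img2img processing object, and `init_images_original_md` is the attribute the extension itself stores, per the comment above:

```python
import numpy as np
from PIL import Image

class P:
    pass  # stand-in for the img2img processing object

pil = Image.new('RGB', (8, 8))
p = P()
p.init_images = [np.asarray(pil)]   # after the regression: arrays, not PIL images
p.init_images_original_md = [pil]   # the extension's saved copy of the originals

# abstractdiffusion.py around line 613, original line:
#   image = p.init_images[0]        # ndarray, so image.convert('L') raises
# workaround from this comment:
image = p.init_images_original_md[0]
print(image.convert('L').mode)      # 'L' -- the grayscale conversion succeeds again
```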
Has this issue been reported upstream? It seems the right place to solve it is in the extension itself.
Issue Description
Multidiffusion's Noise Inversion has not been working for the last few days. Any attempt to use it results in an error. Simply enabling tiled diffusion and noise inversion in the img2img settings and running it on any image causes this error. I have disabled all other extensions and adjusted the Noise Inversion settings, but everything results in the same error; even the default settings do. Here is a picture of the settings so you can see what I'm referring to.
I have downloaded the latest version of the regular webui from https://github.com/AUTOMATIC1111/stable-diffusion-webui and can confirm that it does NOT have this issue, so it seems to be something specific to automatic rather than an issue with the extension itself. When I revert my /modules folder to the one from around commit d962383 it works, so I think the issue began shortly after that.
Here is my console when the error occurs.
Version Platform Description
Python 3.10.0 on Windows
Version: 7b35f78d Thu Jul 27 20:07:27 2023 -0400
nVidia CUDA toolkit detected
Torch 2.0.0+cu118
Torch backend: nVidia CUDA 11.8 cuDNN 8700
Torch detected GPU: NVIDIA GeForce RTX 3080 VRAM 10240 Arch (8, 6) Cores 68
Enabled extensions-builtin: ['LDSR', 'Lora', 'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'SwinIR']
Enabled extensions: []
Google Chrome Version 115.0.5790.110 (Official Build) (64-bit)
URL link of the extension
https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/tree/f9f8073e64f4e682838f255215039ba7884553bf
URL link of the issue reported in the extension repository
No response
Acknowledgements