Open f-rank opened 1 year ago
It also threw another new warning before the console log above:
```
WARNING:py.warnings:D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions\sd_dreambooth_extension\reallysafe.py:36: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return TypedStorage()
```
Disabling "Apply color correction to img2img results to match original colors." fixes it. https://github.com/Coyote-A/ultimate-upscale-for-automatic1111/issues/84#issuecomment-1566441661
It's cool you solved your problem. In case it helps someone else: I have received this type of error with an upscaler that uses the alpha channel and requires the 3 color channels plus transparency. In your case it's possible you gave it an image that includes an alpha channel, and it threw an error because of that. If you need an upscaler that uses all 4 channels to also upscale transparency (I never tested it), you can find one at https://upscale.wiki/wiki/Model_Database. It's a big list of upscalers with links to download them; search for "firealpha" to find one that uses transparency.
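The traceback further down shows the upscaler's first convolution only accepts 3-channel (RGB) input, so feeding it a 4-channel (RGBA) image fails. A minimal sketch of a pre-processing workaround, assuming Pillow; the `to_rgb` helper is my own illustration, not part of the WebUI or the extension:

```python
from PIL import Image

def to_rgb(img: Image.Image) -> Image.Image:
    """Flatten an image with an alpha channel onto a white background
    so that RGB-only upscalers will accept it.
    Illustrative workaround, not part of the extension."""
    if img.mode in ("RGBA", "LA", "P"):
        rgba = img.convert("RGBA")
        background = Image.new("RGB", rgba.size, (255, 255, 255))
        # Use the alpha band as the paste mask so transparency blends into white
        background.paste(rgba, mask=rgba.split()[-1])
        return background
    return img.convert("RGB")

# The failing image from the log below was mode=RGBA, size=896x576:
src = Image.new("RGBA", (896, 576), (0, 100, 0, 128))
fixed = to_rgb(src)
print(fixed.mode)  # RGB
```

Running the input image through something like this before sending it to img2img sidesteps the channel mismatch entirely, at the cost of losing the transparency information.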
Today I started getting this: "RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead". It worked OK until yesterday morning, though.
Edit: This happens after rendering two tiles, so somehow it changes in the middle of the process. On the third it errors out.
version: v1.2.1 • python: 3.10.6 • torch: 2.0.0+cu118 • xformers: 0.0.17 • gradio: 3.29.0 • checkpoint: [3a05be6653]
ultimate upscale version: 756bb505 (Fri May 5 00:22:21 2023)
console log:

```
Tile size: 512x512
Tiles amount: 12
Grid: 3x4
Redraw enabled: True
Seams fix mode: NONE
Loading model from cache: controlnet11Models_tileE [e47b23a8]
Loading preprocessor: tile_resample
preprocessor resolution = 64
100%|███████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 4.16it/s]
Loading model from cache: controlnet11Models_tileE [e47b23a8]
Loading preprocessor: tile_resample
preprocessor resolution = 64
100%|███████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 4.15it/s]
Error completing request
Arguments: ('task(wy71uvzf7egsw1v)', 0, '(Photo:1.3) of a large wave in the middle of the ocean, aerial viewyoji shinkawa, ( ( mads berg ) ), beautiful water, photo from above, dangerous & powerful creature, water splashes cascades, whirlwind, nature, dark green water posing for a photo at Dubai street, fuck you, Highly Detailed', '3d render, 3D, backrooms, NG_DeepNegative_V1_75T, cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)), ((extra limbs)), ((b&w)), Amateur, Low rated, Phone, Wedding, Frame, Painting, tumblr, watermark, signature', [], <PIL.Image.Image image mode=RGBA size=896x576 at 0x2226E692DD0>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.2, -1.0, -1.0, 0, 0, 0, False, 0, 576, 896, 1, 0, 0, 32, 0, '', '', '', [], 24, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, <controlnet.py.UiControlNetUnit object at 0x000002226DF77130>, <controlnet.py.UiControlNetUnit object at 0x0000022274EE5D50>, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], '', False, None, '', 'outputs', '
\n \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 256, 0, 1, 0, 0.25, 4, 0.5, 'Linear', 'None', 4, 0.09, True, 1, 0, 7, False, False, 512, 0.75, 0, 0.1, 0, 0.06, 1, False, 'CFG Scale
should be 2 or lower.Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 'Illustration', 'svg', True, True, False, 0.5, True, 16, True, 16, '', '', 24, '24', 'hh:mm:ss', 'hh:mm:ss', False, False, '', 1, 10, 1, False, 1, 0, 0, False, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 0, True, 384, 384, False, 4, True, True, False, False, 'Loops: The number of times the script will inference your image and increase the resolution in increments. The amount the resolution is increased each loop is determined by this number and the maximum image width/height. The more loops, the more chances of your image picking up more detail, but also artifacts. 4 to 10 is what I find to work best, but you may like more or less.
', 4, 1, 1024, 1024, 'None', 'None', False, 'None', 1, 1, 1, 1, 1, None, None, False, None, None, False, 50, 0, 0, 512, 512, False, True, False, False, 0, 1, False, 1, True, True, False, False, ['left-right', 'red-cyan-anaglyph'], 2.5, 'polylines_sharp', 0, False, False, False, False, False, False, 'u2net', False, True, False, 'Denoise change: This setting will increase or decrease the denoising strength every loop. A higher value will increase the denoising strength, while a lower value will decrease it. A setting of 1 keeps the denoising strength as it is set on the img2img settings.
Adaptive change: This setting changes the amount of resolution increase per loop, keeping the changes from being linear. The higher the value the more significant the resolution changes toward the end of the looping.
Maximum Image Width/Height: These parameters set the maximum width and height of the final image. Always start with an image smaller than these dimensions. The smaller you start, the more impressive the results. I usually start at either 340x512 or 512x768
Detail, Blur, Smooth, Contour: These parameters are checkboxes that apply a PIL Image Filter to the final image.
Sharpness, Brightness, Color, Contrast: These parameters are sliders that adjust the sharpness, brightness, color, and contrast of the image. 1 will result in no adjustments, less than one reduces these settings for the final image and greater than 1 increases these settings.
Img2Img Settings: I recommend creating an image with txt2img and then sending the result to img2img with the prompt and settings. For this script I use these settings..
Resize mode - Crop and resize
Sampling method - DDIM
Sampling steps - 30
Width/Height - 340x512 or 512x768. I’d try to keep to the aspect ratio of the original image but these can be set lower than the resolution of the original image
CFG Scale - 6 to 8
Denoising strength - 0.2 to 0.4 is usual. The lower you go, the less change between loops. The higher you go the less the end result will look like the original image.
Seed - This doesn’t matter too much, I usually keep it at -1
Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 32, 10, True, 0, False, 8, 0, 2, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\img2img.py", line 180, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\scripts.py", line 408, in run
    processed = script.run(p, *script_args)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 553, in run
    upscaler.process()
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 136, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 243, in start
    return self.linear_process(p, image, rows, cols)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 178, in linear_process
    processed = processing.process_images(p)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\processing.py", line 727, in process_images_inner
    image = apply_overlay(image, p.paste_to, i, p.overlay_images)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\processing.py", line 70, in apply_overlay
    image = images.resize_image(1, image, w, h)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\images.py", line 287, in resize_image
    resized = resize(im, src_w, src_h)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\images.py", line 270, in resize
    im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\upscaler.py", line 63, in upscale
    img = self.do_upscale(img, selected_model)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\esrgan_model.py", line 154, in do_upscale
    img = esrgan_upscale(model, img)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\esrgan_model.py", line 228, in esrgan_upscale
    output = upscale_without_tiling(model, tile)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\esrgan_model.py", line 207, in upscale_without_tiling
    output = model(img)
  File "d:\WORK\conda_envs\automatic\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\modules\esrgan_model_arch.py", line 62, in forward
    return self.model(feat)
  File "d:\WORK\conda_envs\automatic\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\WORK\conda_envs\automatic\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "d:\WORK\conda_envs\automatic\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WORK\conda_envs\automatic\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 376, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "d:\WORK\conda_envs\automatic\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "d:\WORK\conda_envs\automatic\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead
```
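Decoding the final error: `weight of size [64, 3, 3, 3]` is a Conv2d with 64 output channels, 3 input channels, and a 3x3 kernel, while `input[1, 4, 192, 192]` is one 192x192 tile with 4 channels (RGBA). A minimal sketch of the mismatch, assuming plain PyTorch; the bare Conv2d here is a stand-in for the upscale model's first layer, not the actual network:

```python
import torch

# A Conv2d expecting 3-channel input; its weight has shape [64, 3, 3, 3],
# matching the weight size quoted in the RuntimeError.
conv = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
assert list(conv.weight.shape) == [64, 3, 3, 3]

# One 192x192 tile with 4 channels (RGBA), as in the failing input[1, 4, 192, 192]
rgba_tile = torch.randn(1, 4, 192, 192)
try:
    conv(rgba_tile)
except RuntimeError as e:
    print(e)  # "... expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead"

# Dropping the alpha channel lets the same tile go through:
rgb_tile = rgba_tile[:, :3]  # keep only R, G, B
print(conv(rgb_tile).shape)  # torch.Size([1, 64, 192, 192])
```

This is why the input image being RGBA (visible in the Arguments dump: `mode=RGBA size=896x576`) is enough to break an RGB-only upscaler mid-process.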