AUTOMATIC1111 / stable-diffusion-webui-tensorrt


Input shape must be divisible by 64 in both dimensions #77

Open · NoteToSelfFindGoodNickname opened this issue 9 months ago

NoteToSelfFindGoodNickname commented 9 months ago

Perhaps a problem that only occurs with TensorRT?

FABRIC v0.6.3

  File "C:\Users\UserName\sd\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 84, in forward
    raise ValueError(
ValueError: Input shape must be divisible by 64 in both dimensions.
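For context, the TensorRT U-Net rejects any generation whose pixel width or height is not a multiple of 64 (the engine works on latents at 1/8 resolution, with a further alignment constraint). A minimal sketch of the constraint, with the numbers from this report (the function name is hypothetical, this is not the extension's actual source):

```python
# Hypothetical sketch of the check trt.py performs, not the actual source.
def check_dims(width: int, height: int) -> None:
    if width % 64 != 0 or height % 64 != 0:
        raise ValueError("Input shape must be divisible by 64 in both dimensions.")

check_dims(832, 1280)  # fine: 832 = 13 * 64 and 1280 = 20 * 64
check_dims(848, 1280)  # raises: 848 = 13 * 64 + 16
```

The init image in the log below is 848x1280, which is exactly the kind of size that trips this check.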

Problem 1:


[FABRIC] Restoring original U-Net forward pass
[FABRIC] Patching U-Net forward pass... (2 likes, 0 dislikes)
0%| | 0/16 [00:01<?, ?it/s]
Error completing request
Arguments: ('task(08bzqsptixjzpz3)', 0, 'myprompt', 'myneg', [], <PIL.Image.Image image mode=RGBA size=848x1280 at 0x2E2CC5F7DC0>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 13, 1, 7, 1.5, 0.75, 0, 1260, 800, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002E2D34B51E0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E2D34B73A0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E2D34B6A10>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E2C7A9B1F0>, ['dc81d2a3d61f042a.png', '45a49135b4ec16c6.png'], [], True, 0, 0.8, 0, 0.8, 0.5, False, False, 0.5, 8192, -1.0, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ' CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, False, 3.0) {}
Traceback (most recent call last):
  File "C:\Users\User\sd\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 29, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 182, in new_forward
    out = self._fabric_old_forward(zs, ts, ctx)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_unet.py", line 89, in UNetModel_forward
    return current_unet.forward(x, timesteps, context, *args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 84, in forward
    raise ValueError(
ValueError: Input shape must be divisible by 64 in both dimensions.


Problem 2:

When I then change Resize to "832 x 1280", TensorRT tells me: "ValueError: No valid profile found. Please go to the TensorRT tab and generate an engine with the necessary profile. If using hires.fix, you need an engine for both the base and upscaled resolutions. Otherwise, use the default (torch) U-Net."

When I do that (generate an engine from the TensorRT tab), it keeps throwing this error:

[I] Loading bytes from C:\Users\User\sd\stable-diffusion-webui\models\Unet-trt\aresMix_v01_0465e9a8_cc89_sample=1x4x64x64+2x4x64x64+8x4x96x96-timesteps=1+2+8-encoder_hidden_states=1x77x768+2x77x768+8x154x768.trt
Profile 0:
  sample = [(1, 4, 64, 64), (2, 4, 64, 64), (8, 4, 96, 96)]
  timesteps = [(1,), (2,), (8,)]
  encoder_hidden_states = [(1, 77, 768), (2, 77, 768), (8, 154, 768)]
  latent = [(-1945965568), (-1945960960), (-1945960704)]
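This log arguably explains the failure: Profile 0 accepts `sample` (latent) shapes between the min (1, 4, 64, 64) and the max (8, 4, 96, 96), and the last two dimensions are pixel size divided by 8, so this engine only covers roughly 512x512 up to 768x768. A latent for an 832x1280 image is (batch, 4, 160, 104), which falls outside every profile in this engine, hence the "No valid profile found" error below. (The negative `latent` values in the log look like uninitialized binding sizes.) A sketch of the min/max containment test that profile matching boils down to (paraphrased logic, not the extension's actual code):

```python
# Paraphrased profile-matching logic, not the extension's actual code.
def fits(shape, smin, smax):
    return all(lo <= dim <= hi for dim, lo, hi in zip(shape, smin, smax))

profile_min = (1, 4, 64, 64)             # batch, channels, latent H, latent W
profile_max = (8, 4, 96, 96)             # 96 * 8 = 768 px per side at most
requested = (2, 4, 1280 // 8, 832 // 8)  # 832x1280 image -> (2, 4, 160, 104)
print(fits(requested, profile_min, profile_max))  # False -> no valid profile
```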

0%| | 0/16 [00:02<?, ?it/s]
Error completing request
Arguments: ('task(q81bgy6g2okuwaj)', 0, 'myprompt', 'myneg', [], <PIL.Image.Image image mode=RGBA size=848x1280 at 0x2E2CA26B0D0>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 13, 1, 7, 1.5, 0.75, 0, 1280, 832, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002E2CC5F6650>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E2C986F040>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E2C98BBE80>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E2C6657B20>, ['dc81d2a3d61f042a.png', '45a49135b4ec16c6.png'], [], True, 0, 0.8, 0, 0.8, 0.5, False, False, 0.5, 8192, -1.0, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ' CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, False, 3.0) {}
Traceback (most recent call last):
  File "C:\Users\User\sd\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 29, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\User\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 182, in new_forward
    out = self._fabric_old_forward(zs, ts, ctx)
  File "C:\Users\User\sd\stable-diffusion-webui\modules\sd_unet.py", line 89, in UNetModel_forward
    return current_unet.forward(x, timesteps, context, *args, **kwargs)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 87, in forward
    self.switch_engine(feed_dict)
  File "C:\Users\User\sd\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 108, in switch_engine
    raise ValueError(
ValueError: No valid profile found. Please go to the TensorRT tab and generate an engine with the necessary profile. If using hires.fix, you need an engine for both the base and upscaled resolutions. Otherwise, use the default (torch) U-Net.

left1000 commented 8 months ago

I get the same, or a similar, error.

*** API error: POST: http://localhost:7860/sdapi/v1/txt2img {'error': 'ValueError', 'detail': '', 'body': '', 'errors': 'Input shape must be divisible by 64 in both dimensions.'}
| 25/35 [00:14<00:00, 14.68it/s]
Traceback (most recent call last):
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\anyio\streams\memory.py", line 98, in receive
    return self.receive_nowait()
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\anyio\streams\memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\anyio\streams\memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\api\api.py", line 187, in exception_handling
    return await call_next(request)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\api\api.py", line 151, in log_and_time
    res: Response = await call_next(req)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\api\api.py", line 381, in text2imgapi
    processed = process_images(p)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\processing.py", line 869, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\processing.py", line 1161, in sample
    return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\processing.py", line 1247, in sample_hr_pass
    samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "V:\AI images stuff\automatic1111 prebuilt\system\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\modules\sd_unet.py", line 89, in UNetModel_forward
    return current_unet.forward(x, timesteps, context, *args, **kwargs)
  File "V:\AI images stuff\automatic1111 prebuilt\webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 84, in forward
    raise ValueError(
ValueError: Input shape must be divisible by 64 in both dimensions.

The thing is, I cannot track down what causes it, because I haven't changed anything in what I'm doing. And I know for a fact that deleting my TRT profiles and re-exporting them will fix the error.

In my opinion, if this bug can be fixed by regenerating the profiles, the bug must live in the model.json file.

I know the error says the shape must be divisible by 64, but of course I know that; I am already choosing resolutions that are divisible by 64.
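One way this can happen even when every resolution you pick is 64-aligned: the traceback above fails inside `sample_hr_pass`, i.e. the hires.fix second pass, and a fractional upscale factor can produce an unaligned second-pass size (illustrative numbers, not taken from this report):

```python
# Illustration: a 64-aligned base size times a fractional hires factor
# can land on a size that is no longer divisible by 64.
base_w, base_h = 832, 1280            # both divisible by 64
scale = 1.5
hr_w, hr_h = int(base_w * scale), int(base_h * scale)
print(hr_w, hr_w % 64)                # 1248 32 -> fails the check
print(hr_h, hr_h % 64)                # 1920 0
```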

edit: I've tested recovering from this bug many times. Deleting all the files in my Unet-trt folder, but not the files in the Unet-onnx folder, and then regenerating the profiles fixes this bug, but what causes it is still vague to me, which is why I've posted so much and so confusedly.

My reasoning, though, is that all of these errors are recorded inside the model.json file. But since the extension's WebUI tab offers no way to delete a single profile, there is no way to use Force Rebuild to fix the model.json file by regenerating only the profile that is to blame. If Force Rebuild could be used to fix this bug, I could narrow down its cause better.
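If the stale state really does live in model.json, a cautious way to inspect it, and after backing it up, prune a suspect entry by hand, might look like this. The path and the file's structure are assumptions here, so keep the backup:

```python
import json
import shutil

# Path is an assumption; adjust to your install. Back up before editing.
path = r"C:\Users\User\sd\stable-diffusion-webui\models\Unet-trt\model.json"
shutil.copy(path, path + ".bak")

with open(path, encoding="utf-8") as f:
    data = json.load(f)

# Inspect which engines/profiles the extension has recorded.
print(json.dumps(data, indent=2))

# To prune a suspect entry (key name depends on the file's actual layout):
# data.pop("<stale entry>", None)
# with open(path, "w", encoding="utf-8") as f:
#     json.dump(data, f, indent=2)
```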