RuntimeError: The size of tensor a (64) must match the size of tensor b (128) at non-singleton dimension 3
I don't know why, but no matter what settings I use, a similar message appears; only the numbers change. I'm using an SDXL model. Only the engine exported with the default settings works; engines exported with any other settings fail. I'm sure my image size is within the valid range and is a multiple of 64.
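For context, here is a minimal sketch of the shape check that seems to be failing, based on the numbers in the log below. It assumes SDXL's VAE downscales by a factor of 8 (so a 1024x1024 image becomes a 128x128 latent), and that the engine was exported with a static profile of (2, 4, 128, 128), i.e. min = opt = max. The profile bounds and the 512x512 request are taken from the log; the function names are mine, purely for illustration.

```python
# Sketch of why a 512x512 request fails against a 1024x1024-only engine profile.
# Assumption: SDXL's VAE downscale factor is 8 (1024 px -> 128 latent cells).

VAE_SCALE = 8

def latent_dims(width_px: int, height_px: int) -> tuple[int, int]:
    """Spatial dimensions of the UNet's latent input for a given image size."""
    return height_px // VAE_SCALE, width_px // VAE_SCALE

# From the log: min = opt = max = (2, 4, 128, 128), i.e. a static profile.
PROFILE_MIN = (128, 128)
PROFILE_MAX = (128, 128)

def fits_profile(width_px: int, height_px: int) -> bool:
    """Mimic TensorRT's binding check: each dim must lie within [min, max]."""
    h, w = latent_dims(width_px, height_px)
    return (PROFILE_MIN[0] <= h <= PROFILE_MAX[0]
            and PROFILE_MIN[1] <= w <= PROFILE_MAX[1])

print(latent_dims(512, 512))     # (64, 64) -- the "[2,4,64,64]" in the log
print(fits_profile(1024, 1024))  # True  -- matches the exported engine
print(fits_profile(512, 512))    # False -- 64 < 128, hence the TensorRT error
```

If this reading is right, a 512x512 generation can never satisfy an engine exported only for 1024x1024; the engine would need to be re-exported with a dynamic profile whose minimum covers 64x64 latents (512x512 images).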
To create a public link, set `share=True` in `launch()`.
Startup time: 11.6s (prepare environment: 3.7s, import torch: 1.9s, import gradio: 0.6s, setup paths: 0.4s, initialize shared: 0.2s, other imports: 0.3s, load scripts: 1.1s, create ui: 0.7s, gradio launch: 2.3s, app_started_callback: 0.4s).
Loading VAE weights specified in settings: D:\sd-webui-aki-v4.6.1\models\VAE\xlVAEC_c1.safetensors
Applying attention optimization: sdp-no-mem... done.
Model loaded in 4.9s (load weights from disk: 1.0s, create model: 0.4s, apply weights to model: 2.5s, calculate empty prompt: 0.7s).
*** Error running process: D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py
Traceback (most recent call last):
  File "D:\sd-webui-aki-v4.6.1\modules\scripts.py", line 718, in process
    script.process(p, *script_args)
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 255, in process
    self.idx, self.hr_idx = self.get_profile_idx(p, p.sd_model_name, ModelType.UNET)
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 152, in get_profile_idx
    ) = modelmanager.get_valid_models(
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\model_manager.py", line 225, in get_valid_models
    for i, model in enumerate(models[base_model]):
KeyError: '08Ponydiffusionv6xlV602_v10'
---
Activating unet: [TRT] Pony Diffusion V6_08Ponydiffusionv6xlV602_v10
Loading TensorRT engine: D:\sd-webui-aki-v4.6.1\models\Unet-trt\Pony Diffusion V6_08Ponydiffusionv6xlV602_v10_97e92c9a_cc89_sample=2x4x128x128+2x4x128x128+2x4x128x128-timesteps=2+2+2-encoder_hidden_states=2x77x2048+2x77x2048+2x77x2048-y=2x2816+2x2816+2x2816.trt
Loaded Profile: 0
    sample = [(2, 4, 128, 128), (2, 4, 128, 128), (2, 4, 128, 128)]
    timesteps = [(2,), (2,), (2,)]
    encoder_hidden_states = [(2, 77, 2048), (2, 77, 2048), (2, 77, 2048)]
    y = [(2, 2816), (2, 2816), (2, 2816)]
    latent = [(2, 4, 128, 128), (2, 4, 128, 128), (2, 4, 128, 128)]
*** Error running process_batch: D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py
Traceback (most recent call last):
  File "D:\sd-webui-aki-v4.6.1\modules\scripts.py", line 742, in process_batch
    script.process_batch(p, *script_args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 304, in process_batch
    sd_unet.current_unet.switch_engine()
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 87, in switch_engine
    self.loaded_config = self.configs[self.profile_idx]
TypeError: list indices must be integers or slices, not NoneType
---
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[E] 3: [executionContext.cpp::nvinfer1::rt::ExecutionContext::validateInputBindings::2046] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::nvinfer1::rt::ExecutionContext::validateInputBindings::2046, condition: profileMinDims.d[i] <= dimensions.d[i]. Supplied binding dimension [2,4,64,64] for bindings[0] exceed min ~ max range at index 2, maximum dimension in profile is 128, minimum dimension in profile is 128, but supplied dimension is 64.
)
*** Error completing request
*** Arguments: ('task(o4amkyv44vlejjo)', 'score_9,score_8_up,score_7_up,score_6_up,score_5_up,score_4_up,source_anime,Ultra-HD-details,Ultra-HD-quality-details,1girl,solo BREAK ', 'full body,3d,photorealistic,photoreal,realism,realistic,octane render,', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 1, 2, 'Latent', 20, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000018329250280>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), 
UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\sd-webui-aki-v4.6.1\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\sd-webui-aki-v4.6.1\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "D:\sd-webui-aki-v4.6.1\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "D:\sd-webui-aki-v4.6.1\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 41, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\modules\processing.py", line 868, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\sd-webui-aki-v4.6.1\modules\processing.py", line 1142, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\sd-webui-aki-v4.6.1\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\sd-webui-aki-v4.6.1\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "D:\sd-webui-aki-v4.6.1\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\sd-webui-aki-v4.6.1\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\sd-webui-aki-v4.6.1\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "D:\sd-webui-aki-v4.6.1\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\sd-webui-aki-v4.6.1\repositories\k-diffusion\k_diffusion\external.py", line 113, in forward
    return input + eps * c_out
RuntimeError: The size of tensor a (64) must match the size of tensor b (128) at non-singleton dimension 3
---
*** Error running process: D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py
Traceback (most recent call last):
  File "D:\sd-webui-aki-v4.6.1\modules\scripts.py", line 718, in process
    script.process(p, *script_args)
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 255, in process
    self.idx, self.hr_idx = self.get_profile_idx(p, p.sd_model_name, ModelType.UNET)
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 152, in get_profile_idx
    ) = modelmanager.get_valid_models(
  File "D:\sd-webui-aki-v4.6.1\extensions\Stable-Diffusion-WebUI-TensorRT\model_manager.py", line 225, in get_valid_models
    for i, model in enumerate(models[base_model]):
KeyError: '08Ponydiffusionv6xlV602_v10'