Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What happened?
User friendly error message:
Error: bad shape for TensorRT input x: (2, 4, 67, 120). Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.
2023-05-31 18:02:13,417 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-05-31 18:02:13,427 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
Steps to reproduce the problem
1. Build a TensorRT engine for v1-5-pruned-emaonly and activate it as the Unet.
2. Start a Deforum txt2img animation at a resolution whose latent dimensions (here 67x120) fall outside the engine's optimization profile (max dimension 64).
3. Frame 0 fails immediately with the "bad shape for TensorRT input x" error.
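The shape in the error, (2, 4, 67, 120), follows directly from the chosen resolution: Stable Diffusion works in a latent space downscaled 8x from pixel space, and classifier-free guidance doubles the batch. A minimal sketch of that arithmetic (the resolution 960x536 is inferred from the error, not stated anywhere in the report):

```python
def latent_shape(width, height, batch_size=1):
    """Shape of the Unet input tensor for a given image resolution.

    (batch * 2 for cond + uncond, 4 latent channels, height/8, width/8)
    """
    return (batch_size * 2, 4, height // 8, width // 8)

print(latent_shape(960, 536))  # -> (2, 4, 67, 120), the failing shape
print(latent_shape(512, 512))  # -> (2, 4, 64, 64), what the engine expects
```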
What should have happened?
The animation should have rendered. Deforum gave me this error, and the Deforum team say they don't support TensorRT; but the problem is on the TensorRT extension's side - it's not their code, it's the NVIDIA product.
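For context, the TensorRT error means the engine was built with a fixed optimization profile (min = max = 64 on the spatial dimensions, i.e. 512x512 images) and the supplied 67x120 latent falls outside it. A minimal sketch of the kind of bounds check that produces this error - not the extension's actual code, and the profile values here are assumptions taken from the log:

```python
def validate_binding(shape, profile_min, profile_max):
    """Reject an input tensor whose dimensions fall outside the
    engine's optimization profile, like TensorRT does at inference."""
    for i, (lo, hi, d) in enumerate(zip(profile_min, profile_max, shape)):
        if not lo <= d <= hi:
            raise Exception(
                f"bad shape for TensorRT input x: {tuple(shape)} "
                f"(dimension {d} at index {i} outside [{lo}, {hi}])")

try:
    # Engine built for 512x512 (64x64 latents); 67 fails at index 2.
    validate_binding((2, 4, 67, 120), (2, 4, 64, 64), (2, 4, 64, 64))
except Exception as e:
    print(e)
```

The practical workaround is to render at the resolution the engine was built for (e.g. 512x512), or to rebuild the engine with a profile that covers the target resolution.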
Commit where the problem happens
b957dcfece29c84ac0cfcd5a69475ff8684c531f (v1.3.0-72-gb957dcfe)
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
What browsers do you use to access the UI ?
Microsoft Edge
Command Line Arguments
NO
List of extensions
Deforum, TensorRT
Console logs
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.3.0-72-gb957dcfe
Commit hash: b957dcfece29c84ac0cfcd5a69475ff8684c531f
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
ControlNet v1.1.204
ControlNet v1.1.204
Loading weights [6ce0161689] from C:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Create LRU cache (max_size=16) for preprocessor results.
Create LRU cache (max_size=16) for preprocessor results.
Creating model from config: C:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
*Deforum ControlNet support: enabled*
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Create LRU cache (max_size=16) for preprocessor results.
Startup time: 34.2s (import torch: 11.3s, import gradio: 9.4s, import ldm: 2.2s, other imports: 4.8s, setup codeformer: 0.4s, load scripts: 4.1s, initialize extra networks: 0.2s, create ui: 1.4s, gradio launch: 0.4s).
Applying attention optimization: sdp-no-mem... done.
Textual inversion embeddings loaded(0):
Model loaded in 6.4s (load weights from disk: 0.6s, create model: 0.9s, apply weights to model: 1.1s, apply half(): 0.9s, move model to device: 2.6s, calculate empty prompt: 0.3s).
Deforum extension for auto1111 webui, v2.4b
Git commit: 87340181
Saving animation frames to:
C:\stable-diffusion-webui\outputs/img2img-images\Deforum_20230531180203
Loading MiDaS model from dpt_large-midas-2f21e586.pt...
Animation frame: 0/600
Seed: 912572441
Prompt: beautiful teeth, aarhus, straight teeth, Scandinavian people, 2020 Marsterpiece, happy family, scandinavian, baby in a cradle, mohark, rebel, smiling, laughing, Aarhus, outside, golden hour, ultra high detail, 8k, unreal engine
Neg Prompt: nsfw, nude, split screen, ugly teeth
Not using an init image (doing pure txt2img)
╭─────┬───┬───────┬────┬────┬────┬────┬────┬────╮
│Steps│CFG│Denoise│Tr X│Tr Y│Tr Z│Ro X│Ro Y│Ro Z│
├─────┼───┼───────┼────┼────┼────┼────┼────┼────┤
│ 25 │7.0│ 0 │0.65│ 0 │0.2 │ 0 │ 0 │ 0 │
╰─────┴───┴───────┴────┴────┴────┴────┴────┴────╯
Activating unet: [TRT] v1-5-pruned-emaonly
[05/31/2023-18:02:12] [TRT] [W] TensorRT was linked against cuDNN 8.9.0 but loaded cuDNN 8.7.0
[05/31/2023-18:02:12] [TRT] [W] TensorRT was linked against cuDNN 8.9.0 but loaded cuDNN 8.7.0
[05/31/2023-18:02:13] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
0%| | 0/25 [00:00<?, ?it/s][05/31/2023-18:02:13] [TRT] [E] 3: [executionContext.cpp::nvinfer1::rt::ExecutionContext::validateInputBindings::2082] Error Code 3: API Usage Error (Parameter check failed at: executionContext.cpp::nvinfer1::rt::ExecutionContext::validateInputBindings::2082, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [2,4,67,120] for bindings[0] exceed min ~ max range at index 2, maximum dimension in profile is 64, minimum dimension in profile is 64, but supplied dimension is 67.
)
0%| | 0/25 [00:00<?, ?it/s]
*START OF TRACEBACK*
Traceback (most recent call last):
File "C:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\run_deforum.py", line 76, in run_deforum
render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, root)
File "C:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\render.py", line 547, in render_animation
image = generate(args, keys, anim_args, loop_args, controlnet_args, root, frame_idx, sampler_name=scheduled_sampler_name)
File "C:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\generate.py", line 55, in generate
return generate_inner(args, keys, anim_args, loop_args, controlnet_args, root, frame, sampler_name)
File "C:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\generate.py", line 207, in generate_inner
processed = processing.process_images(p_txt)
File "C:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
res = process_images_inner(p)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stable-diffusion-webui\modules\processing.py", line 731, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\stable-diffusion-webui\modules\processing.py", line 979, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 433, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 275, in launch_sampling
return func()
File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 433, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 518, in sample_dpmpp_2s_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 155, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\modules\sd_unet.py", line 89, in UNetModel_forward
return current_unet.forward(x, timesteps, context, *args, **kwargs)
File "C:\stable-diffusion-webui\extensions\stable-diffusion-webui-tensorrt\scripts\trt.py", line 86, in forward
self.infer({"x": x, "timesteps": timesteps, "context": context})
File "C:\stable-diffusion-webui\extensions\stable-diffusion-webui-tensorrt\scripts\trt.py", line 69, in infer
self.allocate_buffers(feed_dict)
File "C:\stable-diffusion-webui\extensions\stable-diffusion-webui-tensorrt\scripts\trt.py", line 63, in allocate_buffers
raise Exception(f'bad shape for TensorRT input {binding}: {tuple(shape)}')
Exception: bad shape for TensorRT input x: (2, 4, 67, 120)
*END OF TRACEBACK*
User friendly error message:
Error: bad shape for TensorRT input x: (2, 4, 67, 120). Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.
2023-05-31 18:02:13,417 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-05-31 18:02:13,427 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
Additional information
No response