AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

LDSR x4 upscale errors at first DDIM step #2304

Open · Ehplodor opened this issue 1 year ago

Ehplodor commented 1 year ago

Describe the bug Error at the first DDIM step of an LDSR upscale (x4)

To Reproduce Try to upscale an image x4 using LDSR

Expected behavior The image is upscaled x4



Additional context

Loading model from C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\models\LDSR\model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 113.62 M params.
Keeping EMAs of 308.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 3, 64, 64) = 12288 dimensions.
making attention of type 'vanilla' with 512 in_channels
Down sample rate is 1 from 4 / 4 (Not downsampling)
reducing Kernel
Plotting: Switched to EMA weights
Sampling with eta = 1.0; steps: 100
Data shape for DDIM sampling is (1, 3, 70, 70), eta 1.0
Running DDIM Sampling with 100 timesteps
0%| | 0/100 [00:00<?, ?it/s]Plotting: Restored training weights
Error completing request
Arguments: (0, <PIL.Image.Image image mode=RGB size=70x70 at 0x1B052E32BF0>, None, 0, 1, 0, 4, 2, 0, 1) {}
Traceback (most recent call last):
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\ui.py", line 182, in f
    res = list(func(*args, **kwargs))
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\webui.py", line 69, in f
    res = func(*args, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\extras.py", line 85, in run_extras
    res = upscale(image, extras_upscaler_1, upscaling_resize)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\extras.py", line 79, in upscale
    c = upscaler.scaler.upscale(image, resize, upscaler.data_path)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\upscaler.py", line 62, in upscale
    img = self.do_upscale(img, selected_model)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\ldsr_model.py", line 54, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\ldsr_model_arch.py", line 113, in super_resolution
    logs = self.run(model["model"], im_og, diffusion_steps, eta)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\ldsr_model_arch.py", line 76, in run
    logs = make_convolutional_sample(example, model,
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\ldsr_model_arch.py", line 200, in make_convolutional_sample
    sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\modules\ldsr_model_arch.py", line 156, in convsample_ddim
    samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddim.py", line 96, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddim.py", line 149, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddim.py", line 172, in p_sample_ddim
    e_t = self.model.apply_model(x, t, c)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 910, in apply_model
    fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 619, in get_fold_unfold
    normalization = fold(weighting).view(1, 1, h, w)  # normalizes the overlap
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\fold.py", line 144, in forward
    return F.fold(input, self.output_size, self.kernel_size, self.dilation,
  File "C:\AI\SD\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 4696, in fold
    return torch._C._nn.col2im(
RuntimeError: Expected 2D or 3D (batch mode) tensor for input with possibly 0 batch size and non-zero dimensions for input, but got: [1, 16384, 0]

0%| | 0/100 [00:00<?, ?it/s]

...and then it freezes.
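For context on the failing shape: the tensor handed to fold() is [1, 16384, 0], and 16384 = 128 × 128, which would match a 128 px split-input patch kernel; with a 70×70 input the patch count works out to zero, so fold() receives an empty tensor. Below is a minimal sketch of that failure mode. The 128 px kernel and stride 64 are assumptions, not values from the log; only the 70×70 input and the [1, 16384, 0] shape come from the traceback above.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the reported failure, assuming LDSR's split-input
# sampling tiles the image with a 128x128 patch kernel and stride 64
# (assumed values; only the 70x70 input and the [1, 16384, 0] shape
# are taken from the log).
h = w = 70
kernel, stride = 128, 64

# Patches per axis: (70 - 128) // 64 + 1 = 0, i.e. no patches fit at all.
n_patches = ((h - kernel) // stride + 1) * ((w - kernel) // stride + 1)

# The weighting tensor handed to fold() then has a zero-sized last
# dimension: [1, 128 * 128, 0] = [1, 16384, 0].
weighting = torch.ones(1, kernel * kernel, n_patches)
print(weighting.shape)  # torch.Size([1, 16384, 0])

# Folding the empty tensor raises the same RuntimeError as the traceback:
# "Expected 2D or 3D (batch mode) tensor ... but got: [1, 16384, 0]"
F.fold(weighting, output_size=(h, w), kernel_size=kernel, stride=stride)
```

If that reading is right, inputs smaller than the patch kernel cannot be tiled at all, so padding or pre-resizing the source image above the kernel size before the LDSR pass would be the obvious workaround to try.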

TheOnlyHolyMoly commented 1 year ago

Is this issue ready to be closed after patching?