yuanzhi-zhu / DiffPIR

"Denoising Diffusion Models for Plug-and-Play Image Restoration", Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, Luc Van Gool.
https://yuanzhi-zhu.github.io/DiffPIR/
MIT License

Hi, I tried to run inference on my own photo when it showed this. #15

Closed: webe998 closed this issue 11 months ago

webe998 commented 11 months ago

```
E:\DiffPIR>python main_ddpir_sisr.py
LogHandlers setup!
23-07-30 10:47:38.869 : model_name:diffusion_ffhq_10m, sr_mode:blur, image sigma:0.050, model sigma:0.050
23-07-30 10:47:38.871 : eta:0.000, zeta:0.100, lambda:1.000, guidance_scale:1.00
23-07-30 10:47:38.871 : start step:999, skip_type:quad, skip interval:10, skipstep analytic steps:0
23-07-30 10:47:38.871 : analytic iter num:1, gamma:0.01
23-07-30 10:47:38.871 : Model path: model_zoo\diffusion_ffhq_10m.pt
23-07-30 10:47:38.871 : C:\Users\weber\Desktop\my_lq
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
C:\Users\weber\AppData\Local\Programs\Python\Python310\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
C:\Users\weber\AppData\Local\Programs\Python\Python310\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: C:\Users\weber\AppData\Local\Programs\Python\Python310\lib\site-packages\lpips\weights\v0.1\vgg.pth
23-07-30 10:47:39.953 : --------- sf:4 --k: 0 ---------
23-07-30 10:47:39.955 : eta:0.000, zeta:0.250, lambda:2.000, inIter:1.000, gamma:0.010, guidance_scale:1.00
Traceback (most recent call last):
  File "E:\DiffPIR\main_ddpir_sisr.py", line 502, in <module>
    main()
  File "E:\DiffPIR\main_ddpir_sisr.py", line 485, in main
    test_results_ave = test_rho(lambda_, zeta=zeta_i, model_output_type=model_output_type)
  File "E:\DiffPIR\main_ddpir_sisr.py", line 298, in test_rho
    x0 = utils_model.model_fn(x, noise_level=curr_sigma*255, model_out_type=model_out_type, \
  File "E:\DiffPIR\utils\utils_model.py", line 221, in model_fn
    out = diffusion.p_sample(
  File "E:\DiffPIR\guided_diffusion\gaussian_diffusion.py", line 422, in p_sample
    out = self.p_mean_variance(
  File "E:\DiffPIR\guided_diffusion\respace.py", line 91, in p_mean_variance
    return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
  File "E:\DiffPIR\guided_diffusion\gaussian_diffusion.py", line 260, in p_mean_variance
    model_output = model(x, self._scale_timesteps(t), **model_kwargs)
  File "E:\DiffPIR\guided_diffusion\respace.py", line 128, in __call__
    return self.model(x, new_ts, **kwargs)
  File "C:\Users\weber\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\DiffPIR\guided_diffusion\unet.py", line 660, in forward
    h = th.cat([h, hs.pop()], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 24 but got size 25 for tensor number 1 in the list.
```

yuanzhi-zhu commented 11 months ago

@webe998 I think this error comes from an incorrect input image size.

Due to the limitations of the pre-trained models we use, the images should ideally have a size of 256 x 256.
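
For reference, the mismatch surfaces in the UNet skip connections (`th.cat([h, hs.pop()], dim=1)` in the traceback) when the input's height or width is not cleanly divisible by the network's total downsampling factor, so the safest fix is to bring the image to 256 x 256 first. Below is a minimal preprocessing sketch, not part of DiffPIR, assuming Pillow is installed; the file name `photo.png` is just a placeholder, only the folder path comes from the log above:

```python
# Hypothetical helper (not part of DiffPIR): resize the shorter side to 256,
# then center-crop to 256x256 so the image matches the FFHQ checkpoint.
from PIL import Image

def prepare_image(in_path, out_path, size=256):
    img = Image.open(in_path).convert("RGB")
    w, h = img.size
    scale = size / min(w, h)                      # scale shorter side to `size`
    img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2  # centered crop window
    img.crop((left, top, left + size, top + size)).save(out_path)

# "photo.png" / "photo_256.png" are placeholder file names.
prepare_image(r"C:\Users\weber\Desktop\my_lq\photo.png",
              r"C:\Users\weber\Desktop\my_lq\photo_256.png")
```

A center crop keeps the face roughly centered, which matches how the FFHQ training data is framed; simple stretching to 256 x 256 would also avoid the size error but can distort the face.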

Best,