kendrick90 opened this issue 2 years ago
I just encountered this in another ML project; it seems many networks need the input dimensions to be a multiple of 16, so that might have something to do with what's happening here.
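In case it's useful, the usual fix in those projects is to snap the crop dimensions to the nearest multiple of 16 before running the network. A minimal sketch of the idea (the helper name `round_to_multiple` is made up for illustration, not ProsePainter code):

```python
def round_to_multiple(size: int, multiple: int = 16) -> int:
    """Round a dimension down to the nearest multiple, e.g. 430 -> 416."""
    return max(multiple, (size // multiple) * multiple)

# A 260x430 crop snaps to 256x416, which is what the server log below
# reports as the SCALED CROP SIZE.
print(round_to_multiple(260), round_to_multiple(430))  # 256 416
```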
Yeah, the upscaling functionality has been kind of experimental so far. Thanks for spotting the issue, working on it!
Got another error when trying to upscale:
```
2021-11-17 11:07:27.168 | INFO  | __main__:listen_loop:317 - RECEIVED TOPIC upscale-generation
2021-11-17 11:07:27.214 | DEBUG | server.server_modelling:load_model:45 - LOADING esrgan...
2021-11-17 11:07:35.802 | DEBUG | server.server_modelling:load_model:75 - ESRGAN downloaded in server/.cache/RRDB_ESRGAN_x4.pth
2021-11-17 11:07:36.168 | DEBUG | server.server_modelling_utils:scale_crop_tensor:107 - IMAGE CROP SIZE: torch.Size([260, 430])
2021-11-17 11:07:36.171 | DEBUG | server.server_modelling_utils:scale_crop_tensor:123 - SCALED CROP SIZE: (256, 416)
2021-11-17 11:07:36.174 | DEBUG | server.server_modelling_utils:scale_crop_tensor:107 - IMAGE CROP SIZE: torch.Size([260, 430])
2021-11-17 11:07:36.176 | DEBUG | server.server_modelling_utils:scale_crop_tensor:123 - SCALED CROP SIZE: (256, 416)
Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Users\Kendrick\anaconda3\envs\prosepaint\lib\threading.py", line 926, in _bootstrap_inner
    self.run()
  File "C:\Users\Kendrick\anaconda3\envs\prosepaint\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "server/server_deploy.py", line 290, in upscale_canvas
    num_chunks,
  File "C:\Users\Kendrick\Documents\GitHub\ProsePainter\server\server_modelling.py", line 446, in upscale_img
    upscaled_w,
    ] = upscaled_crop
RuntimeError: The expanded size of the tensor (1664) must match the existing size (832) at non-singleton dimension 3. Target sizes: [1, 3, 1024, 1664]. Tensor sizes: [3, 512, 832]
```
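Reading the sizes in that error: the destination buffer was allocated for a 4x upscale of the 256x416 crop (1024x1664), but the crop that comes back is 512x832, i.e. only 2x, so the paste fails. If it helps, here's a rough sketch of a guard that derives the paste size from the actual model output instead of assuming the factor; the tensor names are made up for illustration, and this is not the real `upscale_img` code:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes taken from the log above: the canvas buffer expects
# a 4x result, but the model returned a 2x one.
upscaled_canvas = torch.zeros(1, 3, 1024, 1664)
upscaled_crop = torch.rand(3, 512, 832)  # what ESRGAN actually returned

target_h, target_w = upscaled_canvas.shape[-2:]
crop_h, crop_w = upscaled_crop.shape[-2:]

# Resize to the expected size instead of hitting the broadcast error.
if (crop_h, crop_w) != (target_h, target_w):
    upscaled_crop = F.interpolate(
        upscaled_crop.unsqueeze(0),
        size=(target_h, target_w),
        mode="bilinear",
        align_corners=False,
    ).squeeze(0)

upscaled_canvas[0, :, :target_h, :target_w] = upscaled_crop
```

Resizing is just one option; asserting on the two shapes and logging them would at least make the failure mode obvious.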