fnzhan / UNITE

[CVPR 2022 Oral] Marginal Correspondence for Conditional Image Generation, [CVPR 2021] Unbalanced Feature Transport for Exemplar-based Image Translation

512 input size error occurs #7

Open peterkim333 opened 2 years ago

peterkim333 commented 2 years ago

Hi, thank you for sharing your code. I succeeded in running the code with a custom dataset, but when I use a larger input size (512 instead of 256), I get this error:

File "UNITE\models\networks\correspondence.py", line 312, in forward y1 = torch.matmul(f_divC, ref) RuntimeError: batch1 dim 2 must match batch2 dim 1

The f_div_C size is doubled in width and height. If I change the tensor size manually, the next operation fails with another size mismatch.
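(Editorial note: the failure is the standard shape rule for batched matmul in PyTorch, where the inner dimensions must agree. A minimal sketch with made-up shapes, not the repo's code:)

```python
import torch

# Stand-ins: "a" plays the role of f_div_C after its width/height doubled,
# "b" plays the role of the flattened reference, which did not grow.
a = torch.rand(1, 8, 8)
b = torch.rand(1, 4, 3)

try:
    torch.matmul(a, b)           # fails: inner dims 8 and 4 do not match
except RuntimeError as e:
    print(e)                     # message wording varies by PyTorch version

y = torch.matmul(torch.rand(1, 4, 4), b)   # works: inner dims agree -> (1, 4, 3)
```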

I use load_size=512, crop_size=512, label_nc=2.

Please help me. Thank you.

fnzhan commented 2 years ago

Hi, what is the size of ref? You can print the size of ref to check whether it matches f_div_C.
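(Editorial note: a minimal sketch of the kind of check being suggested; the variable names follow the traceback and the comments below, and the exact placement inside correspondence.py is up to the reader:)

```python
# Hypothetical debug lines placed just before the failing matmul.
print('f_div_C:', f_div_C.shape)   # expected (B, N, N)
print('ref_:', ref_.shape)         # expected (B, N, C); the matmul needs
                                   # f_div_C.shape[2] == ref_.shape[1]
y1 = torch.matmul(f_div_C, ref_)
```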

peterkim333 commented 2 years ago

Hi, here is what I see. I wonder if you can run the code with a load size of 512.

When I modify ade20k_dataset.py for my custom dataset, I set:
parser.set_defaults(label_nc=2)
parser.set_defaults(load_size=512)
parser.set_defaults(crop_size=512)

Error: File "D:\Githubs\UNITE\models\networks\correspondence.py", line 308, in forward y1 = torch.matmul(f_divC, ref) RuntimeError: batch1 dim 2 must match batch2 dim 1

======================================================================

Before the torch.matmul(f_div_C, ref_) operation, I checked the size of each tensor:
f_div_C = torch.Size([1, 16384, 16384])
ref_ = torch.Size([1, 4096, 3])

When I apply size "256, 256" I can run this code without any problem The size before 'matmul' operation,

f_div_C = torch.Size([1, 4096, 4096]) ref_ = torch.Size([1, 4096, 3])
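(Editorial note: a back-of-the-envelope check of the reported numbers; the factor of 4 is an assumption about the feature stride, not taken from the code:)

```python
# The reported location counts are consistent with an attention grid of
# load_size / 4 on the f_div_C side, while ref_ stays flattened at 64 x 64.
for load_size in (256, 512):
    side = load_size // 4            # 64 and 128
    print(load_size, side * side)    # 4096 and 16384 spatial locations
# At 256: f_div_C is (1, 4096, 4096) and ref_ is (1, 4096, 3) -> shapes agree.
# At 512: f_div_C becomes (1, 16384, 16384) but ref_ is still (1, 4096, 3).
```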

My guess is that load_size, warp_stride, and an internal fixed size in the network are intertwined, and that all the datasets were only tested at size 256.
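(Editorial note: one hypothetical direction, purely as a sketch and not a fix confirmed by the authors: make the flattened reference follow the attention grid instead of a fixed 64 x 64 resolution. The function name and its placement are made up for illustration:)

```python
import torch
import torch.nn.functional as F

def flatten_ref_to_match(ref_img: torch.Tensor, f_div_C: torch.Tensor) -> torch.Tensor:
    """Resize a (B, 3, H, W) reference image to the attention grid of f_div_C
    (B, N, N) and flatten it to (B, N, 3) so torch.matmul(f_div_C, ref_) works."""
    n = f_div_C.shape[-1]
    side = int(n ** 0.5)                          # e.g. 128 when N = 16384
    ref_small = F.interpolate(ref_img, size=(side, side),
                              mode='bilinear', align_corners=False)
    return ref_small.flatten(2).permute(0, 2, 1)  # (B, side*side, 3)
```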