Thanks for the great work!
The scale factor in this framework is fixed to 4.
Is there any way to adapt it into other factors, e.g., 2, 8, and 10?
I have tried a scale factor of 2. However, the forward pass then fails at "x11_res = torch.cat((x11_res, T_lv3), dim=1)" in the MainNet model with what appears to be a size-mismatch error:
Sizes of tensors must match except in dimension 2. Got 80 and 160 (The offending index is 0)
So I guess the model is written to handle and fuse features only at a scale factor of 4, so that the LR feature map (80×80) matches the size of T_lv3 (80×80), which is the HR feature map downscaled twice by a factor of 2.
I was wondering whether it would be reasonable to replace T_lv3 with a shallower feature map such as T_lv2 to handle the scale-factor-2 case.
Any pointers would be appreciated.
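To illustrate what I mean, here is a minimal sketch of the spatial arithmetic at that fusion step. The channel count (64) and the 80×80 LR size are assumptions for illustration; only the spatial sizes matter. T_lv3 is modeled as the reference features downscaled ×4 and T_lv2 as downscaled ×2, following the level naming in the code:

```python
import torch

scale = 2
lr = 80                                        # LR feature map: lr x lr
hr = lr * scale                                # reference / HR image size

x11_res = torch.zeros(1, 64, lr, lr)           # LR-branch features
T_lv3 = torch.zeros(1, 64, hr // 4, hr // 4)   # ref features downscaled x4
T_lv2 = torch.zeros(1, 64, hr // 2, hr // 2)   # ref features downscaled x2

# At scale 4, hr // 4 == lr and the concat works; at scale 2 the spatial
# sizes differ and torch.cat raises a RuntimeError:
try:
    torch.cat((x11_res, T_lv3), dim=1)
except RuntimeError as e:
    print("scale-2 concat with T_lv3 fails:", e)

# Using the texture level whose resolution matches the LR branch
# (T_lv2 when scale == 2) makes the spatial sizes line up:
fused = torch.cat((x11_res, T_lv2), dim=1)
print(fused.shape)  # torch.Size([1, 128, 80, 80])
```

This is only a shape-level sketch, not a claim that swapping in T_lv2 is semantically correct for the texture transfer.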
Our CSFI module is designed for x4 super-resolution, and the TT works at three scales (x1, x2, x4). If you want to adapt the model to other scales, you can modify the TT and CSFI modules to fit your needs.