caojiezhang / VSR-Transformer

PyTorch implementation of VSR-Transformer

Questions about the Module "FeedForward" #11

Open nemoHy opened 3 years ago

nemoHy commented 3 years ago

Hello! I have some questions about your code. I noticed that you use the same optical flow result to warp the feature maps five times. Is that your original idea, or is it a mistake? The flows are the same for all of the layers.

```python
for attn, ff in self.layers:
    x = attn(x)
    x = ff(x, lrs=lrs, flows=flows)
return x
```

in vsrTransformer_arch.py / class Transformer / function forward
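To make the question concrete, here is a minimal, self-contained sketch (not the repository code; `ToyFeedForward`, the layer contents, and shapes are placeholders) of the pattern being asked about: the same `flows` tensor is handed to the feed-forward block in every loop iteration, and no per-layer flow is re-estimated.

```python
import torch.nn as nn

class ToyFeedForward(nn.Module):
    """Stand-in for the FeedForward block; the warping logic is omitted."""
    def forward(self, x, lrs, flows):
        # In the real module, `flows` would be used to warp the features of x.
        return x

class ToyTransformer(nn.Module):
    def __init__(self, depth):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.ModuleList([nn.Identity(), ToyFeedForward()]) for _ in range(depth)
        )

    def forward(self, x, lrs, flows):
        for attn, ff in self.layers:
            x = attn(x)
            # `flows` is the identical object in every iteration.
            x = ff(x, lrs=lrs, flows=flows)
        return x
```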

Here is another problem. No matter whether `lq_size` equals 64 or something else, these `assert` statements will always pass.

```python
assert lq_size == 64 or 48, "Default patch size of LR images during training and validation should be {}.".format(lq_size)
assert overlap == 16 or 12, "Default overlap of patches during validation should be {}.".format(overlap)
```

in crop_validation.py / function forward_crop
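For context (a standalone illustration of the point, not repository code): in Python, `lq_size == 64 or 48` is parsed as `(lq_size == 64) or 48`, and the bare literal `48` is always truthy, so the assertion can never fail. A membership test expresses the intended check:

```python
# Minimal demonstration of the parsing issue.
lq_size = 100  # a value that should be rejected

# Parsed as (lq_size == 100) or 48; the literal 48 is truthy,
# so this assertion passes no matter what lq_size is.
assert lq_size == 64 or 48

# One possible fix: a membership test over the allowed sizes.
# With lq_size = 100 this raises AssertionError, as intended.
try:
    assert lq_size in (64, 48), \
        "Default patch size of LR images should be 64 or 48, got {}.".format(lq_size)
except AssertionError as e:
    print(e)
```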

I would appreciate it if you could reply as soon as possible. Thanks a lot.

caojiezhang commented 3 years ago

It is the original idea. If every frame is accurately aligned, we can directly use the optical flows as prior information.

nemoHy commented 3 years ago

Thank you for your reply. : )