YapengTian / TDAN-VSR-CVPR-2020

TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution, CVPR 2020
MIT License

How to visualize the sampling positions? #37

Open pumpkinnan97 opened 4 years ago

pumpkinnan97 commented 4 years ago

Hello, your work is great! I am interested in the implementation details of the visualization of the sampling positions in Fig. 5 and Fig. 6 of your paper. Could you please tell me more about how it is implemented? I can't find the code in this repo. Thanks!

YapengTian commented 4 years ago

Thanks for your interest in our work. Previously, I printed the predicted offsets and visualized the receptive fields accordingly. Reading model.py L202-206, we can see that the sampling positions are determined by the 3x3 kernels and the two predicted offsets, which are used to sample the supporting frames.
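
To make this concrete, here is a minimal sketch of how the printed offsets can be turned into sampling positions for one output pixel. It assumes the common DCN offset layout of shape (B, 2\*K\*K\*groups, H, W) with (dy, dx) pairs per kernel tap; the helper name and arguments are just for illustration, not the exact code we used:

```python
import torch

def sampling_positions(offset, y, x, ksize=3, group=0):
    """Deformable-conv sampling positions for the output pixel (y, x).

    offset: predicted offset tensor, shape (1, 2*ksize*ksize*groups, H, W),
            assumed DCN layout: per deformable group, (dy, dx) for each of
            the ksize*ksize kernel taps in row-major order.
    Returns a list of (sample_y, sample_x) float positions on the input
    feature map.
    """
    k2 = ksize * ksize
    # slice out the offsets belonging to the chosen deformable group
    off = offset[0, group * 2 * k2:(group + 1) * 2 * k2, y, x]  # (2*k2,)
    positions = []
    idx = 0
    for ky in range(-(ksize // 2), ksize // 2 + 1):   # regular 3x3 grid
        for kx in range(-(ksize // 2), ksize // 2 + 1):
            dy = off[2 * idx].item()                  # learned offset (y)
            dx = off[2 * idx + 1].item()              # learned offset (x)
            positions.append((y + ky + dy, x + kx + dx))
            idx += 1
    return positions
```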

One pixel in the predicted LR frame corresponds to 9 pixels in aligned_fea. Then, each pixel in aligned_fea corresponds to 9 pixels in fea (the actual number should be 9^8 since we use dconv with groups=8; in practice, many of the pixels are the same, and we just use the farthest pixels along each axis). Furthermore, we can find the associated positions in supp. In this way, we can visualize all 9^3 positions. But note that the actual receptive field, when propagated back to the image level, should be even larger, since we have a feature extraction module before the alignment.
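
A rough sketch of chaining this through the stacked deformable layers is below. The names offset1/offset2/offset3 and supp_frame are placeholders for the per-layer offset tensors and the supporting frame; it reuses the sampling_positions helper above, picks a single deformable group, and simply rounds intermediate positions to the nearest grid point:

```python
import matplotlib.pyplot as plt

def trace_positions(offsets, y, x, ksize=3):
    """Trace sampling positions through a cascade of deformable conv layers.

    offsets: list of offset tensors, one per layer, ordered from the last
             alignment layer back to the first.
    Returns up to 9**len(offsets) (y, x) positions on the earliest feature map.
    """
    points = [(float(y), float(x))]
    for off in offsets:
        H, W = off.shape[-2:]
        nxt = []
        for (py, px) in points:
            # round and clamp to a valid grid location to read the offsets there
            iy = min(max(int(round(py)), 0), H - 1)
            ix = min(max(int(round(px)), 0), W - 1)
            nxt.extend(sampling_positions(off, iy, ix, ksize))
        points = nxt
    return points

# Example: scatter the traced positions over the supporting frame
# pts = trace_positions([offset3, offset2, offset1], y=40, x=64)
# ys, xs = zip(*pts)
# plt.imshow(supp_frame); plt.scatter(xs, ys, s=4, c='red'); plt.show()
```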

xiaomingxige commented 4 years ago

I want to run on windows!

pumpkinnan97 commented 4 years ago

Thanks for your reply! Now I know that you print the dconv offsets on the image, and I understand how to implement it in theory, but I still can't figure out how to implement it in code. If it's convenient, could you share the code for this part? I'm really interested in your work, and your paper looks great, so I'd like to learn more details. Thanks again for the code in this repo and for your help!

sunnyHelen commented 3 years ago

Where did you see the visualization of the sampling positions? I would also like to see it. Where can I get the supplementary material? @YapengTian @pumpkinnan97

YapengTian commented 3 years ago

Please check the CVPR version: https://openaccess.thecvf.com/content_CVPR_2020/papers/Tian_TDAN_Temporally-Deformable_Alignment_Network_for_Video_Super-Resolution_CVPR_2020_paper.pdf