Lavender105 / DFF

Code for Dynamic Feature Fusion for Semantic Edge Detection https://arxiv.org/abs/1902.09104
MIT License

Upsampling differences between CaseNet Paper and Code #28

Open maopal opened 2 years ago

maopal commented 2 years ago

Hi there, thanks for your research, it's a real gold mine. I just have a couple of questions:

https://github.com/Lavender105/DFF/blob/152397cec4a3dac2aa86e92a65cc27e6c8016ab9/exps/models/casenet.py

Firstly, in the CaseNet paper upsampling is used for Side 1, but in your code no upsampling is applied to Side 1.

Secondly, the paper mentions "bi-linear upsampling", which would correspond to the PyTorch function torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None) with mode set to 'bilinear'. Why is ConvTranspose2d used in your code instead?
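For context, a ConvTranspose2d whose weights are frozen to a bilinear kernel can approximate bilinear interpolation, which may be why the code uses it; the sketch below is my own illustration (FCN-style bilinear initialization, not code from this repo) comparing the two for a 4x upsample:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bilinear_kernel(channels, stride):
    """Build an FCN-style bilinear upsampling kernel for ConvTranspose2d.

    Each channel is upsampled independently (no cross-channel mixing).
    """
    k = 2 * stride - stride % 2          # kernel size for the given stride
    factor = (k + 1) // 2
    center = factor - 1 if k % 2 == 1 else factor - 0.5
    og = torch.arange(k, dtype=torch.float32)
    filt = 1 - (og - center).abs() / factor   # 1-D triangular (bilinear) filter
    kernel2d = filt[:, None] * filt[None, :]  # outer product -> 2-D kernel
    weight = torch.zeros(channels, channels, k, k)
    for c in range(channels):
        weight[c, c] = kernel2d               # diagonal: per-channel upsampling
    return weight

x = torch.randn(1, 1, 8, 8)

# Option A: functional bilinear interpolation, as described in the paper
up_interp = F.interpolate(x, scale_factor=4, mode='bilinear',
                          align_corners=False)

# Option B: transposed convolution initialized with a bilinear kernel
# (stride 4 -> kernel 8, padding 2 keeps output size = 4 * input size)
deconv = nn.ConvTranspose2d(1, 1, kernel_size=8, stride=4, padding=2,
                            bias=False)
with torch.no_grad():
    deconv.weight.copy_(bilinear_kernel(1, 4))
up_deconv = deconv(x)

print(up_interp.shape, up_deconv.shape)  # both (1, 1, 32, 32)
```

One practical difference: with ConvTranspose2d the kernel is a learnable parameter (unless frozen), so the network can refine the upsampling during training, whereas F.interpolate is fixed.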

Thanks in advance!