QLYoo / LFPNet

GNU General Public License v3.0

Paper - code correspondence #9

Closed VolodymyrAhafonov closed 3 years ago

VolodymyrAhafonov commented 3 years ago

Hello! I am trying to compare the neural network described in the paper with the actual network in this repo. I've noticed these lines:

```python
b, _, _, _ = x.shape
setensor = torch.reshape(self.setensor, [b, 1024, 1, 1])
setensor = torch.nn.functional.interpolate(
    setensor,
    (feat[-1].shape[2], feat[-1].shape[3]),
    mode='bilinear',
    align_corners=False,
)
featz = self.se1(feat[-1], setensor)
```

They sit right after the context encoder and before the CSPP module. Judging from the code, this looks like a squeeze-and-excitation block, but I don't see any mention of such a block in the paper. Could you please clarify this difference?

Thank you in advance

Windaway commented 3 years ago

Yeah, it is an SEBlock, designed for channel attention. However, I checked the parameters and found that the extra parameters are of no use. XD
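
For readers following along, here is a minimal sketch of what a channel-attention SE-style block with the two-input call signature from the snippet (`self.se1(feat[-1], setensor)`) might look like. The class name `SEBlock`, the reduction ratio, and the internal layer choices are assumptions for illustration, not the repo's actual implementation:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Hypothetical squeeze-and-excitation style block: a conditioning
    tensor is squeezed to per-channel statistics and used to rescale
    the feature map channel-wise (channel attention)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat, setensor):
        # Squeeze: global-average-pool the conditioning tensor to (B, C, 1, 1).
        stats = setensor.mean(dim=(2, 3), keepdim=True)
        # Excite: produce per-channel weights in (0, 1) and rescale the features.
        weights = self.fc(stats)
        return feat * weights

# Usage mirroring the snippet in the issue (shapes chosen arbitrarily).
se1 = SEBlock(1024)
feat = torch.randn(2, 1024, 8, 8)
setensor = torch.randn(2, 1024, 8, 8)
out = se1(feat, setensor)
# The output has the same shape as the input feature map.
```

Since the weights pass through a sigmoid, the block can only attenuate channels, which is consistent with it adding parameters without necessarily changing the result much when the learned weights saturate near 1.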

VolodymyrAhafonov commented 3 years ago

Thank you for the quick reply! I have another small question. Do you have supplementary materials for your paper with a detailed layer-wise description of the network structure?

Windaway commented 3 years ago

Sorry, no.

VolodymyrAhafonov commented 3 years ago

Understood. Thank you for such quick replies!