Open wangq95 opened 4 years ago
Hi @lewisluk, thank you for your work. I have a question about the implementation of the Feature Fusion Module (FFM) in model.py: you reduce the input feature to a single channel and then learn channel-wise attention on it. With only one channel, the attention is just a scalar, which seems abnormal. Can you explain this? Thanks a lot.

You got me. Actually, the authors of the paper didn't specify the number of channels in the FFM, so I set it to 1 subjectively. I'll look into this and change the number of channels to see what happens when I'm off work.
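For reference, a common reading of the FFM attention (following SE-style channel attention) keeps the output of the attention branch at the same channel count as the fused feature, so the sigmoid produces one weight per channel rather than a single scalar. Below is a minimal PyTorch sketch of that per-channel variant; the class and argument names, the `reduction` ratio, and the use of PyTorch itself are illustrative assumptions, not this repo's actual model.py.

```python
import torch
import torch.nn as nn

class FFM(nn.Module):
    """Feature Fusion Module sketch with per-channel attention.

    The attention branch maps C -> C/reduction -> C, so the sigmoid
    emits one weight per channel; setting the final conv's output to
    1 channel instead would collapse the attention to a single scalar
    shared by every channel, which is the behavior questioned above.
    """

    def __init__(self, in_channels: int, channels: int, reduction: int = 4):
        super().__init__()
        # Fuse the concatenated spatial/context features.
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # SE-style channel attention: global pool, squeeze, excite.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, spatial: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        x = self.fuse(torch.cat([spatial, context], dim=1))
        w = self.attention(x)  # shape (N, C, 1, 1): one weight per channel
        return x + x * w       # re-weight, then add the residual


# Quick shape check: with channels=256, w carries 256 distinct weights.
ffm = FFM(in_channels=512, channels=256)
out = ffm(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32])
```

With `channels=256` the sigmoid emits 256 distinct weights per image, whereas an output of 1 channel would broadcast one scalar across all 256 channels, discarding the per-channel re-weighting that motivates the module.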