lxtGH opened this issue 5 years ago
Hi! Thanks for sharing your code. After reading your paper, I found the idea very interesting; it is somewhat similar to SN (Switchable Normalization). I have a few questions about the paper and the implementation.
- Why use GAP rather than 2×2 pooling or ASPP? (Is it a memory or computational-cost issue?)
- Why do you add two fc layers after GAP in the pixel-aware ACNet?
- Have you measured the importance of each individual learned parameter, as in SN?
Thank you for your interest in our work.
1) Actually, GAP can be replaced with other types of downsampling (including image resizing) without losing accuracy. We employ GAP here to save memory, computation time, and parameters.
2) In fact, one fc layer also works, depending on the number of channels.
3) We have visualized them and found that different layers/pixels/images have different degrees of importance.
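If it helps other readers, the GAP-then-fc branch described in the reply can be sketched as follows: global average pooling squeezes the feature map to one value per channel, and the fc layers map that vector to adaptive weights. This is only a minimal NumPy illustration of the general pattern; the function name, the hidden size, and the ReLU/sigmoid choices are my assumptions, not the authors' exact code.

```python
import numpy as np

def gap_fc_weights(x, w1, b1, w2, b2):
    """Sketch of a GAP -> fc -> fc branch producing per-channel weights.

    x: feature map of shape (C, H, W); (w1, b1) and (w2, b2) are the
    two fc layers. Returns C mixing weights in (0, 1).
    """
    # Global average pooling: collapse the spatial dims to one scalar per channel.
    pooled = x.mean(axis=(1, 2))               # shape (C,)
    # First fc layer with ReLU (an assumed nonlinearity for this sketch).
    hidden = np.maximum(w1 @ pooled + b1, 0.0)
    # Second fc layer, squashed to (0, 1) with a sigmoid.
    logits = w2 @ hidden + b2
    return 1.0 / (1.0 + np.exp(-logits))

# Toy usage with random weights.
rng = np.random.default_rng(0)
C, H, W, HIDDEN = 8, 4, 4, 4
x = rng.standard_normal((C, H, W))
w1, b1 = rng.standard_normal((HIDDEN, C)), np.zeros(HIDDEN)
w2, b2 = rng.standard_normal((C, HIDDEN)), np.zeros(C)
weights = gap_fc_weights(x, w1, b1, w2, b2)    # one weight per channel
```

Because GAP reduces the fc input to a length-C vector regardless of H and W, this branch adds very few parameters, which matches the memory/computation argument above.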
Hi! Thanks for your reply. Your implementation of pixel-aware convolution seems to contradict Equation 3 in the paper; I don't understand the relationship between Equation 3 and the code.
Thanks. When using one fc layer, Eqn. 3 is equivalent to the code. Note that the $\Sigma \mathbf{x}\mathbf{u}$ term is omitted to save parameters. Please refer to the experimental section (last few lines of the left column, Page 5).
@lxtGH Hello, I have also read this article. Is it OK to use other data with it? If so, how should that data be prepared? Please give me some help. Thank you.
I have the same question
Sorry, I don't know either; please contact the author.