Closed. LONGRYUU closed this issue 3 years ago.
So the output of the sigmoid layer is a probability distribution over the 990 attributes, is that right? And is z the feature vector before the sigmoid layer, with a size of 990?
Yes.
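For concreteness, the setup confirmed above might look roughly like this in PyTorch. This is only my own sketch of the description in this thread, not the authors' code; the batch size is a placeholder.

```python
import torch

# Minimal sketch (my own illustration): z is a 990-dimensional feature
# vector, and the sigmoid turns each entry into an attribute probability.
num_attributes = 990                  # from the discussion above

# Suppose `z` comes out of the network's last feed-forward layer.
z = torch.randn(4, num_attributes)    # batch of 4 feature vectors

attribute_probs = torch.sigmoid(z)    # element-wise probabilities in (0, 1)
print(attribute_probs.shape)          # torch.Size([4, 990])
```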
Are you planning to update your paper on arXiv? I'd like to try some ideas, but I lack the correct scores for your approach. Specifically, the results in Table 2 are not updated, so I cannot make a fair comparison between your approach and the other baselines you employed.
Yes, please wait 1 or 2 weeks.
I have a few questions after reading your paper.
1. How do you get the attribute vector z? More specifically, how do you transform the image features into the vector z? In the paper, z is obtained from a feed-forward layer; which PyTorch functions did you use to build this layer, linear layers or convolutional layers? There are several possible strategies for compressing the 3-dimensional image features into a vector (a rough sketch of the ones I have in mind follows below).
2. In Equation 8, the 1/n is placed outside the brackets; is that a typo? Does it mean β P(1) √(P(2)) or β √(P(1) P(2))?
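For question 1, the kinds of strategies I have in mind look roughly like this. Both are my own sketches, not anything from the paper; the feature-map shape 2048 x 7 x 7 is just a placeholder.

```python
import torch
import torch.nn as nn

feat = torch.randn(4, 2048, 7, 7)     # CNN feature map: (batch, C, H, W)

# (a) Global average pooling over the spatial dims, then a Linear layer.
pool_then_linear = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),          # -> (batch, 2048, 1, 1)
    nn.Flatten(),                     # -> (batch, 2048)
    nn.Linear(2048, 990),             # -> (batch, 990) = z
)

# (b) A 1x1 convolution down to 990 channels, then spatial average pooling.
conv_then_pool = nn.Sequential(
    nn.Conv2d(2048, 990, kernel_size=1),  # -> (batch, 990, 7, 7)
    nn.AdaptiveAvgPool2d(1),              # -> (batch, 990, 1, 1)
    nn.Flatten(),                         # -> (batch, 990) = z
)

print(pool_then_linear(feat).shape, conv_then_pool(feat).shape)
```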
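And for question 2, to make the ambiguity explicit, these are the two readings I can imagine; this is only my guess at the intended grouping in Equation 8.

```latex
% Two readings of the 1/n placement in Eq. 8 (my own guess at the grouping;
% with n = 2 the exponent 1/n is just a square root):
\[
  \beta \, P(1)\,\bigl(P(2)\bigr)^{1/n}
  \qquad\text{vs.}\qquad
  \beta \,\bigl(P(1)\,P(2)\bigr)^{1/n}
\]
```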