Atcold opened this issue 6 years ago

Why is there an extra softmax layer at https://github.com/gram-ai/capsule-networks/blob/master/capsule_network.py#L106? Each capsule's norm already models a probability.
Though each capsule's norm is a probability in [0, 1], the capsules still compete among themselves to send their information to the higher-level capsules (based on the correlation of their outputs with the outputs of the higher-level capsules). Hence, there is a softmax layer.
That's not how Capsules work...
Maybe if you could write down your understanding of capsules, or point out the relevant lines in the paper, it would be helpful for discussing and learning. Anyway, I will let the code owner clarify your doubts.
In my understanding, the greater the correlation between a primary capsule's output and a digit capsule's output, the stronger the coupling between them. Hence, it's a kind of attention mechanism between primary capsules and digit capsules, which necessitates a softmax (over those correlations).
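For reference, here is a minimal sketch of the routing-by-agreement step (Procedure 1 in the paper), which is where that softmax over coupling coefficients appears; the tensor layout, variable names, and `num_iterations` are only illustrative assumptions, not the code from this repository:

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Eq. 1: shrinks short vectors toward 0 and long vectors toward unit norm,
    # so a capsule's length can be read as a probability.
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / (norm_sq.sqrt() + eps)

def route(u_hat, num_iterations=3):
    # u_hat: prediction vectors from lower-level capsules for each higher-level capsule,
    # assumed shape [batch, num_lower, num_higher, dim_higher] (illustrative layout).
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)    # routing logits b_ij
    for _ in range(num_iterations):
        c = F.softmax(b, dim=2)                               # coupling coefficients: softmax over higher-level capsules
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)              # weighted sum over lower-level capsules
        v = squash(s)                                         # higher-level capsule outputs v_j
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # agreement (dot product) updates the logits
    return v
```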
From the paper, section 4, last paragraph, you have that
Our implementation [...] minimize the sum of the margin losses in Eq. 4.
(Install this extension to view LaTeX on GitHub.)
$L_k = T_k \max(0, m^+ - \lVert v_k \rVert)^2 + \lambda\, (1 - T_k) \max(0, \lVert v_k \rVert - m^-)^2$
So, as you can see, you're supposed to use $\lVert v_k \rVert$, which is `classes = (x ** 2).sum(dim=-1) ** 0.5`.
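For concreteness, here is a minimal sketch of how Eq. 4 could be computed from those norms; the shapes and the one-hot `labels` tensor are assumptions for illustration, with $m^+ = 0.9$, $m^- = 0.1$ and $\lambda = 0.5$ as in the paper:

```python
import torch

def margin_loss(x, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
    # x: digit-capsule outputs, assumed shape [batch, num_classes, capsule_dim]
    # labels: one-hot targets T_k, assumed shape [batch, num_classes]
    v_norm = (x ** 2).sum(dim=-1) ** 0.5                                       # ||v_k||, used directly, no softmax
    present = labels * torch.clamp(m_pos - v_norm, min=0.0) ** 2               # T_k * max(0, m+ - ||v_k||)^2
    absent = lam * (1.0 - labels) * torch.clamp(v_norm - m_neg, min=0.0) ** 2  # down-weighted absent-class term
    return (present + absent).sum(dim=-1).mean()                               # sum over classes, mean over the batch
```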
Oh, I see. My bad, I didn't realise which softmax you were referring to :)
I think you are right. There is no need for a softmax (since the vector's magnitude already emulates a probability). Thanks for elaborating.
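In other words, something like this sketch, where `x` is assumed to be the squashed digit-capsule output:

```python
# x: squashed digit-capsule outputs, assumed shape [batch, num_classes, capsule_dim]
norms = (x ** 2).sum(dim=-1) ** 0.5   # each ||v_k|| already lies in [0, 1] after squashing
pred = norms.argmax(dim=-1)           # longest capsule wins; no extra softmax re-normalisation needed
```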
By the way, I have noticed some more deviations in the implementation with respect to the paper. Please check them if you find the time. I'm not sure whether my interpretation is correct.