gram-ai / capsule-networks

A PyTorch implementation of the NIPS 2017 paper "Dynamic Routing Between Capsules".
https://arxiv.org/abs/1710.09829

Why dim=2 here? I guess the softmax is taken over all capsules in the layer above, so shouldn't that be dim=0? #21

Open Cranial-XIX opened 6 years ago

Cranial-XIX commented 6 years ago

https://github.com/gram-ai/capsule-networks/blob/1a4edd27a0ed73232cb266c85091d712854f3e71/capsule_network.py#L69

InnovArul commented 6 years ago

I had the same question.

The dimension of logits is [10, 100, 1152, 1, 16], i.e., [num_digit_capsules, batch_size, num_primary_capsules, 1, digit_feature_dim].

Since the softmax is taken over dim=2, it operates on the number of primary capsules (1152).

My explanation is as follows:

Every digit capsule (10 of them) selects which of the 1152 primary capsules to accept for its decision. But the paper and other sources (YouTube videos, blogs, etc.) say that each of the 1152 primary capsules decides which of the 10 digit capsules it will send its output to. So there seems to be a misunderstanding somewhere.

In simple words, with dim=2 each of the 1152 lower-level capsules is competing to send its output to the 10 digit capsules. Hence, the softmax here is over dim=2.
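
A minimal sketch of which axis dim=2 normalizes, assuming the logits shape quoted above (not the repo's actual code):

```python
import torch
import torch.nn.functional as F

# Shape from the discussion: [num_digit_caps, batch, num_primary_caps, 1, out_dim]
logits = torch.randn(10, 100, 1152, 1, 16)

# dim=2, as in this implementation: for each digit capsule, the coefficients
# over the 1152 primary capsules sum to 1.
probs = F.softmax(logits, dim=2)
print(probs.sum(dim=2).allclose(torch.ones(10, 100, 1, 16)))  # True
```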

CoderHHX commented 5 years ago

I have the same question. I read some other implementations of CapsNet (TensorFlow and PyTorch), and I think the softmax over the logits of shape [10, 100, 1152, 1, 16] should be applied along dim 0, i.e., probs = softmax(logits, dim=0), as the original paper presents.
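
A sketch of that proposed change, under the same assumed shape as above:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(10, 100, 1152, 1, 16)

# dim=0, as in the paper (Eq. 3): for each primary capsule, the coefficients
# across the 10 digit capsules sum to 1.
probs = F.softmax(logits, dim=0)
print(probs.sum(dim=0).allclose(torch.ones(100, 1152, 1, 16)))  # True
```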

InnovArul commented 5 years ago

I think the explanations for softmax along dim=0 versus dim=2 are as follows:

  1. Softmax along dim=0: each primary capsule (1152 of them) decides how much information it passes to each of the 10 digit capsules. (As in the paper.)

  2. Softmax along dim=2: each digit capsule (10 of them) chooses how much information it takes from each of the 1152 primary capsules. (As in this implementation.)

Since 1 & 2 gives the same performance (more or less), I am not sure how to reason it. @CoderHHX Do you have any intuition?

CoderHHX commented 5 years ago

@InnovArul Thanks for your reply! In my opinion, if we want to follow the original paper, we should set dim to 0. And, as you say, with dim=2 the model achieves performance similar to the original. I think this may be because routing the weights based on the PrimaryCaps or based on the DigitCaps has an equivalent effect: both ways achieve the capsule transformation.