raeidsaqur / CapsGAN

Capsule GAN: Unsupervised representation learning with CapsNet based Generative Adversarial Networks
MIT License

CapsNet #6

Open ussaema opened 6 years ago

ussaema commented 6 years ago

Are you using CapsNet with dynamic routing or EM routing? The title of your paper does not match its content, which confused me. Also, did you modify anything in the structure of the baseline CapsNet, or in the generator with respect to WGAN?

Ryanglambert commented 6 years ago

If you look in the model file you'll see one convolutional layer, a primary capsule layer, and a digit caps layer.

This pattern is also what the dynamic routing paper used.

EM paper had many more repeating layers.
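For reference, that conv → primary caps → digit caps pattern can be sketched with the MNIST-style shapes used in the dynamic-routing paper. This is a generic numpy illustration of the tensor shapes, not code from this repo:

```python
import numpy as np

batch = 2
# Output of the second (stride-2, 9x9) conv layer: 256 channels on a 6x6 grid.
conv_out = np.random.randn(batch, 256, 6, 6)
# Reinterpret as primary capsules: 32 capsule types x 6x6 positions = 1152
# capsules, each an 8-D vector.
u = conv_out.reshape(batch, 32 * 6 * 6, 8)
# One 8x16 transformation matrix per (input capsule i, output capsule j) pair.
W = np.random.randn(1152, 10, 8, 16) * 0.01
# Prediction vectors u_hat_{j|i} for the 10 digit capsules (16-D each).
u_hat = np.einsum('bip,ijpq->bijq', u, W)
print(u_hat.shape)  # (2, 1152, 10, 16)
```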


ussaema commented 6 years ago

I already looked at the code in the models directory; it is CapsNet using dynamic routing with margin loss. The EM paper indeed has more repeating layers, but its routing (clustering of the pose predictions) is not based on projections (cosine similarities). It is, in fact, based on a soft version of k-means clustering, namely EM clustering. So I don't think it is just about the number of capsule layers used, but about the efficiency of the routing and the loss function.
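To make the contrast concrete, here is a minimal numpy sketch of dynamic routing, where the coupling logits are updated by dot-product agreement between predictions and outputs (the "projections" mentioned above); EM routing would instead fit a Gaussian per output capsule. Names and shapes are illustrative, not taken from this repo:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash non-linearity from the dynamic-routing paper: keeps direction,
    # shrinks the norm into [0, 1).
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    # u_hat: (num_in, num_out, dim) prediction vectors u_hat_{j|i}.
    b = np.zeros(u_hat.shape[:2])          # routing logits b_ij, init to 0
    for _ in range(iters):
        # Coupling coefficients: softmax over the output capsules j.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum per output capsule
        v = squash(s)                            # (num_out, dim)
        # Agreement update: dot product between each prediction and the output.
        b = b + (u_hat * v[None]).sum(axis=-1)
    return v

v = dynamic_routing(np.random.randn(50, 10, 16))
print(v.shape)  # (10, 16); every row has norm < 1 thanks to squash
```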

Ryanglambert commented 6 years ago

Totally

I was only remarking on the layers because it's easy to see quickly. Every implementation I've seen has used the same number of layers.


Ryanglambert commented 6 years ago

I have yet to dig into EM routing! Honestly, I still haven't had the time to sit down and understand capsule nets more generally.


ussaema commented 6 years ago

I understand :) You are using the same loss (margin loss) for both the generator and the discriminator, right?

Ryanglambert commented 6 years ago

In my project I was not using a GAN, just the capsule network by itself. And yes, I was using margin loss.
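For reference, the margin loss from the dynamic-routing paper (m+ = 0.9, m- = 0.1, lambda = 0.5) can be sketched in numpy; this is a generic illustration, not this repo's code:

```python
import numpy as np

def margin_loss(lengths, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
    # lengths: (batch, classes) capsule output norms; labels: one-hot (batch, classes).
    # Present classes are pushed above m_pos, absent ones below m_neg;
    # lam down-weights the absent-class term.
    pos = labels * np.maximum(0.0, m_pos - lengths) ** 2
    neg = lam * (1.0 - labels) * np.maximum(0.0, lengths - m_neg) ** 2
    return (pos + neg).sum(axis=1).mean()

# A perfectly confident prediction incurs zero loss:
print(margin_loss(np.array([[0.9, 0.1]]), np.array([[1.0, 0.0]])))  # 0.0
```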