ayulockin / SwAV-TF

TensorFlow implementation of "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments".
https://app.wandb.ai/authors/swav-tf/reports/Unsupervised-Visual-Representation-Learning-with-SwAV--VmlldzoyMjg3Mzg
Apache License 2.0

Pointers on Train_Step_And_Loss.ipynb #1

Closed: sayakpaul closed this 4 years ago

sayakpaul commented 4 years ago

I think we can now separate the modules (multi-crop augmentation pipeline and the architecture) into .py files and import them in Colab directly to make the notebook a bit less lengthy. This is just a suggestion; I know the notebook under consideration is just for a dry run.

ayulockin commented 4 years ago

Do you mean that the logits and labels in criterion will be swapped? If so, how?

Currently computing subloss like this: subloss -= criterion(labels=q, logits=p_unscaled)

In my opinion, the criterion here should be tf.nn.softmax_cross_entropy_with_logits. The choice of labels and logits comes from the assumption that we are learning to predict the code from the assigned cluster. If this assumption is flawed, then labels would be p_unscaled and logits would be q.
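For concreteness, here is a minimal sketch of that interpretation (the names q, p_unscaled, and the temperature value are assumptions drawn from this thread, not the final code; note the sign relative to the snippet above):

```python
import tensorflow as tf

def subloss_term(q, p_unscaled, temperature=0.1):
    # q: soft cluster assignment (the code), shape (batch, n_prototypes)
    # p_unscaled: raw prototype scores (logits), same shape
    # softmax_cross_entropy_with_logits already returns
    # -sum(q * log_softmax(logits)), so this term should be *added*
    # to the loss (subloss += ...), not subtracted.
    ce = tf.nn.softmax_cross_entropy_with_logits(
        labels=q, logits=p_unscaled / temperature)
    return tf.reduce_mean(ce)
```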

I think it might be even better to just replicate the following, as the authors have done here. What do you think?

Yes, replicating this would be better. It's much more readable.
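A sketch of that replication in TF, assuming the authors' log-softmax formulation (subloss would accumulate over crops as in their training loop):

```python
import tensorflow as tf

def swav_subloss(q, p_unscaled, temperature=0.1):
    # Mirrors the authors' formulation:
    #   subloss -= mean(sum(q * log_softmax(scores / temperature)))
    log_p = tf.nn.log_softmax(p_unscaled / temperature, axis=1)
    return -tf.reduce_mean(tf.reduce_sum(q * log_p, axis=1))
```

Mathematically this equals the softmax_cross_entropy_with_logits form above averaged over the batch, so the choice between the two is purely about readability.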

ayulockin commented 4 years ago

Two normalizations. First, they normalize the embeddings they get from the RN50 backbone, then pass them through a linear layer (the prototypes). While training, they again normalize this prototype vector.

It seems so. The first normalization is done in forward_head (here), and they normalize the prototype weights before the start of each epoch (here).

But the first normalization is applied to the embedding (a 128-d vector), while the second normalizes the weights of the "prototype" layer.

(Wrote this so that we are on the same page; if I am mistaken, please correct me.)
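A sketch of the two normalizations as described above (layer and function names, and the prototype count, are assumptions, not the repo's actual identifiers):

```python
import tensorflow as tf

prototypes = tf.keras.layers.Dense(3000, use_bias=False)  # assumed prototype count

# 1. Forward pass: l2-normalize the projection embedding before
#    scoring it against the prototypes.
def forward_head(embedding):
    z = tf.math.l2_normalize(embedding, axis=1)  # the 128-d vector
    return z, prototypes(z)

# 2. Before each epoch: l2-normalize each prototype vector itself.
#    (Keras Dense kernels have shape (in_dim, units), so axis=0
#    normalizes each prototype over the embedding dimension.)
def normalize_prototypes():
    w = prototypes.get_weights()[0]
    prototypes.set_weights([tf.math.l2_normalize(w, axis=0).numpy()])
```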

sayakpaul commented 4 years ago

Right on both fronts!

ayulockin commented 4 years ago

Great.

We need to trace what is trainable and what is not. In the main SwAV code, all the variables under the no_grad tag are basically non-trainable.

I might need some help here.
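For reference, Keras exposes this split directly, which could help with the tracing (the ResNet50 here is just illustrative):

```python
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)
# Variables the optimizer will update vs. variables it will not
# (e.g. BatchNormalization moving mean/variance).
print(len(model.trainable_variables))
print([v.name for v in model.non_trainable_variables][:4])
```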

sayakpaul commented 4 years ago

@ayulockin I assume we can accomplish this using the following options (sketched below):

But we will figure out more as we proceed.
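The option list from the original comment isn't preserved here, but the likely TF candidates (an assumption on my part) are layer.trainable = False and tf.stop_gradient:

```python
import tensorflow as tf

# Option 1: exclude a layer's variables from gradient updates entirely.
prototypes = tf.keras.layers.Dense(3000, use_bias=False)
prototypes.trainable = False  # dropped from model.trainable_variables

# Option 2: block gradients through a tensor inside the tape; the
# closest analogue of torch.no_grad for a single computation, e.g.
# the code computation.
def compute_codes(scores):
    # Placeholder for the actual Sinkhorn-Knopp iteration.
    q = tf.nn.softmax(scores / 0.05, axis=1)
    return tf.stop_gradient(q)
```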