Closed: sayakpaul closed this issue 4 years ago
I think we can now separate out the modules (the multi-crop augmentation pipeline and the architecture) into .py files and import them in Colab directly to make it a bit less lengthy. This is just a suggestion, and I know the notebook under consideration is just for a dry run.
Did not get this part: "## crossentropy loss between code and p, assuming that code is to be predicted from the assigned cluster. if wrong then logits will be label and vice versa". Are you meaning that the `logits` and `labels` in `criterion` will be swapped? If so, how?
Currently computing the subloss like this: `subloss -= criterion(labels=q, logits=p_unscaled)`. The `criterion` here will be `tf.nn.softmax_cross_entropy_with_logits`, in my opinion. The choice of `labels` and `logits` here comes from the assumption that we are learning to predict the code from the assigned cluster. If this assumption is flawed, then `labels` will be `p_unscaled` and `logits` will be `q`.
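For concreteness, here is a minimal sketch of the call under discussion. The shapes and random inputs are purely illustrative (not from the actual notebook); `q` stands for the soft codes and `p_unscaled` for the unscaled prototype scores:

```python
import tensorflow as tf

# Illustrative shapes: batch of 4 samples, 8 prototypes.
q = tf.nn.softmax(tf.random.normal((4, 8)), axis=1)   # soft codes (cluster assignments)
p_unscaled = tf.random.normal((4, 8))                 # unscaled prototype scores (logits)

criterion = tf.nn.softmax_cross_entropy_with_logits
# Cross-entropy between the code q (as labels) and p_unscaled (as logits),
# i.e. learning to predict the code from the assigned cluster:
subloss = tf.reduce_mean(criterion(labels=q, logits=p_unscaled))
```

Note that `tf.nn.softmax_cross_entropy_with_logits` accepts soft labels like `q` directly, which is what makes it a natural fit here.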
I think it might be even better to just replicate the following, as the authors have done here. What do you think?
Yes, replicating this would be better. It's much more readable.
Two normalizations. First, they normalize the embeddings they get from the RN50 backbone, then pass them through a linear layer (the prototype layer). During training they again normalize this prototype layer.
It seems so. The first normalization is done in `forward_head` (here), and they take the prototype weights and normalize them before the start of each epoch (here).
But in the first normalization they normalize the embedding (a 128-d vector), and in the second they normalize the weights of the "prototype" layer.
(Wrote this so that we are on the same page; if I am mistaken, I am seeking correction.)
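The two normalizations described above could be sketched in TF like this. Layer sizes and variable names are illustrative assumptions, not taken from the repo:

```python
import numpy as np
import tensorflow as tf

embeddings = tf.random.normal((4, 128))                 # output of the RN50 backbone + head
z = tf.math.l2_normalize(embeddings, axis=1)            # normalization 1: unit-norm embeddings

prototypes = tf.keras.layers.Dense(10, use_bias=False)  # the "prototype" linear layer
scores = prototypes(z)                                  # builds the layer; shape (4, 10)

# Normalization 2: unit-norm the prototype weight columns
# (done e.g. before the start of each epoch).
w = prototypes.get_weights()[0]                         # shape (128, 10)
w = w / np.linalg.norm(w, axis=0, keepdims=True)
prototypes.set_weights([w])
```

With both in place, `scores` becomes a cosine similarity between the embedding and each prototype, which is what the subsequent softmax operates on.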
Right on both the fronts!
Great.
We need to trace what is trainable and what is not. In the main SwAV code, all the variables under the `no_grad` tag are basically non-trainable.
I might need some help here.
@ayulockin we can accomplish this using the following options, I assume:

- `GradientTape` has got a `watch` function that allows us to do this.
- Computing things outside the `GradientTape` context so that no gradient gets calculated in the first place.
- `tf.stop_gradient` as necessary.

But we will figure out more as we proceed.
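A minimal, self-contained sketch of the `tf.stop_gradient` route (the variables here are toy values, not from the repo):

```python
import tensorflow as tf

x = tf.Variable(2.0)
y = tf.Variable(3.0)

with tf.GradientTape() as tape:
    frozen = tf.stop_gradient(y * y)  # treated as a constant; no gradient flows to y
    loss = x * x + frozen * x

dx, dy = tape.gradient(loss, [x, y])
# dx == 2*x + y*y == 13.0, while dy is None because the tape never
# saw a differentiable path through y.
```

This mirrors what `no_grad` achieves in the PyTorch code: the wrapped computation still happens in the forward pass, but it contributes no gradients in the backward pass.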