lxtGH / CAE

This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning".

Request for clarification on implementation #10

Open lorenzbaraldi opened 1 year ago

lorenzbaraldi commented 1 year ago

Hi, after reading your paper and studying the code, I don't understand why `VisionTransformerForMaskedImageModeling` has two copies of the encoder (the `encoder` and the `teacher` model). Why is it not possible to use just the encoder, since they both seem to have the same parameters?

SelfSup-MIM commented 1 year ago

Hi, it is OK to use just the encoder. The so-called teacher model is actually identical to the encoder (same structure and parameters).

lorenzbaraldi commented 1 year ago

Even for pre-training? I'm trying to pre-train the model from scratch and I don't understand the purpose of the teacher model. Why is it not possible to compute the latent targets with the encoder directly, i.e. to replace the second call below with the first?
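For clarity, the two variants from the question, side by side (the calls are copied verbatim from the model's forward pass):

```python
# Proposed: compute the latent targets with the encoder itself.
with torch.no_grad():
    latent_target = self.encoder(x, bool_masked_pos=(~bool_masked_pos))

# Current code: compute the latent targets with the separate teacher model.
with torch.no_grad():
    latent_target = self.teacher(x, bool_masked_pos=(~bool_masked_pos))
```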

SelfSup-MIM commented 1 year ago

Yes, it is OK to do so. The codebase defines a separate teacher-model class so that other researchers can easily switch to an EMA (exponential moving average) teacher, although EMA is not used in CAE.
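For context, here is a minimal sketch of the kind of EMA update such a teacher class makes possible; the helper below is illustrative only and not part of this repository:

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, encoder: torch.nn.Module, momentum: float = 0.999):
    """Exponential moving average of the encoder weights into the teacher.

    With momentum = 0 this degenerates to a plain copy, which makes the
    teacher equivalent to just reusing the encoder (the CAE setting).
    """
    for t_param, e_param in zip(teacher.parameters(), encoder.parameters()):
        t_param.mul_(momentum).add_(e_param, alpha=1.0 - momentum)
```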

lorenzbaraldi commented 1 year ago

Ok thank you for the explanation

treasan commented 10 months ago

Hi, sorry to bring up a one-year-old issue, but I am unsure about an implementation detail.

https://github.com/lxtGH/CAE/blob/d72597143e486d9bbbaf1e3adc4fd9cfa618633a/models/modeling_cae.py#L114C58-L114C58

Here you update the teacher after computing the latent targets. Doesn't this mean that the target computation uses the teacher parameters from the previous step, so the teacher and the encoder parameters are not in sync? Is that intentional?
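A simplified sketch of the ordering in question, assuming the forward pass roughly follows the linked code (function and attribute names here are illustrative, not copied from the file):

```python
import torch

def forward_sketch(self, x, bool_masked_pos):
    # Encoder forward on the visible patches (gradients flow here).
    latent = self.encoder(x, bool_masked_pos=bool_masked_pos)

    # Latent targets use the *current* teacher weights, i.e. the weights
    # that were copied over at the end of the previous step.
    with torch.no_grad():
        latent_target = self.teacher(x, bool_masked_pos=(~bool_masked_pos))

    # The teacher is synced only afterwards, so the targets lag one step
    # behind the encoder; syncing before the no_grad block would remove the lag.
    with torch.no_grad():
        for t_param, e_param in zip(self.teacher.parameters(),
                                    self.encoder.parameters()):
            t_param.copy_(e_param)

    return latent, latent_target
```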