IcarusWizard / MAE

PyTorch implementation of Masked Autoencoder
MIT License

Custom masking #20

Open harsmac opened 12 months ago

harsmac commented 12 months ago

Hi, thanks for the code. You answered that we can modify the PatchShuffle class to create custom masks. However, the PatchShuffle class takes the output of a Conv2d layer, making it hard to know precisely which part of the image we are masking. Is there a reason for this?

Originally posted by @wenhaowang1995 in https://github.com/IcarusWizard/MAE/issues/14#issuecomment-1548504418

IcarusWizard commented 12 months ago

Hi,

The PatchShuffle class is doing two things in sequence:

  1. Create the mask; the CNN output here is only used to specify the dimensions.
  2. Use the mask to mask out the input tokens.

You can of course implement these two steps separately with two classes or functions. I implemented it this way only for convenience. It also differs from the official implementation, since the official code had not yet been released when I wrote mine.
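For illustration, a minimal sketch of what splitting those two steps might look like (the function names and the `(num_patches, batch, dim)` token layout are assumptions for the example, not taken from this repo):

```python
import torch

def make_random_mask(num_patches: int, mask_ratio: float) -> torch.Tensor:
    """Step 1: build a boolean keep-mask over the patch grid (True = keep)."""
    num_keep = int(num_patches * (1 - mask_ratio))
    noise = torch.rand(num_patches)          # one random score per patch
    keep_idx = noise.argsort()[:num_keep]    # keep the lowest-scoring patches
    mask = torch.zeros(num_patches, dtype=torch.bool)
    mask[keep_idx] = True
    return mask

def apply_mask(tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Step 2: drop the masked-out tokens. tokens: (num_patches, batch, dim)."""
    return tokens[mask]
```

To get a custom mask instead of a random one, you only need to replace `make_random_mask` with anything that returns a boolean vector over the patch grid.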

It is also straightforward to work out which patch comes from which region of the image. Say your input is a 224x224 image and the patch size is 14; then the conv produces a 16x16 grid of patches, and each patch on this grid corresponds to a non-overlapping 14x14 region of the original image.
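For concreteness, a tiny helper (hypothetical, not from this repo) that maps a patch index on that 16x16 grid back to its pixel region:

```python
def patch_to_region(patch_idx: int, grid_size: int = 16, patch_size: int = 14):
    """Return the (top, left, bottom, right) pixel box of a patch."""
    row, col = divmod(patch_idx, grid_size)
    top, left = row * patch_size, col * patch_size
    return top, left, top + patch_size, left + patch_size

print(patch_to_region(0))    # (0, 0, 14, 14): top-left corner of the image
print(patch_to_region(17))   # (14, 14, 28, 28): second row, second column
```

So if you want to mask a specific image region, you can invert this mapping to pick the corresponding patch indices for your custom mask.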

amirrezadolatpour2000 commented 9 months ago

Hi, thank you for sharing the code. Why didn't you use the sine-cosine positional embedding mentioned in the paper?

IcarusWizard commented 9 months ago

I can't find where they mention using sin-cos positional embedding in the paper. Actually, the original ViT paper clearly states that a "learned" positional encoding is added after patchification. Also, for images it is not strictly necessary to use sin-cos positional encoding, since there is no extrapolation beyond the trained sequence length. Could you point out where you read it?
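For reference, a learned positional encoding in the ViT style is just a trainable parameter, one vector per token (the sizes here are ViT-Base-like assumptions for illustration):

```python
import torch
import torch.nn as nn

# One trainable embedding per patch token (16x16 grid) plus the class token;
# 768 is the ViT-Base embedding dimension.
pos_embed = nn.Parameter(torch.zeros(1, 16 * 16 + 1, 768))
nn.init.trunc_normal_(pos_embed, std=0.02)
```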

amirrezadolatpour2000 commented 9 months ago

Sure, in the paper https://arxiv.org/abs/2111.06377, on page 11, first paragraph. [screenshot of the paragraph attached]

IcarusWizard commented 9 months ago

Ah, I see. Thanks for the reference. I didn't pay much attention to this detail. But, as I said, I don't think it will make a large difference to the result. Feel free to experiment with it.

IcarusWizard commented 9 months ago

Also, I just checked their official code, and they don't even follow this detail. The code uses the ViT model from timm, which follows the ViT paper and uses a learned positional encoding.

amirrezadolatpour2000 commented 9 months ago

See https://github.com/facebookresearch/mae/blob/main/models_mae.py. You can see that they use a frozen positional embedding built with the sine-cosine approach.
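In the spirit of that code, here is a simplified 1D sketch (the official repo builds a 2D grid version; the dimensions below are assumptions for the example):

```python
import torch

def sincos_pos_embed_1d(num_positions: int, dim: int) -> torch.Tensor:
    """Fixed 1D sine-cosine positional embedding; dim must be even."""
    position = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)
    omega = torch.arange(dim // 2, dtype=torch.float32) / (dim // 2)
    omega = 1.0 / 10000 ** omega                       # frequencies, (dim/2,)
    angles = position * omega                          # (num_positions, dim/2)
    return torch.cat([angles.sin(), angles.cos()], dim=1)

# Frozen: computed once and excluded from gradient updates.
pos_embed = sincos_pos_embed_1d(16 * 16, 768)
pos_embed.requires_grad_(False)
```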

IcarusWizard commented 9 months ago

Ah, thanks for the correction. I had looked at the wrong file. Then I am not sure why they chose not to follow the ViT architecture precisely.

amirrezadolatpour2000 commented 9 months ago

Based on what I have studied, there are no hard rules for choosing the positional embedding. However, I want to try the sine-cosine approach and see the result; if I test it, I will let you know. I also want to be sure that this implementation covers the other details mentioned in the paper. I have checked it myself, but I would like to confirm.

IcarusWizard commented 9 months ago

Oh, I don't think I followed all the details of the paper precisely. As stated in the README, the purpose of this code is only to verify the idea of MAE, not to replicate the paper exactly. For example, I don't think I implemented the per-patch normalization for the reconstruction loss. There could be more details that I missed.
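For anyone who wants to add it, a rough sketch of that per-patch normalization (the tensor shapes are assumptions for the example, not this repo's layout):

```python
import torch

def normalized_pixel_loss(pred, target, mask, eps=1e-6):
    """MSE on masked patches, with each target patch normalized by its own
    mean and variance, as described in the MAE paper.
    pred, target: (batch, num_patches, patch_pixels); mask: (batch, num_patches),
    1 where the patch was masked (only those patches contribute to the loss)."""
    mean = target.mean(dim=-1, keepdim=True)
    var = target.var(dim=-1, keepdim=True)
    target = (target - mean) / (var + eps).sqrt()
    loss = ((pred - target) ** 2).mean(dim=-1)   # per-patch MSE
    return (loss * mask).sum() / mask.sum()      # average over masked patches
```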

hugoWR commented 5 months ago

In my own experiments, it appears that using a frozen sine-cosine positional embedding speeds up learning quite significantly. I guess it makes sense, because that's one thing the network doesn't have to learn, so it can focus on reconstructing the right texture.

Anyway, I just wanted to let you know. Great repo otherwise!