Hi, thanks for sharing your awesome work.
In models_mae.py, the initialization of the PatchEmbed's conv looks like this:
# initialize patch_embed like nn.Linear (instead of nn.Conv2d)
w = self.patch_embed.proj.weight.data
torch.nn.init.xavier_uniform_(w.view([w.shape[0], -1]))
As written in the comment, the conv weights are intentionally flattened before initialization.
So why do you reshape the conv weights into nn.Linear's shape before initializing them?
Is there any advantage to doing so?
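For reference, here is a minimal sketch of what the flattening changes (the shapes assume a ViT-Large patch embed with embed_dim=1024, in_chans=3, and 16x16 patches; those numbers are only illustrative, not taken from your config). xavier_uniform_ derives its bound from fan_in and fan_out, and on a 4D conv weight PyTorch multiplies both fans by the receptive-field size, whereas on the 2D view it just uses the matrix dimensions, the same as it would for an nn.Linear weight:

import torch

# conv-shaped weight vs. its flattened, nn.Linear-shaped counterpart
w4d = torch.empty(1024, 3, 16, 16)
w2d = torch.empty(1024, 3 * 16 * 16)

torch.nn.init.xavier_uniform_(w4d)
torch.nn.init.xavier_uniform_(w2d)

# xavier bound = sqrt(6 / (fan_in + fan_out)); on the 4D tensor both fans
# are multiplied by the 16*16 receptive field, so the bound shrinks a lot
print(w4d.abs().max())  # ~ sqrt(6 / (3*256 + 1024*256)) ~= 0.0048
print(w2d.abs().max())  # ~ sqrt(6 / (768 + 1024))       ~= 0.0579

So the two calls produce noticeably different weight scales, which is what prompted my question.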
Thanks in advance.