mx-mark / VideoTransformer-pytorch

PyTorch implementation of a collection of scalable Video Transformer benchmarks.

How to use the MaskFeat model on the ImageNet dataset #17

Closed TinnyFlames closed 2 years ago

TinnyFlames commented 2 years ago

Hi! Thanks for your code. For the MaskFeat code, is it possible to use it on the ImageNet dataset to reproduce the paper's results?

mx-mark commented 2 years ago

@TinnyFlames Not straightforward. We have tried pre-training ViT-Base for 100 epochs with HOG prediction on ImageNet-1k under the MAE architecture. The HOG targets give slightly higher accuracy than pixel-norm targets (82.7% vs 82.4%).

TinnyFlames commented 2 years ago

Hi @mx-mark! Thanks for your reply. Could you share suggestions on implementing the masking code? I tried to implement MaskFeat for ImageNet but was confused about the masking part. The image size is (224, 224, 3) but the HOG feature size is (14, 14, 108). If we randomly mask image patches, how do we mask the HOG features correctly, since the dimensions do not match? CubeMaskGenerator is a bit obscure for me to rewrite.
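The mismatch above resolves once the mask is defined on the patch grid rather than on pixels: with the usual ViT 16x16 patches, a 224x224 image has a 14x14 patch grid, and the HOG targets of shape (14, 14, 108) are computed per patch (e.g. 2x2 cells x 9 orientation bins x 3 channels = 108), so one boolean mask serves both tensors. A minimal sketch under these assumptions (the patch size and HOG layout are inferred from the shapes, not taken from the repo):

```python
import torch

# Assumption: ViT-style 16x16 patches, so 224 // 16 = 14 patches per side,
# matching the (14, 14, 108) HOG target grid.
patch_grid = 224 // 16                       # 14
mask_ratio = 0.4

# Sample one boolean mask over the patch grid; it indexes both the
# image patches and the per-patch HOG targets.
num_patches = patch_grid * patch_grid        # 196
num_masked = int(num_patches * mask_ratio)   # 78
perm = torch.randperm(num_patches)
mask = torch.zeros(num_patches, dtype=torch.bool)
mask[perm[:num_masked]] = True               # True = masked patch
mask_2d = mask.reshape(patch_grid, patch_grid)

hog_targets = torch.randn(patch_grid, patch_grid, 108)  # placeholder targets
masked_targets = hog_targets[mask_2d]        # select masked patches only
print(masked_targets.shape)                  # torch.Size([78, 108])
```

Because the mask lives on the shared 14x14 grid, no resizing of the HOG features is needed; the loss is then computed only on the selected rows.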

RachelTeamo commented 2 years ago

@mx-mark May I ask how you calculate your loss function on ImageNet? Is it the same as this? https://github.com/mx-mark/VideoTransformer-pytorch/blob/main/video_transformer.py#L899

mx-mark commented 2 years ago

@RachelTeamo Right, the loss minimizes the L2 distance between the predicted and original HOG features.
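A minimal sketch of such a loss (assumed shapes, not the repo's exact code): mean squared error between predicted and target per-patch HOG features, averaged over the masked patches only.

```python
import torch

def hog_loss(pred, target, mask):
    """L2 loss on HOG features, restricted to masked patches.

    pred, target: (B, N, 108) per-patch HOG features (assumed shapes)
    mask: (B, N) bool, True where the patch was masked
    """
    loss = (pred - target) ** 2                     # elementwise squared error
    loss = loss.mean(dim=-1)                        # (B, N): per-patch MSE
    return (loss * mask).sum() / mask.sum().clamp(min=1)

# Toy usage with random tensors.
pred = torch.randn(2, 196, 108)
target = torch.randn(2, 196, 108)
mask = torch.rand(2, 196) < 0.4
print(hog_loss(pred, target, mask))
```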

RachelTeamo commented 2 years ago

Thanks for your answer. I tried this loss function, but my loss becomes NaN. I traced the problem to blocks(x) and suspect the learning rate. May I ask about your lr setting? I use blr=1e-4 and batch size 1024 on 8 V100 GPUs.

mx-mark commented 2 years ago

@RachelTeamo The setting for what, pre-training or fine-tuning?

RachelTeamo commented 2 years ago

for pre-training

mx-mark commented 2 years ago

@RachelTeamo There are some related problems reported in the original MAE repo. You can check if these help: https://github.com/facebookresearch/mae/issues/65, https://github.com/facebookresearch/mae/issues/42

mx-mark commented 2 years ago

@RachelTeamo For my pre-training settings, the blr is 1.5e-4 and the effective batch size is 4096.

RachelTeamo commented 2 years ago

@mx-mark Oh, thanks a lot. I set blr=5e-5 and batch size 1024 (actual lr=2e-4); it is now at epoch 16. I will report my result when it finishes.

My earlier run with blr=1e-4 and batch 1024 (actual lr=4e-4) produced NaN loss.
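The "actual lr" figures quoted in this thread follow MAE's linear scaling rule, lr = blr x effective_batch_size / 256:

```python
# Linear lr scaling rule used by MAE: lr = base_lr * batch_size / 256.
def actual_lr(blr, batch_size):
    return blr * batch_size / 256

print(actual_lr(5e-5, 1024))    # 2e-4, the run above
print(actual_lr(1e-4, 1024))    # 4e-4, the run that diverged to NaN
print(actual_lr(1.5e-4, 4096))  # 2.4e-3, mx-mark's pre-training setting
```

Note that with mx-mark's blr=1.5e-4 at batch 4096 the actual lr (2.4e-3) is much larger than either of the runs above, so the NaN at actual lr=4e-4 likely has another contributing cause (see the linked MAE issues).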