pengzhiliang / MAE-pytorch

Unofficial PyTorch implementation of Masked Autoencoders Are Scalable Vision Learners

Where is the code for freezing the blocks that you don't want to finetune? #70

Closed A-zhudong closed 2 years ago

A-zhudong commented 2 years ago

Hello, thanks for your implementation.

I have read the main part of your code, but I didn't find the code that controls partial fine-tuning. Could you please tell me where that part is: in "run_class_finetuning.py", "modeling_finetune.py", or somewhere else?

Waiting for your reply, thank you.

SUNJIMENG commented 2 years ago

@A-zhudong I have the same question as you. Waiting for reply.

SoonFa commented 2 years ago

I have the same question as you. Waiting for reply.

pengzhiliang commented 2 years ago

Hello, sorry for the late reply!

As mentioned in BEiT and MAE, in the end-to-end fine-tuning procedure no blocks/layers need to be frozen. However, the LR of each block is different; you can find it here.
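
For readers landing on this issue, here is a minimal sketch of what layer-wise LR decay looks like. This is an illustration only, not the repo's exact run_class_finetuning.py code; the function name build_param_groups and the parameter-naming scheme (timm-style "blocks.N." prefixes) are assumptions.

```python
# Sketch of layer-wise LR decay as used in BEiT/MAE fine-tuning:
# earlier blocks get smaller learning rates than later ones.
import torch


def build_param_groups(model, base_lr, layer_decay=0.75, num_layers=12):
    """Assign each block its own LR: base_lr * layer_decay ** (distance from the head)."""

    def layer_id(name):
        # Patch embedding / cls token / pos embed sit at depth 0,
        # transformer blocks at 1..num_layers, norm/head at the top.
        if name.startswith("patch_embed") or name in ("cls_token", "pos_embed"):
            return 0
        if name.startswith("blocks."):
            return int(name.split(".")[1]) + 1
        return num_layers + 1

    param_groups = []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue  # frozen parameters are skipped entirely
        scale = layer_decay ** (num_layers + 1 - layer_id(name))
        param_groups.append({"params": [param], "lr": base_lr * scale})
    return param_groups


# Example usage (hypothetical values):
# optimizer = torch.optim.AdamW(build_param_groups(model, base_lr=1e-3), weight_decay=0.05)
```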

A-zhudong commented 2 years ago

Hello, sorry for the late reply!

As mentioned in BEiT and MAE, in the end-to-end fine-tuning procedure no blocks/layers need to be frozen. However, the LR of each block is different; you can find it here.

Thanks for your reply.

But it seems that they did freeze some blocks and tested the effect, as mentioned in "Masked Autoencoders Are Scalable Vision Learners".

Maybe it would be better if we implemented that part? (screenshot of the partial fine-tuning results from the MAE paper)

pengzhiliang commented 2 years ago

Oh, I am sorry that I forgot this.

You just need to freeze the blocks you want in the __init__ function.
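
For illustration, a hedged sketch of how one might freeze the early blocks for partial fine-tuning. The helper name freeze_early_blocks and the attribute names patch_embed / blocks are assumed from a typical timm-style VisionTransformer, not taken from this repo's code.

```python
# Sketch of partial fine-tuning: freeze the patch embedding and the first k
# transformer blocks so that only the later blocks and the head are updated.
import torch.nn as nn


def freeze_early_blocks(model: nn.Module, num_frozen_blocks: int):
    """Disable gradients for the patch embedding and the first `num_frozen_blocks` blocks."""
    for p in model.patch_embed.parameters():
        p.requires_grad = False
    for blk in model.blocks[:num_frozen_blocks]:
        for p in blk.parameters():
            p.requires_grad = False


# Example usage (hypothetical): call this inside VisionTransformer.__init__
# or right after building the model, before creating the optimizer.
# freeze_early_blocks(model, num_frozen_blocks=6)
```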

A-zhudong commented 2 years ago

OK, thank you.